Hindawi Publishing Corporation, ISRN Robotics, Volume 2013, Article ID 173703, 11 pages. http://dx.doi.org/10.5402/2013/173703

Research Article: Comparative Study between Robust Control of Robotic Manipulators by Static and Dynamic Neural Networks
Nadya Ghrab1 and Hichem Kallel2
1 National Institute of Applied Science and Technology (INSAT), Northern Urban Center, Mailbox 676, 1080 Tunis, Tunisia
2 Department of Physics and Electrical Engineering, National Institute of Applied Science and Technology (INSAT), Tunisia
Correspondence should be addressed to Hichem Kallel; goldenk@gnet.tn
Received 28 January 2013 Accepted 20 March 2013
Academic Editors: A. Bechar, A. Sabanovic, R. Safaric, K. Terashima, and C.-C. Tsai
Copyright © 2013 N. Ghrab and H. Kallel. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A comparative study between static and dynamic neural networks for robotic systems control is considered. Two approaches of neural robot control were selected, exposed, and compared. One uses a static neural network; the other uses a dynamic neural network. Both compensate the nonlinear modeling and uncertainties of robotic systems. The first approach is direct: it approximates the nonlinearities and uncertainties by a static neural network. The second approach is indirect: it uses a dynamic neural network for the identification of the robot state. The neural network weight tuning algorithms for the two approaches are developed based on Lyapunov theory. Simulation results show that the system equipped with the dynamic neural network controller has better tracking performance, has a faster response time, and is more reliable in the face of disturbances and robotic uncertainties.
1 Introduction
Several neural robot control approaches have been proposed in the literature. These approaches are classified into two main classes: direct and indirect neural control. If an approach requires prior identification of the controlled process model, it is called indirect control; otherwise, it is called direct control. For the direct class, many control architectures are mentioned in the literature [1–5]. For the second class, we cite neural control via dynamic neural network [6, 7], Model Reference Adaptive Control (MRAC) [8–10], Internal Model Control (IMC) [11–13], and predictive neural control [14, 15]. Both control classes are robust thanks to their ability to overcome the nonlinearities and uncertainties in the robot dynamics.
In this paper, the aim is to compare the performance of static neural networks to dynamic neural networks in robotic systems control. For this, two types of control from those already mentioned are selected, presented, and tested on a two-link robot. One uses a static neural network; the other uses a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it approximates the nonlinearities and uncertainties in the robot dynamics by a static neural network. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7], which uses a dynamic neural network for a dynamic identification of the robot state. Based on simulation results, a comparative study between these two approaches is presented using different performance criteria.
The rest of this paper is organized as follows. Section 2 presents the dynamic model of the robot manipulator. Section 3 describes the direct neural control proposed by Lewis [1]. Section 4 describes the indirect neural control proposed by Sanchez et al. [7]. Section 5 is devoted to the simulation results and to a comparative study between the two approaches described in Sections 3 and 4. Finally, Section 6 draws conclusions and sums up the whole paper.
2 Dynamic Model of the Robot Manipulator
In this section, the dynamic model of the robot manipulator is presented. The equation of the robot dynamics is

$$J(\theta)\ddot{\theta} + h(\theta,\dot{\theta})\dot{\theta} + G(\theta) + F(\dot{\theta}) + \tau_d = \tau \quad (1)$$
$\theta, \dot{\theta}, \ddot{\theta} \in \mathbb{R}^n$ denote the joint angle, the joint velocity, and the joint acceleration; $J(\theta) \in \mathbb{R}^{n\times n}$ denotes the inertia matrix; $h(\theta,\dot{\theta}) \in \mathbb{R}^{n\times n}$ denotes the centrifugal and Coriolis force matrix; $G(\theta) \in \mathbb{R}^n$ denotes the gravitational force vector; $F(\dot{\theta}) \in \mathbb{R}^n$ is the friction term, such that $F(\dot{\theta}) = F_v\dot{\theta} + F_c(\dot{\theta})$, where $F_c(\dot{\theta}) \in \mathbb{R}^n$ is the Coulomb term and $F_v \in \mathbb{R}^{n\times n}$ the viscous coefficient matrix; $\tau_d(t) \in \mathbb{R}^n$ represents disturbances; and $\tau(t) \in \mathbb{R}^n$ is the torque vector.
3 Direct Neural Controller via Static Neural Network
In this section, the approach of direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1], is briefly presented. This approach approximates the nonlinearities and uncertainties in the robot dynamics by a static neural network.
To make the dynamics of the robot manipulator defined in (1) follow a prescribed desired trajectory $\theta_d(t) \in \mathbb{R}^n$, the tracking error $e(t)$ and the filtered tracking error $r(t)$ are defined as follows:

$$e = \theta_d - \theta \quad (2)$$

$$r = \dot{e} + \Lambda e \quad (3)$$
$\Lambda > 0$ is a symmetric positive definite design parameter matrix. The dynamics of the robot (1) in terms of the filtered error (3) are as follows:

$$J(\theta)\dot{r}(t) = -h(\theta,\dot{\theta})r(t) - \tau(t) + f(x) + \tau_d(t) \quad (4)$$
where the unknown nonlinear robot function is defined as

$$f(x) = J(\theta)(\ddot{\theta}_d + \Lambda\dot{e}) + h(\theta,\dot{\theta})(\dot{\theta}_d + \Lambda e) + G(\theta) + F(\dot{\theta}) \quad (5)$$

with

$$x = \left[e^T\ \dot{e}^T\ \theta_d^T\ \dot{\theta}_d^T\ \ddot{\theta}_d^T\right]^T \quad (6)$$
3.1 Approximation of Nonlinearities and Uncertainties by a Static Neural Network. The universal function approximation property [16]: let $f(x)$ be a general smooth function from $\mathbb{R}^n$ to $\mathbb{R}^m$. Then it can be shown that, as long as $x$ is restricted to a compact set $S$ of $\mathbb{R}^n$, there exist weights and thresholds such that

$$f(x) = W^T\sigma(M^T x) + \varepsilon \quad (7)$$
It is difficult to determine the ideal neural network weights in matrices $W$ and $M$ that are required to best approximate a given nonlinear function $f(x)$. However, all one needs to know for control purposes is that, for a specified neural network size, some ideal approximating weights exist. Then an estimate of $f(x)$ can be given by

$$\hat{f}(x) = \hat{W}^T\sigma(\hat{M}^T x) \quad (8)$$
Figure 1: The static feedforward neural network architecture (inputs $x_1,\dots,x_m$; a hidden layer of $N$ neurons with activation $\sigma(\cdot)$ and threshold offsets $\theta_{Mj}$; outputs $f_1,\dots,f_n$; first-layer weights $M_{jk}$, second-layer weights $W_{ij}$).
The neural network architecture proposed for the approximation of nonlinearities and uncertainties in the robot dynamics is shown in Figure 1, where $\sigma(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function and $N$ is the number of hidden-layer neurons. The first-layer interconnection weights are denoted by $M_{jk}$ and the second-layer interconnection weights by $W_{ij}$. The threshold offsets are denoted by $\theta_{Mj}$.
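As a concrete illustration, the two-layer estimate (8) amounts to a single matrix-vector pass. The following is a minimal sketch in Python (the paper's simulations used MATLAB); the dimensions and random weights are hypothetical toy values, not the authors' settings:

```python
import numpy as np

def sigmoid(z):
    # elementwise sigmoid activation sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def nn_estimate(x, M_hat, W_hat):
    # forward pass of (8): f_hat(x) = W_hat^T sigma(M_hat^T x)
    return W_hat.T @ sigmoid(M_hat.T @ x)

# toy dimensions: input dim 10 (x of (6) for n = 2), N = 6 hidden neurons, n = 2 outputs
rng = np.random.default_rng(0)
M_hat = rng.normal(size=(10, 6))   # first-layer weights M_jk
W_hat = rng.normal(size=(6, 2))    # second-layer weights W_ij
x = rng.normal(size=10)
f_hat = nn_estimate(x, M_hat, W_hat)
```

Threshold offsets $\theta_{Mj}$ can be absorbed into $\hat{M}$ by appending a constant 1 to the input vector, which is why they are omitted above.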
3.2 Synthesis of the Control Law. A general sort of approximation-based controller is derived by setting

$$\tau(t) = \hat{f}(x) + k_v r(t) - v(t) \quad (9)$$

with $\hat{f}(x)$ being the approximation of $f(x)$ by the neural network, $k_v r(t)$ an outer PD tracking loop, and $v(t)$ an auxiliary signal to provide robustness. The proposed neural network control structure is shown in Figure 2.
Substituting the control law (9) into (4), the closed-loop error dynamics become

$$J(\theta)\dot{r}(t) = -(k_v + h(\theta,\dot{\theta}))r(t) + \tilde{f}(x) + \tau_d(t) + v(t) \quad (10)$$
Let us define the functional approximation error

$$\tilde{f} = f - \hat{f} \quad (11)$$

and the weight approximation errors

$$\tilde{W} = W - \hat{W}, \qquad \tilde{M} = M - \hat{M} \quad (12)$$
The Lyapunov function proposed for the stabilization of the error dynamics is

$$V(r,\tilde{W},\tilde{M}) = \frac{1}{2}r^T J(\theta) r + \frac{1}{2}\mathrm{tr}\{\tilde{W}^T F^{-1}\tilde{W}\} + \frac{1}{2}\mathrm{tr}\{\tilde{M}^T G^{-1}\tilde{M}\} \quad (13)$$

with constant matrices $F = F^T > 0$ and $G = G^T > 0$.
The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(r,\tilde{W},\tilde{M}) < 0$, are satisfied.
Figure 2: Direct neural controller via static neural network.
(i) The neural network weight tuning algorithms are

$$\dot{\hat{W}} = F\hat{\sigma}r^T - F\hat{\sigma}'\hat{M}^T x\,r^T - kF\|r\|\hat{W}$$
$$\dot{\hat{M}} = Gx\left(\hat{\sigma}'^T\hat{W}r\right)^T - kG\|r\|\hat{M} \quad (14)$$

with $k > 0$ a small scalar design parameter and $\hat{\sigma}'$ the Jacobian of $\sigma$ evaluated at $\hat{M}^T x$.
(ii) The robustifying term is

$$v(t) = -k_Z\left(\|\hat{Z}\|_F + Z_B\right)r \quad (15)$$

with $\hat{Z} = \begin{bmatrix}\hat{W} & 0\\ 0 & \hat{M}\end{bmatrix}$ and $\|Z\|_F < Z_B$, such that $Z_B$ is the bound of the ideal weights; $k_Z > 0$ is a positive scalar parameter.
(iii) The PD controller gain satisfies

$$k_{v\min} > \frac{C_0 + k\,C_3^2/4}{r} \quad (16)$$

where $C_0$ and $C_3$ are bounding constants of the closed-loop error system [1]. In practice, the tracking error can be kept as small as desired by increasing the gain $k_v$.
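The pieces above (filtered error (3), estimate (8), control law (9), robustifying term (15), and tuning laws (14)) can be assembled into one simulation step. The following Python sketch is illustrative only: the Euler integration of the weight dynamics, the toy dimensions, and the random states are assumptions, not the authors' MATLAB code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def direct_nn_step(e, e_dot, x, W_hat, M_hat, Lam, Kv, F, G, k, kZ, ZB, dt):
    """One control/adaptation step of the direct scheme: (3), (8), (9), (14), (15)."""
    r = e_dot + Lam @ e                                 # filtered tracking error (3)
    s = sigmoid(M_hat.T @ x)                            # hidden-layer activations
    f_hat = W_hat.T @ s                                 # NN estimate (8)
    ZF = np.sqrt(np.sum(W_hat**2) + np.sum(M_hat**2))   # ||Z_hat||_F
    v = -kZ * (ZF + ZB) * r                             # robustifying signal (15)
    tau = f_hat + Kv @ r - v                            # control torque (9)
    # tuning laws (14); for the sigmoid, sigma' = diag(s)(I - diag(s))
    sp = s * (1.0 - s)                                  # diagonal of the Jacobian
    nr = np.linalg.norm(r)
    W_dot = F @ (np.outer(s, r) - np.outer(sp * (M_hat.T @ x), r) - k * nr * W_hat)
    M_dot = G @ (np.outer(x, sp * (W_hat @ r)) - k * nr * M_hat)
    return tau, W_hat + dt * W_dot, M_hat + dt * M_dot  # Euler step on the weights

# toy two-link setup: n = 2 joints, input dim 10, N = 6 hidden neurons
rng = np.random.default_rng(1)
n, m, N = 2, 10, 6
tau, W1, M1 = direct_nn_step(
    e=rng.normal(size=n), e_dot=rng.normal(size=n), x=rng.normal(size=m),
    W_hat=np.zeros((N, n)), M_hat=rng.normal(size=(m, N)),
    Lam=5.0 * np.eye(n), Kv=10.0 * np.eye(n),
    F=10.0 * np.eye(N), G=10.0 * np.eye(m), k=0.1, kZ=0.1, ZB=5.0, dt=1e-3)
```

Note that, as in the paper, $\hat{W}$ starts at zero, so the controller initially behaves as a pure PD loop with the robustifying term, and the network takes over as the weights adapt.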
4 Indirect Neural Control via High-Order Dynamic Neural Network
In this section, the second approach, an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7], is briefly presented. This approach uses a dynamic neural network for the online identification of the robot state. The proposed neural network control structure is shown in Figure 3.
The equation of the robot dynamics defined in (1), under state representation, is

$$\dot{X} = f(X) + g(X)\tau(t) \quad (17)$$
Figure 3: Indirect neural control via a high-order dynamic neural network (the control law produces $\tau$ for the robot system; the dynamic neural network yields $X_N$; the weight adaptation is driven by the tracking error $e_p$ between the robot state $X$ and the reference trajectory $X_r$).
with

$$f(X) = \begin{bmatrix}0 & I\\ 0 & 0\end{bmatrix}\begin{bmatrix}\theta\\ \dot{\theta}\end{bmatrix} + \begin{bmatrix}0\\ -J(\theta)^{-1}\left(h(\theta,\dot{\theta})\dot{\theta} + G(\theta) + F(\dot{\theta}) + \tau_d(t)\right)\end{bmatrix}$$

$$g(X) = \begin{bmatrix}0\\ J(\theta)^{-1}\end{bmatrix}, \qquad X = \left[\theta\ \dot{\theta}\right]^T \quad (18)$$
4.1 Identification of the Robot State by the High-Order Dynamic Neural Network. The proposed neural network structure for the identification of the robot state is shown in Figure 4. $A_N = -\lambda_i I_{2n\times 2n}$ is the state matrix of the neural network, with $\lambda_i > 0$ for $i = 1,\dots,2n$; $X_N = [x_{N1} \cdots x_{N2n}]^T \in \mathbb{R}^{2n}$ is the state vector of the neural network; and $\tau(t) \in \mathbb{R}^n$ is the torque vector.

The dynamics of this neural network result from the state feedback of $X_N$ around a neural structure formed by two static subnetworks, RN1 and RN2, shown in Figure 5 [7, 17]:
$$\mathrm{RN1}(X_N) = W_f Z(X_N) \quad (19)$$

$W_f = [w_{f1} \cdots w_{f2n}]^T \in \mathbb{R}^{2n\times L}$ is the weight matrix of RN1.
Figure 4: The high-order dynamic neural network architecture (an integrator with state feedback $A_N X_N$, fed by $\mathrm{RN1}(X_N)$ and $\mathrm{RN2}(\tau)$, producing the state $X_N$).
$Z(X_N) = [z_1 \cdots z_L]^T \in \mathbb{R}^L$ is the nonlinear operator which defines the high-order connections; the $z_i$ are called high-order connections and $L$ is their number, with $z_i = \prod_{j\in I_i} y_j^{d_j(i)}$, $j = 1,\dots,2n$, $i = 1,\dots,L$, where the $d_j(i)$ are positive integers, $y_j = \varphi(x_{Nj})$, and $\varphi(\cdot): \mathbb{R}\to\mathbb{R}$ is the activation function of RN1, here selected as the hyperbolic tangent.

The outputs of RN1 are denoted by $C = [c_1 \cdots c_{2n}]^T$.
$$\mathrm{RN2}(\tau) = W_g\phi(\tau(t)) = W_g\tau(t) \quad (20)$$

$\phi(\cdot): \mathbb{R}\to\mathbb{R}$ is the activation function of RN2, here selected as the linear one; $W_g = [w_{g1} \cdots w_{g2n}]^T \in \mathbb{R}^{2n\times n}$ is the weight matrix of RN2. The outputs of RN2 are denoted by $D = [d_1 \cdots d_{2n}]^T$.
Let $\hat{W}_f$ and $\hat{W}_g$ denote the estimated values of the unknown weight matrices $W_f$ and $W_g$, respectively. The weight estimation errors are

$$\tilde{W}_f = W_f - \hat{W}_f, \qquad \tilde{W}_g = W_g - \hat{W}_g \quad (21)$$
The equation of this neural network's dynamics is defined by

$$\dot{X}_N = A_N X_N + W_f Z(X_N) + W_g\tau(t) \quad (22)$$
The identification of the robot dynamics by the neural network is ensured by the following pole placement:

$$\dot{X}_N - \dot{X} = -(X_N - X) \quad (23)$$

Equation (23) is equivalent to

$$\dot{X} = A_N X_N + W_f Z(X_N) + W_g\tau(t) + (X_N - X) \quad (24)$$
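Numerically, the identifier state of (22) is propagated by integrating a linear stable part $A_N X_N$ plus the two subnetwork outputs. A minimal Python sketch with a forward-Euler step (the integration scheme and toy dimensions are assumptions; the paper does not state them):

```python
import numpy as np

def identifier_step(X_N, tau, W_f, W_g, Z, A_N, dt):
    """Euler step of the identifier dynamics (22):
    X_N_dot = A_N X_N + W_f Z(X_N) + W_g tau(t)."""
    X_N_dot = A_N @ X_N + W_f @ Z(X_N) + W_g @ tau
    return X_N + dt * X_N_dot

# sanity check with the high-order terms and torque switched off:
# the state matrix A_N = -lambda I makes the identifier state decay to zero
A_N = -300.0 * np.eye(4)                       # 2n = 4 states, lambda_i = 300
Z = lambda x: np.zeros(13)                     # L = 13 high-order connections, zeroed
X_N = np.ones(4)
for _ in range(100):
    X_N = identifier_step(X_N, np.zeros(2), np.zeros((4, 13)),
                          np.zeros((4, 2)), Z, A_N, dt=1e-3)
```

With $\lambda_i = 300$ and $dt = 10^{-3}$ the contraction factor per step is $0.7$, so the state decays geometrically, matching the stable pole placement of (23).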
4.2 Synthesis of the Control Law. It is desired to design a robust controller which enforces asymptotic stability of the tracking error between the system and the reference signal. The equation of the reference signal dynamics is

$$\dot{X}_r = f_r(X_r,\tau_r(t)) \quad (25)$$

$X_r$ is the reference signal state vector, $\tau_r(t)$ is the desired torque vector, and $f_r(\cdot)$ is the vector field of the reference dynamics.
Let us denote the tracking error between the system and the reference signal by

$$e_p = X - X_r \quad (26)$$

To ensure the desired dynamics, the asymptotic stability of the tracking error must be ensured. The time derivative of (26) is

$$\dot{e}_p = A_N X_N + W_f Z(X_N) + W_g\tau(t) + (X_N - X) - \dot{X}_r \quad (27)$$
Now the terms $\hat{W}_f Z(X_r)$, $A_N e_p$, $A_N X_r$, and $X_r$ are added and subtracted in (27), so that

$$\dot{e}_p = A_N e_p + W_f Z(X_N) + W_g\tau(t) + \left(-\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X\right) - A_N e_p - \hat{W}_f Z(X_r) - A_N X_r - X_r + X_N + A_N X_N \quad (28)$$
It is assumed that there exists a function $\alpha_r(t,\hat{W}_f,\hat{W}_g)$ such that

$$\alpha_r(t,\hat{W}_f,\hat{W}_g) = -(\hat{W}_g)^+\left(-\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X\right) \quad (29)$$

$(\hat{W}_g)^+$ is the pseudoinverse of $\hat{W}_g$, calculated as $(\hat{W}_g)^+ = (\hat{W}_g^T\hat{W}_g)^{-1}\hat{W}_g^T$.
Then the term $\hat{W}_g\alpha_r(t,\hat{W}_f,\hat{W}_g)$ is added and subtracted in (28), so that we obtain

$$\dot{e}_p = A_N e_p + W_f Z(X_N) + W_g\tau(t) - \hat{W}_g\alpha_r(t,\hat{W}_f,\hat{W}_g) - A_N(X - X_r) - \hat{W}_f Z(X_r) + (A_N + I)(X_N - X_r) \quad (30)$$
Let us define

$$\tilde{\tau} = \tau - \alpha_r(t,\hat{W}_f,\hat{W}_g) = \tau_1 + \tau_2 \quad (31)$$

so that (30) reduces to

$$\dot{e}_p = A_N e_p + \tilde{W}_f Z(X_N) + \hat{W}_f\left(Z(X_N) - Z(X_r)\right) + \tilde{W}_g\tau(t) + \hat{W}_g\tilde{\tau}(t) - A_N(X - X_r) + (A_N + I)(X_N - X_r) \quad (32)$$
Then the terms $Z(X)$ and $X$ are added and subtracted in (32), so that we obtain

$$\dot{e}_p = A_N e_p + \tilde{W}_f Z(X_N) + \hat{W}_f\left(Z(X_N) - Z(X) + Z(X) - Z(X_r)\right) + \tilde{W}_g\tau(t) + \hat{W}_g\tilde{\tau}(t) - A_N(X - X_r) + (A_N + I)(X_N - X + X - X_r) \quad (33)$$
Figure 5: The two subnets RN1 and RN2 included in the dynamic neural network (RN1 maps the inputs $x_{N1},\dots,x_{N2n}$ through $\varphi(\cdot)$ and the high-order connections $Z(\cdot)$, weighted by $W_{fij}$, to the outputs $c_1,\dots,c_{2n}$; RN2 maps the inputs $\tau_1,\dots,\tau_n$ through $\phi(\cdot)$, weighted by $W_{gij}$, to the outputs $d_1,\dots,d_{2n}$).
Then, by defining

$$\tau_1 = (\hat{W}_g)^+\left(-\hat{W}_f\left(Z(X_N) - Z(X)\right) - (A_N + I)(X_N - X)\right) \quad (34)$$

equation (33) reduces to

$$\dot{e}_p = (A_N + I)e_p + \tilde{W}_f Z(X_N) + \hat{W}_f\left(Z(X) - Z(X_r)\right) + \tilde{W}_g\tau(t) + \hat{W}_g\tau_2(t) \quad (35)$$
Then the tracking problem is reduced to a stabilization problem for the error dynamics defined in (35).
The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is

$$V(e_p,\tilde{W}_f,\tilde{W}_g) = \frac{1}{2}\|e_p\|^2 + \frac{1}{2}\mathrm{tr}\{\tilde{W}_f^T\Gamma^{-1}\tilde{W}_f\} + \frac{1}{2}\mathrm{tr}\{\tilde{W}_g^T\Gamma_g^{-1}\tilde{W}_g\} \quad (36)$$

$\Gamma = \mathrm{diag}\{\gamma_1,\dots,\gamma_L\}$ and $\Gamma_g = \mathrm{diag}\{\gamma_{g1},\dots,\gamma_{gn}\}$ are symmetric positive definite diagonal matrices.
Let us define the function $\phi_Z = Z(X) - Z(X_r)$ and let $L_{\phi_Z}$ be its Lipschitz constant. The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(e_p,\tilde{W}_f,\tilde{W}_g) < 0$, are satisfied.
(i) The control law $\tau_2$ is

$$\tau_2 = -\mu(\hat{W}_g)^+\left(1 + L_{\phi_Z}^2\|\hat{W}_f\|^2\right)e_p \quad (37)$$
optimal with respect to the following cost [18]:

$$J(\tilde{\tau}) = \lim_{t\to\infty}\left[2\beta V + \int_0^t\left(l(e_p,\tilde{W}_f,\tilde{W}_g) + \tau_2^T R(e_p,\tilde{W}_f,\tilde{W}_g)\tau_2\right)dt\right] \quad (38)$$
(ii) The neural network weight tuning algorithms are

$$\dot{\hat{w}}_{f_{ij}} = -e_{p_i}\gamma_j z_j(X_N) \quad \text{(RN1 weights tuning algorithm)}$$
$$\dot{\hat{w}}_{g_{ij}} = -e_{p_i}\gamma_{g_j}\tau_j(t) \quad \text{(RN2 weights tuning algorithm)} \quad (39)$$

(iii) The parameter of the neural network state matrix is $\lambda_i > 1$.

(iv) The parameter which manages the control law $\tau_2$ is $\mu > 1$.
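The indirect control torque is the sum $\tau = \alpha_r + \tau_1 + \tau_2$ from (29), (34), and (37), with the gradient-like updates (39). The following Python sketch assembles these pieces; the toy dimensions, random states, and the simple high-order map `Z` are hypothetical, and the left pseudoinverse follows the formula given after (29):

```python
import numpy as np

def left_pinv(Wg):
    # (W_g)^+ = (W_g^T W_g)^{-1} W_g^T, the left pseudoinverse used in (29)
    return np.linalg.solve(Wg.T @ Wg, Wg.T)

def indirect_control_step(X, X_r, Xr_dot, X_N, W_f, W_g, Z, A_N, mu, L_phiZ):
    """Assemble tau = alpha_r + tau_1 + tau_2 from (29), (34), (37)."""
    e_p = X - X_r                                                        # (26)
    Wg_p = left_pinv(W_g)
    I = np.eye(len(X))
    alpha_r = -Wg_p @ (-Xr_dot + A_N @ X_r + W_f @ Z(X_r) + X_r - X)     # (29)
    tau1 = Wg_p @ (-W_f @ (Z(X_N) - Z(X)) - (A_N + I) @ (X_N - X))       # (34)
    tau2 = -mu * (1.0 + L_phiZ**2 * np.sum(W_f**2)) * (Wg_p @ e_p)       # (37)
    return alpha_r + tau1 + tau2, e_p

def weight_updates(e_p, Z_XN, tau, gam, gam_g):
    """Tuning laws (39), written as full-matrix outer products."""
    return -np.outer(e_p, gam * Z_XN), -np.outer(e_p, gam_g * tau)

# toy dimensions: 2n = 4 states, n = 2 torques, L = 13 high-order connections
rng = np.random.default_rng(2)
Z = lambda x: np.tanh(np.arange(1.0, 14.0) * x.sum() / 10.0)   # placeholder Z(.)
tau, e_p = indirect_control_step(
    X=rng.normal(size=4), X_r=rng.normal(size=4), Xr_dot=np.zeros(4),
    X_N=rng.normal(size=4), W_f=0.1 * rng.normal(size=(4, 13)),
    W_g=rng.normal(size=(4, 2)), Z=Z, A_N=-300.0 * np.eye(4),
    mu=1000.0, L_phiZ=0.02)
dWf, dWg = weight_updates(e_p, Z(rng.normal(size=4)), tau,
                          0.01 * np.ones(13), 1e-4 * np.ones(2))
```

In (37), $\|\hat{W}_f\|^2$ is taken as the squared Frobenius norm, hence `np.sum(W_f**2)`.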
5 Simulation Results and Comparative Study on a Two-Link Robot
In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory tracking problem for a robot manipulator model is considered. The dynamics of a two-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written as

$$J(\theta)\ddot{\theta} + H(\theta,\dot{\theta}) + G(\theta) + F(\dot{\theta}) + \tau_d(t) = D\tau(t) \quad (40)$$
with

$$J(\theta) = \begin{bmatrix}I_1 + m_1 k_1^2 + m_2 l_1^2 & m_2 l_1 k_2\cos(\theta_1-\theta_2)\\ m_2 l_1 k_2\cos(\theta_1-\theta_2) & I_2 + m_2 k_2^2\end{bmatrix}$$

$$H(\theta,\dot{\theta}) = \begin{bmatrix}0 & m_2 l_1 k_2\sin(\theta_1-\theta_2)\\ -m_2 l_1 k_2\sin(\theta_1-\theta_2) & 0\end{bmatrix}\begin{bmatrix}\dot{\theta}_1^2\\ \dot{\theta}_2^2\end{bmatrix}$$

$$G(\theta) = g\begin{bmatrix}(m_2 l_1 + m_1 k_1)\cos\theta_1\\ m_2 k_2\cos\theta_2\end{bmatrix}, \qquad D = \begin{bmatrix}1 & -1\\ 0 & 1\end{bmatrix}$$

$$F(\dot{\theta}) = \mathrm{diag}(2,2)\,\dot{\theta} + 1.5\,\mathrm{sign}(\dot{\theta}) \quad (41)$$
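The model terms of (40)-(41) can be evaluated directly from the numerical parameters of Table 1. A minimal Python sketch (the paper's simulations used MATLAB; the function name and interface are illustrative):

```python
import numpy as np

# parameters from Table 1 (lengths in m, masses in kg, inertias in kg·m^2, g in N/kg)
l1, l2 = 0.25, 0.16
m1, m2 = 9.5, 5.0
k1, k2 = 0.125, 0.08
I1, I2 = 0.0043, 0.0061
g = 9.8

def robot_terms(th, th_dot):
    """Dynamics terms of (40)-(41) for the 2-link arm (absolute joint angles)."""
    th1, th2 = th
    c12, s12 = np.cos(th1 - th2), np.sin(th1 - th2)
    J = np.array([[I1 + m1 * k1**2 + m2 * l1**2, m2 * l1 * k2 * c12],
                  [m2 * l1 * k2 * c12,           I2 + m2 * k2**2]])
    H = np.array([[0.0,                 m2 * l1 * k2 * s12],
                  [-m2 * l1 * k2 * s12, 0.0]]) @ (th_dot**2)
    G = g * np.array([(m2 * l1 + m1 * k1) * np.cos(th1), m2 * k2 * np.cos(th2)])
    F = 2.0 * th_dot + 1.5 * np.sign(th_dot)      # viscous + Coulomb friction
    D = np.array([[1.0, -1.0], [0.0, 1.0]])
    return J, H, G, F, D

J0, H0, G0, F0, D0 = robot_terms(np.zeros(2), np.zeros(2))
```

At rest, the inertia matrix is symmetric and both the velocity-dependent terms $H$ and $F$ vanish, which is a quick consistency check on the implementation.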
The robot model parameters are shown in Table 1. The simulations of the variation of the positions and torques exerted at each of the two joints, as well as of the neural network weights, were carried out over a period of 10 seconds.

The initial conditions are selected as follows:

$$\theta(0) = [20°\ 20°]^T, \qquad \dot{\theta}(0) = [0\ 0]^T \quad (42)$$
The reference signal is

$$\theta_d(t) = [45°\ 45°]^T, \qquad \dot{\theta}_d(t) = [0\ 0]^T \quad (43)$$
The external disturbances are

$$\tau_d = \begin{cases}[0.1\ {-0.1}]^T & \text{if } t \le 1\,\mathrm{s}\\ [-0.2\ 0.2]^T & \text{if } t > 1\,\mathrm{s}\end{cases} \quad (44)$$

A variation of the viscous friction coefficients by an error of 10% is applied at time $t = 1$ s, and a variation of the masses of the two bodies of the arm by an error of 10% at time $t = 2$ s.
The neural network controller parameters are selected for each of the two approaches as follows.

(i) For the first approach, neural control for improvement of a classic proportional-derivative (PD) controller: after several simulation tests, suitable values for the initialization of the neural network weights and the various parameters were found as follows:

$$K_v = 10 \times I_{2\times 2}, \quad k_Z = 0.1, \quad Z_B = 5, \quad F = 10 \times I_{6\times 6}, \quad G = 10 \times I_{10\times 10}, \quad k = 0.1 \quad (45)$$

The number of neurons in the hidden layer of the neural network is $N = 6$. The activation function is the sigmoid $\sigma(x) = 1/(1 + e^{-x})$, and its Jacobian is $\sigma'(x) = \mathrm{diag}\{\sigma(x)\}[I - \mathrm{diag}\{\sigma(x)\}]$. No initial neural network training phase was needed; the neural network weights were arbitrarily initialized at zero in this simulation.
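The Jacobian expression $\sigma'(x) = \mathrm{diag}\{\sigma(x)\}[I - \mathrm{diag}\{\sigma(x)\}]$ used in the tuning laws (14) can be verified numerically against a finite-difference derivative. A short Python check (the test point is arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_jacobian(z):
    # sigma'(x) = diag{sigma(x)} [I - diag{sigma(x)}]: diagonal, entries s(1 - s)
    s = sigmoid(z)
    return np.diag(s * (1.0 - s))

# central finite-difference check of the Jacobian at an arbitrary test point
z = np.array([0.3, -1.2, 0.7])
eps = 1e-6
num = np.zeros((3, 3))
for j in range(3):
    dz = np.zeros(3)
    dz[j] = eps
    num[:, j] = (sigmoid(z + dz) - sigmoid(z - dz)) / (2 * eps)
```

Because the sigmoid acts elementwise, the Jacobian is diagonal, which is what makes the tuning laws (14) cheap to evaluate.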
(ii) For the second approach, neural control via a high-order dynamic neural network: the initial state vector of the neural network is $X_N(0) = [0 \cdots 0]^T$ and the number of high-order connections is $L = 13$. The activation function of the subnet RN1 is the hyperbolic tangent $\varphi(x_N) = (e^{2x_N} - 1)/(e^{2x_N} + 1)$. Let $X_N = [x_{N1} \cdots x_{N4}]^T$ and

$$N_i = \tanh(x_{Ni}), \quad i = 1,\dots,4 \quad (46)$$

so that the high-order connections of the neural network are defined as

$$Z(X_N) = [N_1\ N_2\ N_3\ N_4\ N_1N_2\ N_1N_3\ N_1N_4\ N_2N_3\ N_2N_4\ N_3N_4\ N_1N_2N_3\ N_1N_2N_4\ N_2N_3N_4]^T \quad (47)$$
Table 1: Parameters of the 2-link rigid robot model.

| Parameter | Designation | Value | Unit |
|---|---|---|---|
| Length of link 1 | $l_1$ | 0.25 | m |
| Length of link 2 | $l_2$ | 0.16 | m |
| Mass of link 1 | $m_1$ | 9.5 | kg |
| Mass of link 2 | $m_2$ | 5 | kg |
| Position of center of gravity of link 1 | $k_1$ | 0.125 | m |
| Position of center of gravity of link 2 | $k_2$ | 0.08 | m |
| Inertia of link 1 | $I_1$ | 0.0043 | kg·m² |
| Inertia of link 2 | $I_2$ | 0.0061 | kg·m² |
| Gravitational acceleration | $g$ | 9.8 | N·kg⁻¹ |
The activation function of the subnet RN2 is the linear function $\phi(\tau(t)) = \tau(t)$.

For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary. After several simulation tests, suitable values for the initialization of the weights and the various parameters were found as follows:

$$\Gamma = 0.01 \times I_{13\times 13}, \quad \Gamma_g = 0.0001 \times I_{2\times 2}, \quad \lambda_i = 300, \quad A_N = -300\,I_{4\times 4}, \quad L_{\phi_Z} = 0.02, \quad \mu = 1000 \quad (48)$$

The suitable values for the initialization of the weights are shown in Figure 11.
5.1 Simulation Results of the First Approach of Control: The Neural Control for Improvement of a Classic Proportional-Derivative Controller. Each line that appears in the two diagrams of Figure 8 represents the variation of one weight value, $W_{ij}$ in the update of weight matrix $W$ or $M_{jk}$ in the update of weight matrix $M$.
Analysis of Results. The analysis of the simulation results of the response of the system equipped with the NN controller for improvement of a classic PD controller, seen in Figure 6, shows that this control law can ensure the stability of the system despite the presence of disturbances and robotic uncertainties.

However, due to disturbances and robotic uncertainties, the peak in the torque response (Figure 7) makes this control strategy unreliable.
5.2 Simulation Results of the Second Approach of Control: The Neural Control via a High-Order Dynamic Neural Network. For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary.

(i) The Learning Step. Each line that appears in the two diagrams of Figure 11 represents the variation of one weight value, $W_{fij}$
Figure 6: Response of the joint angles of the system equipped with the NN controller for improvement of a classic PD controller: (a) tracking performance of joint 1; (b) tracking performance of joint 2 (actual versus desired, over 10 s).
Figure 7: Response of the torques of the system equipped with the NN controller for improvement of a classic PD controller: (a) torque 1 at joint 1; (b) torque 2 at joint 2 (over 10 s).
in the update of weight matrix $W_f$, or $W_{gij}$ in the update of weight matrix $W_g$.
Analysis of Results. The severe peaks in the torque and joint angle responses due to disturbances and robotic uncertainties, seen in Figures 9 and 10, were corrected thanks to the neural network adaptation. At the end of this learning step, the best weight values, seen in Figure 11, are obtained.

(ii) Final Simulation Results. The best weight values obtained at the end of the learning step are used as initial values for this step.
Each line that appears in the two diagrams of Figure 14 represents the variation of one weight value, $W_{fij}$ in the update of weight matrix $W_f$ or $W_{gij}$ in the update of weight matrix $W_g$.
Analysis of Results. The analysis of the simulation results of the response of the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can ensure the stability of the system despite the presence of disturbances and robotic uncertainties.

The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values. After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torque response seen in Figure 13 (unlike the peak in the torque response seen in Figure 7).
Figure 8: Weight updates of the NN controller for improvement of a classic PD controller: (a) update of weights $W_{ij}$; (b) update of weights $M_{jk}$ (over 10 s).
Figure 9: Response of the joint angles of the system equipped with the NN controller via a high-order dynamic neural network (learning step): (a) joint 1; (b) joint 2 (actual versus desired).
Figure 10: Response of the torques of the system equipped with the NN controller via a high-order dynamic neural network (learning step): (a) torque 1 at joint 1; (b) torque 2 at joint 2.
Figure 11: Weight updates of the NN controller via a high-order dynamic neural network (learning step): best values of the weights $W_{fij}$ and $W_{gij}$.
Figure 12: Response of the joint angles of the system equipped with the NN controller via a high-order dynamic neural network: (a) joint 1; (b) joint 2 (actual versus desired).
In Figure 14, it is easy to see that the weights have indeed reached their best values during the learning step and that they remain nearly constant.
5.3 Comparative Study. The advantages and limitations of each neural network control approach are presented in Table 2. The run-time performance of each approach is presented in Table 3.
6 Discussion and Conclusion
In this paper, two approaches of neural network robot control were selected, exposed, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic systems control. One of these two approaches uses a static neural network; the other uses a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to approximate, and thereby compensate, the nonlinearities and uncertainties in the robot dynamics, overcoming some limitations of the conventional PD controller and improving its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state, and then
Figure 13: Response of the torques of the system equipped with the NN controller via a high-order dynamic neural network: (a) torque 1 at joint 1; (b) torque 2 at joint 2.
Figure 14: Weight updates of the NN controller via a high-order dynamic neural network: (a) update of weights $W_{fij}$; (b) update of weights $W_{gij}$.
Table 2: Advantages and limitations of the neural network control approaches.

NN controller for improvement of a classic PD controller.
Advantages: (i) initialization of the NN weights is arbitrary; (ii) online learning of the NN; (iii) no offline training requirements; (iv) approximation by the neural network of the function which gathers the nonlinearities and uncertainties included in the robot dynamics; (v) overcoming of some limitations of the conventional PD controller; (vi) guaranteed stability in the presence of nonlinearities and uncertainties.
Limitation: unreliable response of the system, seen in Figure 7, due to the peak in the torque response in the face of disturbances and robotic uncertainties.

NN controller via a high-order dynamic neural network.
Advantages: (i) an exact online identification of the robot state thanks to the dynamic neural network; (ii) guaranteed stability in the presence of nonlinearities and uncertainties; (iii) reliable response of the system in the face of disturbances and robotic uncertainties (Figures 12 and 13).
Limitations: (i) initialization of the NN weights is not arbitrary; (ii) offline training is required to find the suitable initial NN weight values.
Table 3: Run-time performance of the neural network control approaches (on a time domain $t = [0$–$10]$ s).
- NN controller for improvement of a classic PD controller: elapsed time 45.262542 s.
- NN controller via a high-order dynamic neural network: elapsed time 44.088512 s.
it synthesizes the control law from the information recovered by this identification.

Simulation results under MATLAB for a two-link robot in a 2D environment showed the performance differences between the two neural network control approaches studied. Compared to the control with the static neural network, the neural control via dynamic neural network has significantly better tracking performance, has faster response time, and is more reliable in the face of disturbances and robotic uncertainties. However, the indirect approach requires offline training to find suitable initial neural network weight values, contrarily to the direct one, in which the initialization of the neural network weights is arbitrary.

Although it is clear that further experimentation needs to take place, the simulation results presented here indicate that dynamic neural networks have demonstrated very good potential for applications in closed-loop control of robot manipulators.
References
[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64–75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884–888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355–1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932–935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632–641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5–22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383–1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021–3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659–670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181–199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430–434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits and Systems (ICECS '10), pp. 1104–1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299–314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress of the International Federation of Automatic Control, Barcelona, Spain, July 2002.
$\theta, \dot{\theta}, \ddot{\theta} \in \mathbb{R}^n$ denote the joint angle, the joint velocity, and the joint acceleration; $J(\theta) \in \mathbb{R}^{n \times n}$ denotes the inertia matrix; $h(\theta, \dot{\theta}) \in \mathbb{R}^{n \times n}$ denotes the centrifugal and Coriolis force matrix; $G(\theta) \in \mathbb{R}^n$ denotes the gravitational force vector; $F(\dot{\theta}) \in \mathbb{R}^n$ is the friction term, with $F(\dot{\theta}) = F_v \dot{\theta} + F_c(\dot{\theta})$, where $F_c(\dot{\theta}) \in \mathbb{R}^n$ is the Coulomb parameter and $F_v \in \mathbb{R}^{n \times n}$ is the viscous parameter; $\tau_d(t) \in \mathbb{R}^n$ represents disturbances; and $\tau(t) \in \mathbb{R}^n$ is the torque vector.
3 Direct Neural Controller via Static Neural Network

In this section, the approach of direct neural control for the improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1], is briefly presented. This approach approximates the nonlinearities and uncertainties in the robot dynamics by a static neural network.
To make the dynamics of the robot manipulator defined in (1) follow a prescribed desired trajectory $\theta_d(t) \in \mathbb{R}^n$, the tracking error $e(t)$ and the filtered tracking error $r(t)$ are defined as follows:

$e = \theta_d - \theta$,  (2)

$r = \dot{e} + \Lambda e$,  (3)

where $\Lambda > 0$ is a symmetric positive definite design parameter matrix. The dynamics of the robot (1) in terms of the filtered error (3) are as follows:

$J(\theta)\dot{r}(t) = -h(\theta, \dot{\theta})\, r(t) - \tau(t) + f(x) + \tau_d(t)$,  (4)

where the unknown nonlinear robot function is defined as

$f(x) = J(\theta)(\ddot{\theta}_d + \Lambda \dot{e}) + h(\theta, \dot{\theta})(\dot{\theta}_d + \Lambda e) + G(\theta) + F(\dot{\theta})$,  (5)

with

$x = [e^T \;\; \dot{e}^T \;\; \theta_d^T \;\; \dot{\theta}_d^T \;\; \ddot{\theta}_d^T]^T$.  (6)
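As an aside, the error signals (2)-(3) are a one-line computation. The following is a minimal NumPy sketch (ours, not the authors' MATLAB code; the function name and numeric values are illustrative):

```python
import numpy as np

# Tracking error (2) and filtered tracking error (3) for an n-joint arm.
def filtered_error(theta_d, theta, dtheta_d, dtheta, Lam):
    e = theta_d - theta          # e = theta_d - theta, eq. (2)
    e_dot = dtheta_d - dtheta    # time derivative of the tracking error
    return e_dot + Lam @ e       # r = e_dot + Lambda e, eq. (3)

Lam = 5.0 * np.eye(2)                 # symmetric positive definite design matrix
theta = np.array([20.0, 20.0])        # current joint angles (deg)
theta_d = np.array([45.0, 45.0])      # desired joint angles (deg)
r = filtered_error(theta_d, theta, np.zeros(2), np.zeros(2), Lam)
print(r)  # -> [125. 125.]
```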
3.1 Approximation of Nonlinearities and Uncertainties by a Static Neural Network. The universal function approximation property [16]: let $f(x)$ be a general smooth function from $\mathbb{R}^n$ to $\mathbb{R}^m$. Then it can be shown that, as long as $x$ is restricted to a compact set $S$ of $\mathbb{R}^n$, there exist weights and thresholds such that

$f(x) = W^T \sigma(M^T x) + \varepsilon$.  (7)

It is difficult to determine the ideal neural network weights in the matrices $W$ and $M$ that are required to best approximate a given nonlinear function $f(x)$. However, all one needs to know for control purposes is that, for a specified number of hidden-layer neurons, some ideal approximating weights exist. Then an estimate of $f(x)$ can be given by

$\hat{f}(x) = \hat{W}^T \sigma(\hat{M}^T x)$.  (8)
Figure 1: The static feedforward neural network architecture: inputs $x_1, \ldots, x_m$; a hidden layer of $N$ neurons with activation $\sigma(\cdot)$ and thresholds $\theta_{Mj}$; first-layer weights $M_{jk}$; second-layer weights $W_{ij}$; outputs $f_1, \ldots, f_n$.
The neural network architecture proposed for the approximation of the nonlinearities and uncertainties in the robot dynamics is shown in Figure 1, where $\sigma(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function and $N$ is the number of hidden-layer neurons. The first-layer interconnection weights are denoted by $M_{jk}$ and the second-layer interconnection weights by $W_{ij}$. The threshold offsets are denoted by $\theta_{Mj}$.
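For illustration, the estimate (8) is a single matrix expression. The sketch below (NumPy, our own sizes; the trick of folding the thresholds into $\hat{M}$ by augmenting $x$ with a constant 1 is an assumption of ours, not stated in the paper) shows the forward pass:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer static NN estimate of eq. (8): f_hat(x) = W_hat^T sigma(M_hat^T x).
def nn_estimate(W_hat, M_hat, x):
    x_aug = np.append(x, 1.0)   # augmenting x with 1 folds the thresholds into M_hat
    return W_hat.T @ sigmoid(M_hat.T @ x_aug)

m, N, n = 10, 6, 2              # input size, hidden neurons, outputs
rng = np.random.default_rng(0)
M_hat = rng.standard_normal((m + 1, N))
W_hat = rng.standard_normal((N, n))
f_hat = nn_estimate(W_hat, M_hat, rng.standard_normal(m))
print(f_hat.shape)  # (2,)
```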
3.2 Synthesis of the Control Law. A general sort of approximation-based controller is derived by setting

$\tau(t) = \hat{f}(x) + k_v r(t) - v(t)$,  (9)

with $\hat{f}(x)$ the approximation of $f(x)$ by the neural network, $k_v r(t)$ an outer PD tracking loop, and $v(t)$ an auxiliary signal providing robustness. The proposed neural network control structure is shown in Figure 2.
Substituting the control law (9) into (4), the closed-loop error dynamics become

$J(\theta)\dot{r}(t) = -(k_v + h(\theta, \dot{\theta}))\, r(t) + \tilde{f}(x) + \tau_d(t) + v(t)$.  (10)
Let us define the functional approximation error

$\tilde{f} = f - \hat{f}$,  (11)

and the weight approximation errors

$\tilde{W} = W - \hat{W}$, $\quad \tilde{M} = M - \hat{M}$.  (12)

The Lyapunov function proposed for the stabilization of the error dynamics is

$V(r, \tilde{W}, \tilde{M}) = \frac{1}{2} r^T J(\theta)\, r + \frac{1}{2}\,\mathrm{tr}\{\tilde{W}^T F^{-1} \tilde{W}\} + \frac{1}{2}\,\mathrm{tr}\{\tilde{M}^T G^{-1} \tilde{M}\}$,  (13)

with constant matrices $F = F^T > 0$ and $G = G^T > 0$.
The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(r, \tilde{W}, \tilde{M}) < 0$, are satisfied.
Figure 2: Direct neural controller via static neural network.
(i) The neural network weight tuning algorithms are

$\dot{\hat{W}} = F \hat{\sigma} r^T - F \hat{\sigma}' \hat{M}^T x\, r^T - \kappa F \|r\| \hat{W}$,
$\dot{\hat{M}} = G x\, (\hat{\sigma}'^T \hat{W} r)^T - \kappa G \|r\| \hat{M}$,  (14)

with $\kappa > 0$ a small scalar design parameter and $\hat{\sigma}'$ the Jacobian of $\sigma$.
(ii) The robustifying term is

$v(t) = -k_Z (\|\hat{Z}\|_F + Z_B)\, r$,  (15)

with $Z = \begin{bmatrix} W & 0 \\ 0 & M \end{bmatrix}$ and $\|Z\|_F \le Z_B$, where $Z_B$ is the bound of the ideal weights and $k_Z > 0$ is a positive scalar parameter.
(iii) The PD controller gain satisfies

$k_{v\min} > \frac{C_0 + \kappa C_3^2/4}{b_r}$,  (16)

with $C_0$ and $C_3$ bounding constants and $b_r$ the radius of a compact set containing the filtered error $r$ (the exact constants follow Lewis [1]). In practice, the tracking error can be kept as small as desired by increasing the gain $k_v$.
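Conditions (i)-(iii) can be pulled together into one Euler-discretized control-and-adaptation step. The following NumPy sketch is ours (the paper's simulations were done in MATLAB); the sizes and gains mirror the values used later in Section 5:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One Euler-discretized step of the direct scheme: control law (9),
# robustifying term (15), and weight tuning laws (14).
def direct_nn_step(W, M, x, r, Kv, kZ, ZB, F, G, kappa, dt):
    s = sigmoid(M.T @ x)                               # hidden-layer output sigma_hat
    sp = np.diag(s) @ (np.eye(len(s)) - np.diag(s))    # Jacobian sigma_hat'
    f_hat = W.T @ s                                    # NN estimate, eq. (8)
    ZF = np.sqrt(np.sum(W**2) + np.sum(M**2))          # Frobenius norm of Z_hat
    v = -kZ * (ZF + ZB) * r                            # robustifying term, eq. (15)
    tau = f_hat + Kv @ r - v                           # control law, eq. (9)
    rn = np.linalg.norm(r)
    W_new = W + dt * (F @ np.outer(s, r)
                      - F @ np.outer(sp @ (M.T @ x), r)
                      - kappa * rn * (F @ W))          # eq. (14), first law
    M_new = M + dt * (G @ np.outer(x, sp.T @ (W @ r))
                      - kappa * rn * (G @ M))          # eq. (14), second law
    return tau, W_new, M_new

m_in, N, n = 10, 6, 2
rng = np.random.default_rng(1)
W, M = np.zeros((N, n)), rng.standard_normal((m_in, N))
x, r = rng.standard_normal(m_in), np.array([0.5, -0.2])
tau, W, M = direct_nn_step(W, M, x, r, Kv=10.0 * np.eye(2), kZ=0.1, ZB=5.0,
                           F=10.0 * np.eye(N), G=10.0 * np.eye(m_in),
                           kappa=0.1, dt=1e-3)
print(tau.shape)  # (2,)
```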
4 Indirect Neural Control via High-Order Dynamic Neural Network
In this section, the second approach, an indirect neural control via a high-order dynamic neural network proposed by Sanchez et al. [7], is briefly presented. This approach uses a dynamic neural network for the online identification of the robot state.

The proposed neural network control structure is shown in Figure 3.
The equation of the robot dynamics defined in (1), under state representation, is

$\dot{X} = f(X) + g(X)\,\tau(t)$,  (17)
Figure 3: Indirect neural control via a high-order dynamic neural network: the reference trajectory $X_r$ and the robot state $X$ form the tracking error $e_p$; the dynamic neural network (state $X_N$, with weight adaptation) and the control law produce the torque $\tau$ applied to the robot system.
with

$f(X) = \begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \dot{\theta} \end{bmatrix} + \begin{bmatrix} 0 \\ -J(\theta)^{-1} (h(\theta, \dot{\theta})\dot{\theta} + G(\theta) + F(\dot{\theta}) + \tau_d(t)) \end{bmatrix}$,

$g(X) = \begin{bmatrix} 0 \\ J(\theta)^{-1} \end{bmatrix}$, $\quad X = [\theta \;\; \dot{\theta}]^T$.  (18)
4.1 Identification of the Robot State by the High-Order Dynamic Neural Network. The proposed neural network structure for the identification of the robot state is shown in Figure 4. $A_N = -\lambda_i I_{2n \times 2n}$ is the state matrix of the neural network, with $\lambda_i > 0$ for $i = 1, \ldots, 2n$; $X_N = [x_{N1} \cdots x_{N2n}]^T \in \mathbb{R}^{2n}$ is the state vector of the neural network; and $\tau(t) \in \mathbb{R}^n$ is the torque vector.

The dynamics of this neural network result from the state feedback $X_N$ around a neural structure formed by two static neural networks, RN1 and RN2, shown in Figure 5 [7, 17].
$\mathrm{RN1}(X_N) = W_f Z(X_N)$,  (19)

where $W_f = [w_{f1} \cdots w_{f2n}]^T \in \mathbb{R}^{2n \times L}$ is the weights matrix of RN1.
Figure 4: The high-order dynamic neural network architecture: an integrator with state feedback $A_N X_N$, driven by the subnet outputs $\mathrm{RN1}(X_N)$ and $\mathrm{RN2}(\tau)$, produces the state $X_N$.
$Z(X_N) = [z_1 \cdots z_L]^T \in \mathbb{R}^L$ is a nonlinear operator which defines the high-order connections; the $z_i$ are called high-order connections, and $L$ is their number:

$z_i = \prod_{j \in I_i} y_j^{d_j(i)}$, $\quad j = 1, \ldots, 2n$, $\quad i = 1, \ldots, L$,

where the $d_j(i)$ are positive integers, $y_j = \varphi(x_{Nj})$, and $\varphi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN1, here selected as the hyperbolic tangent. The outputs of RN1 are denoted by $C = [c_1 \cdots c_{2n}]^T$.
$\mathrm{RN2}(\tau) = W_g \phi(\tau(t)) = W_g \tau(t)$,  (20)

where $\phi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN2, here selected as the linear one, and $W_g = [w_{g1} \cdots w_{g2n}]^T \in \mathbb{R}^{2n \times n}$ is the weights matrix of RN2. The outputs of RN2 are denoted by $D = [d_1 \cdots d_{2n}]^T$.
Let $\hat{W}_f$ and $\hat{W}_g$ denote the estimated values of the unknown weight matrices $W_f$ and $W_g$, respectively. The weight estimation errors are

$\tilde{W}_f = W_f - \hat{W}_f$, $\quad \tilde{W}_g = W_g - \hat{W}_g$.  (21)

The equation of this neural network dynamics is defined by

$\dot{X}_N = A_N X_N + W_f Z(X_N) + W_g \tau(t)$.  (22)

The identification of the robot dynamics by the neural network is ensured by the following pole placement:

$\dot{X}_N - \dot{X} = -(X_N - X)$,  (23)

which is equivalent to

$\dot{X} = A_N X_N + W_f Z(X_N) + W_g \tau(t) + (X_N - X)$.  (24)
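A minimal sketch of the identifier dynamics (22), Euler-discretized (NumPy; the toy sizes, the stand-in map $Z$, and the step size are our assumptions):

```python
import numpy as np

# Euler step of the identifier dynamics (22): X_N' = A_N X_N + W_f Z(X_N) + W_g tau.
def identifier_step(xN, tau, Wf, Wg, AN, Z, dt):
    xN_dot = AN @ xN + Wf @ Z(xN) + Wg @ tau
    return xN + dt * xN_dot

# Toy sizes: 2n = 4 states, n = 2 torques, L = 3 high-order connections.
AN = -300.0 * np.eye(4)        # state matrix A_N = -lambda_i I
Wf = np.zeros((4, 3))          # RN1 weights (zero here, so only A_N acts)
Wg = np.zeros((4, 2))          # RN2 weights
Z = lambda x: np.tanh(x)[:3]   # stand-in high-order map
xN = identifier_step(np.ones(4), np.zeros(2), Wf, Wg, AN, Z, dt=1e-3)
print(xN)  # -> [0.7 0.7 0.7 0.7]: the stable state matrix pulls X_N toward 0
```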
4.2 Synthesis of the Control Law. It is desired to design a robust controller which enforces asymptotic stability of the tracking error between the system and the reference signal. The equation of the reference signal dynamics is

$\dot{X}_r = f_r(X_r, \tau_r(t))$,  (25)

where $X_r$ is the reference signal state vector, $\tau_r(t)$ is the desired torque vector, and $f_r(\cdot)$ is the vector field of the reference dynamics.
Let us denote the tracking error between the system and the reference signal by

$e_p = X - X_r$.  (26)

To ensure the desired dynamics, the asymptotic stability of the tracking error must be ensured. The time derivative of (26) is

$\dot{e}_p = A_N X_N + W_f Z(X_N) + W_g \tau(t) + (X_N - X) - \dot{X}_r$.  (27)
Adding and subtracting in (27) the terms $\hat{W}_f Z(X_r)$, $A_N e_p$, $A_N X_r$, and $X_r$ gives

$\dot{e}_p = A_N e_p + W_f Z(X_N) + W_g \tau(t) + (-\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X) - A_N e_p - \hat{W}_f Z(X_r) - A_N X_r - X_r + X_N + A_N X_N$.  (28)
It is assumed that there exists a function $\alpha_r(t, \hat{W}_f, \hat{W}_g)$ such that

$\alpha_r(t, \hat{W}_f, \hat{W}_g) = -(\hat{W}_g)^+ (-\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X)$,  (29)

where $(\hat{W}_g)^+$ is the pseudoinverse of $\hat{W}_g$, calculated as $(\hat{W}_g)^+ = (\hat{W}_g^T \hat{W}_g)^{-1} \hat{W}_g^T$.
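This left pseudoinverse is easy to check numerically; a short sketch (NumPy, example matrix ours):

```python
import numpy as np

# Left pseudoinverse used in (29): (W_g)^+ = (W_g^T W_g)^{-1} W_g^T,
# defined whenever W_g has full column rank.
def left_pinv(Wg):
    return np.linalg.inv(Wg.T @ Wg) @ Wg.T

Wg = np.array([[0.0, 0.0],
               [0.0, 0.0],
               [2.0, 0.0],
               [0.0, 4.0]])   # a 2n x n example (n = 2)
P = left_pinv(Wg)
print(P @ Wg)  # the 2 x 2 identity: P is a left inverse of W_g
```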
Then, adding and subtracting the term $\hat{W}_g \alpha_r(t, \hat{W}_f, \hat{W}_g)$ in (28), we obtain

$\dot{e}_p = A_N e_p + W_f Z(X_N) + W_g \tau(t) - \hat{W}_g \alpha_r(t, \hat{W}_f, \hat{W}_g) - A_N(X - X_r) - \hat{W}_f Z(X_r) + (A_N + I)(X_N - X_r)$.  (30)
Let us define

$\tilde{\tau} = \tau - \alpha_r(t, \hat{W}_f, \hat{W}_g) = \tau_1 + \tau_2$,  (31)

so that (30) is reduced to

$\dot{e}_p = A_N e_p + \tilde{W}_f Z(X_N) + \hat{W}_f (Z(X_N) - Z(X_r)) + \tilde{W}_g \tau(t) + \hat{W}_g \tilde{\tau}(t) - A_N(X - X_r) + (A_N + I)(X_N - X_r)$.  (32)
Then, adding and subtracting the terms $Z(X)$ and $X$ in (32), we obtain

$\dot{e}_p = A_N e_p + \tilde{W}_f Z(X_N) + \hat{W}_f (Z(X_N) - Z(X) + Z(X) - Z(X_r)) + \tilde{W}_g \tau(t) + \hat{W}_g \tilde{\tau}(t) - A_N(X - X_r) + (A_N + I)(X_N - X + X - X_r)$.  (33)
Figure 5: The two subnets RN1 and RN2 included in the dynamic neural network. RN1 maps the states $x_{N1}, \ldots, x_{N2n}$ through the activation $\varphi(\cdot)$ and the high-order connections $Z(\cdot) = [z_1 \cdots z_L]$, weighted by $W_{fij}$, to the outputs $c_1, \ldots, c_{2n}$; RN2 maps the torques $\tau_1, \ldots, \tau_n$ through the activation $\phi(\cdot)$, weighted by $W_{gij}$, to the outputs $d_1, \ldots, d_{2n}$.
Then, by defining

$\tau_1 = (\hat{W}_g)^+ \left( -\hat{W}_f (Z(X_N) - Z(X)) - (A_N + I)(X_N - X) \right)$,  (34)

equation (33) is reduced to

$\dot{e}_p = (A_N + I)\, e_p + \tilde{W}_f Z(X_N) + \hat{W}_f (Z(X) - Z(X_r)) + \tilde{W}_g \tau(t) + \hat{W}_g \tau_2(t)$.  (35)
Then the tracking problem is reduced to a stabilization problem of the error dynamics defined in (35).
The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is

$V(e_p, \tilde{W}_f, \tilde{W}_g) = \frac{1}{2}\|e_p\|^2 + \frac{1}{2}\,\mathrm{tr}\{\tilde{W}_f^T \Gamma^{-1} \tilde{W}_f\} + \frac{1}{2}\,\mathrm{tr}\{\tilde{W}_g^T \Gamma_g^{-1} \tilde{W}_g\}$,  (36)

where $\Gamma = \mathrm{diag}\{\gamma_1, \ldots, \gamma_L\}$ and $\Gamma_g = \mathrm{diag}\{\gamma_{g1}, \ldots, \gamma_{gn}\}$ are symmetric positive definite diagonal matrices.
Let us define the function $\phi_Z = Z(X) - Z(X_r)$ and let $L_{\phi_Z}$ be its Lipschitz constant.

The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(e_p, \tilde{W}_f, \tilde{W}_g) < 0$, are satisfied.
(i) The control law $\tau_2$ is

$\tau_2 = -\mu\, (\hat{W}_g)^+ (1 + L_{\phi_Z}^2 \|\hat{W}_f\|^2)\, e_p$,  (37)

optimal with respect to the following cost [18]:

$J(\tau_2) = \lim_{t \to \infty} \left[ 2\beta V + \int_0^t \left( l(e_p, \tilde{W}_f, \tilde{W}_g) + \tau_2^T R(e_p, \tilde{W}_f, \tilde{W}_g)\, \tau_2 \right) dt \right]$.  (38)
(ii) The neural network weight tuning algorithms are

$\dot{w}_{fij} = -e_{pi}\, \gamma_j\, Z(X_j)$ (RN1 weights tuning algorithm),
$\dot{w}_{gij} = -e_{pi}\, \gamma_{gj}\, \tau_j(t)$ (RN2 weights tuning algorithm).  (39)

(iii) The parameter of the neural network state matrix satisfies $\lambda_i > 1$.

(iv) The parameter which manages the control law $\tau_2$ satisfies $\mu > 1$.
5 Simulation Results and Comparative Study on a Two-Link Robot
In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory tracking problem for a robot manipulator model is considered. The dynamics of a 2-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written as

$J(\theta)\ddot{\theta} + H(\theta, \dot{\theta}) + G(\theta) + F(\dot{\theta}) + \tau_d(t) = D\tau(t)$,  (40)

with

$J(\theta) = \begin{bmatrix} I_1 + m_1 k_1^2 + m_2 l_1^2 & m_2 l_1 k_2 \cos(\theta_1 - \theta_2) \\ m_2 l_1 k_2 \cos(\theta_1 - \theta_2) & I_2 + m_2 k_2^2 \end{bmatrix}$,

$H(\theta, \dot{\theta}) = h(\theta, \dot{\theta})\dot{\theta} = \begin{bmatrix} 0 & m_2 l_1 k_2 \sin(\theta_1 - \theta_2) \\ -m_2 l_1 k_2 \sin(\theta_1 - \theta_2) & 0 \end{bmatrix} \begin{bmatrix} \dot{\theta}_1^2 \\ \dot{\theta}_2^2 \end{bmatrix}$,

$G(\theta) = g \begin{bmatrix} (m_2 l_1 + m_1 k_1)\cos\theta_1 \\ m_2 k_2 \cos\theta_2 \end{bmatrix}$,  $\quad D = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$,

$F(\dot{\theta}) = \mathrm{diag}(2, 2)\,\dot{\theta} + 1.5\,\mathrm{sign}(\dot{\theta})$.  (41)
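Under the equations above and the Table 1 values, the model matrices can be sketched as follows (a NumPy transcription of ours, not the authors' MATLAB code):

```python
import numpy as np

# The matrices of eqs. (40)-(41), with the Table 1 parameter values.
l1, l2 = 0.25, 0.16          # link lengths (m)
m1, m2 = 9.5, 5.0            # link masses (kg)
k1, k2 = 0.125, 0.08         # centers of gravity (m)
I1, I2 = 0.0043, 0.0061      # link inertias (kg m^2)
g = 9.8                      # gravitational acceleration

def J(th):
    c = m2 * l1 * k2 * np.cos(th[0] - th[1])
    return np.array([[I1 + m1 * k1**2 + m2 * l1**2, c],
                     [c, I2 + m2 * k2**2]])

def H(th, dth):
    s = m2 * l1 * k2 * np.sin(th[0] - th[1])
    return np.array([[0.0, s], [-s, 0.0]]) @ dth**2

def G(th):
    return g * np.array([(m2 * l1 + m1 * k1) * np.cos(th[0]),
                         m2 * k2 * np.cos(th[1])])

def F(dth):
    return np.diag([2.0, 2.0]) @ dth + 1.5 * np.sign(dth)

th = np.deg2rad([20.0, 20.0])
print(J(th))  # symmetric, positive definite inertia matrix
```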
The robot model parameters are shown in Table 1. The simulations of the variation of the positions and of the torques exerted at each of the two joints, as well as of the neural network weights, were carried out over a period of 10 seconds.

The initial conditions are selected as follows:

$\theta(0) = [20 \;\; 20]^T$, $\quad \dot{\theta}(0) = [0 \;\; 0]^T$.  (42)

The reference signal is

$\theta_d(t) = [45 \;\; 45]^T$, $\quad \dot{\theta}_d(t) = [0 \;\; 0]^T$.  (43)

The external disturbances are

$\tau_d = [0.1 \;\; -0.1]^T$ if $t \le 1$ s; $\quad \tau_d = [-0.2 \;\; 0.2]^T$ if $t > 1$ s.  (44)

A variation of the viscous friction coefficients by an error of 10% is applied at time $t = 1$ s, and a variation of the masses of the two arm bodies by an error of 10% at time $t = 2$ s.
The neural network controller parameters are selected for each of the two approaches as follows.

(i) For the first approach, the neural control for improvement of a classic proportional-derivative (PD) controller: after several simulation tests, we found suitable values for the various parameters as follows:

$K_v = 10 \times I_{2 \times 2}$, $k_Z = 0.1$, $Z_B = 5$, $F = 10 \times I_{6 \times 6}$, $G = 10 \times I_{10 \times 10}$, $\kappa = 0.1$.  (45)

The number of neurons in the hidden layer of the neural network is $N = 6$. The activation function is the sigmoid, $\sigma(x) = 1/(1 + e^{-x})$, and its Jacobian is $\sigma'(x) = \mathrm{diag}\{\sigma(x)\}\,[I - \mathrm{diag}\{\sigma(x)\}]$. No initial neural network training phase was needed; the neural network weights were arbitrarily initialized at zero in this simulation.
(ii) For the second approach, the neural control via a high-order dynamic neural network: the initial state vector of the neural network is $X_N = [0 \cdots 0]^T$, and the number of high-order connections is $L = 13$.

The activation function of the subnet RN1 is the hyperbolic tangent, $\varphi(X_N) = (e^{2X_N} - 1)/(e^{2X_N} + 1)$. Let $X_N = [x_{N1} \cdots x_{N4}]^T$ and

$N_1 = \tanh(x_{N1})$, $N_2 = \tanh(x_{N2})$, $N_3 = \tanh(x_{N3})$, $N_4 = \tanh(x_{N4})$,  (46)

so that the high-order connections of the neural network are defined as

$Z(X_N) = [N_1,\; N_2,\; N_3,\; N_4,\; N_1 N_2,\; N_1 N_3,\; N_1 N_4,\; N_2 N_3,\; N_2 N_4,\; N_3 N_4,\; N_1 N_2 N_3,\; N_1 N_2 N_4,\; N_2 N_3 N_4]^T$.  (47)
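The thirteen connections of (47) can be written directly (a NumPy sketch of ours):

```python
import numpy as np

# The L = 13 high-order connections of eq. (47), for the 4-state network.
def Z(xN):
    N1, N2, N3, N4 = np.tanh(xN)
    return np.array([N1, N2, N3, N4,
                     N1 * N2, N1 * N3, N1 * N4,
                     N2 * N3, N2 * N4, N3 * N4,
                     N1 * N2 * N3, N1 * N2 * N4, N2 * N3 * N4])

z = Z(np.zeros(4))
print(z.shape)  # (13,)
```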
Table 1: Parameters of the 2-link rigid robot model.

Length of link 1: $l_1 = 0.25$ m
Length of link 2: $l_2 = 0.16$ m
Mass of link 1: $m_1 = 9.5$ kg
Mass of link 2: $m_2 = 5$ kg
Position of center of gravity of link 1: $k_1 = 0.125$ m
Position of center of gravity of link 2: $k_2 = 0.08$ m
Inertia of link 1: $I_1 = 0.0043$ kg·m²
Inertia of link 2: $I_2 = 0.0061$ kg·m²
Gravitational acceleration: $g = 9.8$ N·kg⁻¹
The activation function of the subnet RN2 is the linear function, $\phi(\tau(t)) = \tau(t)$.

For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary. After several simulation tests, we found suitable values for the various parameters as follows:

$\Gamma = 0.01 \times I_{13 \times 13}$, $\Gamma_g = 0.0001 \times I_{2 \times 2}$, $\lambda_i = 300$, $A_N = -300\, I_{4 \times 4}$, $L_{\phi_Z} = 0.02$, $\mu = 1000$.  (48)

The suitable values for the initialization of the weights are shown in Figure 11.
5.1 Simulation Results of the First Approach of Control: The Neural Control for Improvement of a Classic Proportional-Derivative Controller. Each line that appears in the two diagrams of Figure 8 represents the variation of one weight value, $W_{ij}$ in the update of the weight matrix $W$ or $M_{jk}$ in the update of the weight matrix $M$.
Analysis of results. The analysis of the simulation results of the response of the system equipped with the NN controller for improvement of a classic PD controller, seen in Figure 6, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties.

However, due to disturbances and robotic uncertainties, the peak in the torque response (Figure 7) makes this control strategy not reliable.
5.2 Simulation Results of the Second Approach of Control: The Neural Control via a High-Order Dynamic Neural Network. For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary.

(i) The learning step. Each line that appears in the two diagrams of Figure 11 represents the variation of one weight value, $W_{fij}$
Figure 6: Response of joint angles, system equipped with NN controller for improvement of a classic PD controller. (a) Tracking performance of joint 1; (b) joint 2; actual versus desired joint angle (deg), time axis 0–10 s.
Figure 7: Response of torques in joint angles, system equipped with NN controller for improvement of a classic PD controller. (a) Torque 1 at joint 1 (N·m); (b) torque 2 at joint 2 (N·m); time axis 0–10 s.
in the update of the weight matrix $W_f$ or $W_{gij}$ in the update of the weight matrix $W_g$.
Analysis of results. The rigorous peak in the torque and joint angle responses due to disturbances and robotic uncertainties, seen in Figures 9 and 10, was corrected thanks to the neural network adaptation.

At the end of this learning step, the best weight values, seen in Figure 11, are obtained.
(ii) Final simulation results. The best weight values obtained at the end of the learning step are used as initial values for this step.

Each line that appears in the two diagrams of Figure 14 represents the variation of one weight value, $W_{fij}$ in the update of the weight matrix $W_f$ or $W_{gij}$ in the update of the weight matrix $W_g$.
Analysis of results. The analysis of the simulation results of the response of the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties.

The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values.

After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torque response seen in Figure 13 (unlike the peak in the torque response seen in Figure 7).
Figure 8: Weights update of the NN controller for improvement of a classic PD controller. (a) Weights $W_{ij}$; (b) weights $M_{jk}$; time axis 0–10 s.
Figure 9: Response of joint angles, system equipped with NN controller via a high-order dynamic neural network (learning step). (a) Tracking performance of joint 1; (b) joint 2; actual versus desired joint angle (deg), time axis 0–10 s.
Figure 10: Response of torques in joint angles, system equipped with NN controller via a high-order dynamic neural network (learning step). (a) Torque 1 at joint 1 (N·m); (b) torque 2 at joint 2 (N·m); time axis 0–10 s.
Figure 11: Weights update of the NN controller via a high-order dynamic neural network (learning step): the best weight values. (a) Weights $W_{fij}$; (b) weights $W_{gij}$; time axis 0–10 s.
Figure 12: Response of joint angles, system equipped with NN controller via a high-order dynamic neural network. (a) Tracking performance of joint 1; (b) joint 2; actual versus desired joint angle (deg), time axis 0–10 s.
In Figure 14 it is easy to see that the weights really achieved their best values in the learning step: they remain nearly constant.
5.3 Comparative Study. The advantages and limitations of each neural network control approach are presented in Table 2.

The run-time performance of each approach is presented in Table 3.
(ii) Online learning of the NN(iii) No offline training requirements(iv) Approximation by the neural network of thefunction which gathers the nonlinearities anduncertainties included in the robot dynamics(v) Overcoming some limitation of the conventionalcontroller PD(vi) Guaranteed stability in presence of nonlinearitiesand uncertainties
NN controller via ahigh order dynamicneural network
(i) An exact online identification of the robot statethanks to the dynamic neural network
(i) Initialization of the NN weights is not arbitrary(ii) Offline training requirements to find the suitableinitial NN weights values(ii) Guaranteed stability in presence of nonlinearities
and uncertainties(iii) Reliable response of the system faces disturbancesand robotic uncertainties (Figures 12 and 13)
ISRN Robotics 11
Table 3 Run-time performance of the neural network controlapproaches (on a time-domain 119905 = [0ndash10] sec)
Control strategy Elapsed time(in seconds)
NN controller for improvement of a classiccontroller PD 45262542
NN controller via a high-order dynamic neuralnetwork 44088512
it synthesizes the control law from this information recoveredby this identification
Simulation results under the framework MATLAB ofa two-link robot in 2D environment showed the perfor-mance differences between the two neural network controlapproaches studied Compared to the control with the staticneural network the neural control via dynamic neuralnetwork has significantly better tracking performance hasfaster response time and is more reliable to face disturbancesand robotic uncertainties However the indirect approachrequires offline training to find the suitable initial neuralnetwork weights values contrarily to the direct one in whichthe initialization of the neural network weights is arbitrary
Although it is clear that further experimentation needsto take place simulation results presented here indicate thatdynamic neural networks have demonstrated a very goodpotential for applications in closed loop control of robotmanipulators
References

[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64–75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884–888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355–1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932–935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632–641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5–22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383–1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021–3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659–670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181–199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430–434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits, and Systems (ICECS '10), pp. 1104–1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299–314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress, International Federation of Automatic Control, Barcelona, Spain, July 2002.
[Figure 2: Direct neural controller via static neural network.]
(i) The neural network weight tuning algorithms are
$$\dot{\hat W} = F\,\hat\sigma\,r^{T} - F\,\hat\sigma'\,\hat M^{T}x\,r^{T} - k\,F\,\|r\|\,\hat W,\qquad \dot{\hat M} = G\,x\,\bigl(\hat\sigma'^{T}\hat W r\bigr)^{T} - k\,G\,\|r\|\,\hat M, \tag{14}$$
where $k > 0$ is a small scalar design parameter and $\hat\sigma'$ is the Jacobian of $\sigma$.
(ii) The robustifying term is
$$v(t) = -k_{Z}\,\bigl(\|\hat Z\|_{F} + Z_{B}\bigr)\,r, \tag{15}$$
with $\hat Z = \begin{bmatrix}\hat W & 0\\ 0 & \hat M\end{bmatrix}$ and $\|Z\|_{F} \le Z_{B}$, where $Z_{B}$ is the bound of the ideal weights and $k_{Z} > 0$ is a positive scalar parameter.
(iii) The PD controller gain satisfies
$$k_{V\min} > \frac{C_{0} + k\,C_{3}^{2}/4}{\|r\|}. \tag{16}$$
In practice, the tracking error can be kept as small as desired by increasing the gain $k_{V}$.
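The tuning laws (14) and the robustifying term (15) can be sketched numerically as follows. This is a hedged illustration, not the authors' code: the Euler discretization, the variable names, and the layer shapes (input dimension d = 10, hidden size N = 6, output size n = 2, taken from the simulation settings of Section 5) are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tuning_step(W, M, x, r, F, G, k, dt):
    """One Euler step of the tuning laws (14).
    W: (N, n) hidden-to-output weights, M: (d, N) input-to-hidden weights,
    x: (d,) network input, r: (n,) filtered tracking error."""
    s = sigmoid(M.T @ x)                        # hidden outputs sigma(M^T x)
    sp = np.diag(s * (1.0 - s))                 # sigmoid Jacobian sigma'
    nr = np.linalg.norm(r)
    W_dot = F @ (np.outer(s, r) - np.outer(sp @ (M.T @ x), r)) - k * nr * (F @ W)
    M_dot = G @ np.outer(x, sp.T @ (W @ r)) - k * nr * (G @ M)
    return W + dt * W_dot, M + dt * M_dot

def robustifying_term(W, M, r, k_Z, Z_B):
    """Robustifying signal (15): v = -k_Z (||Z||_F + Z_B) r."""
    z_F = np.sqrt(np.linalg.norm(W, "fro") ** 2 + np.linalg.norm(M, "fro") ** 2)
    return -k_Z * (z_F + Z_B) * r

# Shape check with the Section 5 sizes and gains.
d, N, n = 10, 6, 2
W, M = np.zeros((N, n)), np.zeros((d, N))
F, G = 10.0 * np.eye(N), 10.0 * np.eye(d)
W1, M1 = tuning_step(W, M, np.ones(d), 0.1 * np.ones(n), F, G, 0.1, 1e-3)
v = robustifying_term(W1, M1, 0.1 * np.ones(n), 0.1, 5.0)
```

Note that, as in the paper, the weights may start from zero: the sigmoid keeps the hidden outputs nonzero, so learning still proceeds.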
4. Indirect Neural Control via High-Order Dynamic Neural Network

In this section, the second approach, an indirect neural control via a high-order dynamic neural network proposed by Sanchez et al. [7], is briefly presented. This approach uses a dynamic neural network for the online identification of the robot state.
The proposed neural network control structure is shown in Figure 3.
The equation of the robot dynamics defined in (1) under state representation is
$$\dot X = f(X) + g(X)\,\tau(t), \tag{17}$$
with
$$f(X) = \begin{bmatrix}0 & I\\ 0 & 0\end{bmatrix}\begin{bmatrix}\theta\\ \dot\theta\end{bmatrix} + \begin{bmatrix}0\\ -J(\theta)^{-1}\bigl(h(\theta,\dot\theta) + G(\theta) + F(\dot\theta) + \tau_{d}(t)\bigr)\end{bmatrix},\qquad g(X) = \begin{bmatrix}0\\ J(\theta)^{-1}\end{bmatrix},\qquad X = \begin{bmatrix}\theta & \dot\theta\end{bmatrix}^{T}. \tag{18}$$

[Figure 3: Indirect neural control via a high-order dynamic neural network. The reference trajectory $X_{r}$ and the robot state $X$ form the tracking error $e_{p}$; the control law produces $\tau$, which feeds both the robot system and the dynamic neural network, whose state $X_{N}$ drives the weight adaptation.]
4.1. Identification of the Robot State by the High-Order Dynamic Neural Network. The proposed neural network structure for the identification of the robot state is shown in Figure 4. $A_{N} = -\lambda_{i}\,I_{2n\times 2n}$ is the state matrix of the neural network, with $\lambda_{i} > 0$ for $i = 1,\ldots,2n$; $X_{N} = [x_{N1} \cdots x_{N2n}]^{T} \in \mathbb{R}^{2n}$ is the state vector of the neural network; and $\tau(t) \in \mathbb{R}^{n}$ is the torque vector.
The dynamics of this neural network result from the state feedback of $X_{N}$ around a neural structure formed by two static neural networks, RN1 and RN2, shown in Figure 5 [7, 17]:
$$\mathrm{RN1}(X_{N}) = W_{f}\,Z(X_{N}), \tag{19}$$
where $W_{f} = [w_{f1} \cdots w_{f2n}]^{T} \in \mathbb{R}^{2n\times L}$ is the weights matrix of RN1.
[Figure 4: The high-order dynamic neural network architecture: the outputs of RN1($X_{N}$) and RN2($\tau$) are summed with $A_{N}X_{N}$ and integrated to give the state $X_{N}$.]
$Z(X_{N}) = [z_{1} \cdots z_{L}]^{T} \in \mathbb{R}^{L}$ is the nonlinear operator which defines the high-order connections; the $z_{i}$ are called high-order connections and $L$ is their number, with
$$z_{i} = \prod_{j \in I_{i}} y_{j}^{d_{j}(i)},\qquad j = 1,\ldots,2n,\quad i = 1,\ldots,L,$$
where the $d_{j}(i)$ are positive integers, $y_{j} = \varphi(x_{Nj})$, and $\varphi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN1, here selected as the hyperbolic tangent.
The outputs of RN1 are denoted by $C = [c_{1} \cdots c_{2n}]^{T}$.
$$\mathrm{RN2}(\tau) = W_{g}\,\phi(\tau(t)) = W_{g}\,\tau(t), \tag{20}$$
where $\phi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN2, here selected as the linear one, and $W_{g} = [w_{g1} \cdots w_{g2n}]^{T} \in \mathbb{R}^{2n\times n}$ is the weights matrix of RN2. The outputs of RN2 are denoted by $D = [d_{1} \cdots d_{2n}]^{T}$.
Let $\hat W_{f}$ and $\hat W_{g}$ denote the estimated values of the unknown weight matrices $W_{f}$ and $W_{g}$, respectively. The weight estimation errors are
$$\tilde W_{f} = W_{f} - \hat W_{f},\qquad \tilde W_{g} = W_{g} - \hat W_{g}. \tag{21}$$
The equation of this neural network dynamics is defined by
$$\dot X_{N} = A_{N} X_{N} + W_{f}\,Z(X_{N}) + W_{g}\,\tau(t). \tag{22}$$
The identification of the robot dynamics by the neural network is ensured by the following pole placement:
$$\dot X_{N} - \dot X = -(X_{N} - X). \tag{23}$$
Equation (23) is equivalent to
$$\dot X = A_{N} X_{N} + W_{f}\,Z(X_{N}) + W_{g}\,\tau(t) + (X_{N} - X). \tag{24}$$
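A minimal numerical sketch of the identifier dynamics (22), assuming an Euler discretization and the diagonal state matrix $A_{N} = -\lambda I$ with the $\lambda = 300$ value used later in Section 5; the map `Z` stands for any high-order connection vector, and a plain `tanh` is used here only as a placeholder.

```python
import numpy as np

def identifier_step(X_N, Wf_hat, Wg_hat, tau, Z, lam=300.0, dt=1e-3):
    """One Euler step of the identifier (22):
    X_N_dot = A_N X_N + Wf Z(X_N) + Wg tau, with A_N = -lam * I."""
    X_N_dot = -lam * X_N + Wf_hat @ Z(X_N) + Wg_hat @ tau
    return X_N + dt * X_N_dot

# Toy usage with n = 2 joints (state dimension 2n = 4).
Z = np.tanh                       # placeholder first-order connections (L = 4)
X_N = np.zeros(4)
Wf_hat = 0.1 * np.eye(4)
Wg_hat = np.zeros((4, 2))
X_N1 = identifier_step(X_N, Wf_hat, Wg_hat, np.array([1.0, -1.0]), Z)
```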
4.2. Synthesis of the Control Law. It is desired to design a robust controller which enforces asymptotic stability of the tracking error between the system and the reference signal. The reference signal dynamics are
$$\dot X_{r} = f_{r}(X_{r}, \tau_{r}(t)), \tag{25}$$
where $X_{r}$ is the reference signal state vector, $\tau_{r}(t)$ is the desired torque vector, and $f_{r}(\cdot)$ is the vector field of the reference dynamics.
Let the tracking error between the system and the reference signal be
$$e_{p} = X - X_{r}. \tag{26}$$
To ensure the desired dynamics, the asymptotic stability of the tracking error must be guaranteed.
The time derivative of (26) is
$$\dot e_{p} = A_{N} X_{N} + W_{f}\,Z(X_{N}) + W_{g}\,\tau(t) + (X_{N} - X) - \dot X_{r}. \tag{27}$$
Adding and subtracting in (27) the terms $\hat W_{f} Z(X_{r})$, $A_{N} e_{p}$, $A_{N} X_{r}$, and $X_{r}$ gives
$$\begin{aligned}\dot e_{p} ={}& A_{N} e_{p} + W_{f}\,Z(X_{N}) + W_{g}\,\tau(t) + \bigl(-\dot X_{r} + A_{N} X_{r} + \hat W_{f} Z(X_{r}) + X_{r} - X\bigr)\\ &- A_{N} e_{p} - \hat W_{f} Z(X_{r}) - A_{N} X_{r} - X_{r} + X_{N} + A_{N} X_{N}. \end{aligned} \tag{28}$$
It is assumed that there exists a function $\alpha_{r}(t, \hat W_{f}, \hat W_{g})$ such that
$$\alpha_{r}(t, \hat W_{f}, \hat W_{g}) = -(\hat W_{g})^{+}\bigl(-\dot X_{r} + A_{N} X_{r} + \hat W_{f} Z(X_{r}) + X_{r} - X\bigr), \tag{29}$$
where $(\hat W_{g})^{+}$ is the pseudoinverse of $\hat W_{g}$, calculated as $(\hat W_{g})^{+} = (\hat W_{g}^{T}\hat W_{g})^{-1}\hat W_{g}^{T}$.
Adding and subtracting the term $\hat W_{g}\,\alpha_{r}(t,\hat W_{f},\hat W_{g})$ in (28) yields
$$\begin{aligned}\dot e_{p} ={}& A_{N} e_{p} + W_{f}\,Z(X_{N}) + W_{g}\,\tau(t) - \hat W_{g}\,\alpha_{r}(t,\hat W_{f},\hat W_{g})\\ &- A_{N}(X - X_{r}) - \hat W_{f} Z(X_{r}) + (A_{N} + I)(X_{N} - X_{r}). \end{aligned} \tag{30}$$
Defining
$$\tilde\tau = \tau - \alpha_{r}(t,\hat W_{f},\hat W_{g}) = \tau_{1} + \tau_{2}, \tag{31}$$
(30) is reduced to
$$\begin{aligned}\dot e_{p} ={}& A_{N} e_{p} + \tilde W_{f}\,Z(X_{N}) + \hat W_{f}\bigl(Z(X_{N}) - Z(X_{r})\bigr)\\ &+ \tilde W_{g}\,\tau(t) + \hat W_{g}\,\tilde\tau(t) - A_{N}(X - X_{r}) + (A_{N} + I)(X_{N} - X_{r}). \end{aligned} \tag{32}$$
Adding and subtracting the terms $Z(X)$ and $X$ in (32) gives
$$\begin{aligned}\dot e_{p} ={}& A_{N} e_{p} + \tilde W_{f}\,Z(X_{N}) + \hat W_{f}\bigl(Z(X_{N}) - Z(X) + Z(X) - Z(X_{r})\bigr)\\ &+ \tilde W_{g}\,\tau(t) + \hat W_{g}\,\tilde\tau(t) - A_{N}(X - X_{r}) + (A_{N} + I)(X_{N} - X + X - X_{r}). \end{aligned} \tag{33}$$
[Figure 5: The two subnets RN1 and RN2 included in the dynamic neural network. RN1 maps its inputs $x_{N1},\ldots,x_{N2n}$ through $\varphi(\cdot)$ and the high-order connections $Z(\cdot)$, weighted by $W_{fij}$, to the outputs $c_{1},\ldots,c_{2n}$; RN2 maps its inputs $\tau_{1},\ldots,\tau_{n}$ through $\phi(\cdot)$, weighted by $W_{gij}$, to the outputs $d_{1},\ldots,d_{2n}$.]
Then, by defining
$$\tau_{1} = (\hat W_{g})^{+}\Bigl(-\hat W_{f}\bigl(Z(X_{N}) - Z(X)\bigr) - (A_{N} + I)(X_{N} - X)\Bigr), \tag{34}$$
equation (33) is reduced to
$$\dot e_{p} = (A_{N} + I)\,e_{p} + \tilde W_{f}\,Z(X_{N}) + \hat W_{f}\bigl(Z(X) - Z(X_{r})\bigr) + \tilde W_{g}\,\tau(t) + \hat W_{g}\,\tau_{2}(t). \tag{35}$$
The tracking problem is thus reduced to a stabilization problem for the error dynamics defined in (35).
The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is
$$V(e_{p}, \tilde W_{f}, \tilde W_{g}) = \frac{1}{2}\,\|e_{p}\|^{2} + \frac{1}{2}\operatorname{tr}\bigl\{\tilde W_{f}^{T}\,\Gamma^{-1}\,\tilde W_{f}\bigr\} + \frac{1}{2}\operatorname{tr}\bigl\{\tilde W_{g}^{T}\,\Gamma_{g}^{-1}\,\tilde W_{g}\bigr\}, \tag{36}$$
where $\Gamma = \operatorname{diag}\{\gamma_{1},\ldots,\gamma_{L}\}$ and $\Gamma_{g} = \operatorname{diag}\{\gamma_{g1},\ldots,\gamma_{gn}\}$ are symmetric positive definite diagonal matrices.
Let $\phi_{Z} = Z(X) - Z(X_{r})$ and let $L_{\phi Z}$ be its Lipschitz constant.
The robotic system is asymptotically stable if the following conditions, which guarantee $\dot V(e_{p},\tilde W_{f},\tilde W_{g}) < 0$, are satisfied.
(i) The control law $\tau_{2}$ is
$$\tau_{2} = -\mu\,(\hat W_{g})^{+}\bigl(1 + L_{\phi Z}^{2}\,\|\hat W_{f}\|^{2}\bigr)\,e_{p}, \tag{37}$$
optimal with respect to the following cost [18]:
$$J(\tau_{2}) = \lim_{t\to\infty}\Bigl[2\beta V + \int_{0}^{t}\bigl(l(e_{p},\tilde W_{f},\tilde W_{g}) + \tau_{2}^{T}\,R(e_{p},\tilde W_{f},\tilde W_{g})\,\tau_{2}\bigr)\,dt\Bigr]. \tag{38}$$
(ii) The neural network weight tuning algorithms are
$$\dot{\hat w}_{fij} = -e_{pi}\,\gamma_{j}\,Z_{j}(X)\ \text{(RN1 weights tuning algorithm)},\qquad \dot{\hat w}_{gij} = -e_{pi}\,\gamma_{gj}\,\tau_{j}(t)\ \text{(RN2 weights tuning algorithm)}. \tag{39}$$
(iii) The parameter of the neural network state matrix satisfies $\lambda_{i} > 1$.
(iv) The parameter which manages the control law $\tau_{2}$ satisfies $\mu > 1$.
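Conditions (i) and (ii) can be sketched as follows. This is a hedged illustration with the Section 5 parameter values as defaults; the Euler discretization, the dimensions (2n = 4, L = 13, n = 2), and the variable names are assumptions, and `Gamma`/`Gamma_g` are the diagonal gain matrices of (36).

```python
import numpy as np

def tau_2(e_p, Wf_hat, Wg_hat, mu=1000.0, L_phiZ=0.02):
    """Stabilizing control law (37):
    tau_2 = -mu (Wg_hat)^+ (1 + L_phiZ^2 ||Wf_hat||^2) e_p."""
    Wg_pinv = np.linalg.pinv(Wg_hat)         # (Wg^T Wg)^{-1} Wg^T
    gain = 1.0 + (L_phiZ ** 2) * np.linalg.norm(Wf_hat, "fro") ** 2
    return -mu * gain * (Wg_pinv @ e_p)

def weight_update_step(Wf_hat, Wg_hat, e_p, Z_X, tau, Gamma, Gamma_g, dt=1e-3):
    """Euler step of the tuning laws (39): each entry follows
    wf_ij_dot = -e_pi gamma_j Z_j and wg_ij_dot = -e_pi gamma_gj tau_j."""
    Wf_dot = -np.outer(e_p, Gamma @ Z_X)     # Gamma diagonal: gamma_j * Z_j
    Wg_dot = -np.outer(e_p, Gamma_g @ tau)
    return Wf_hat + dt * Wf_dot, Wg_hat + dt * Wg_dot

# Toy usage with a full-column-rank Wg_hat.
e_p = 0.1 * np.ones(4)
Wf0 = np.zeros((4, 13))
Wg0 = np.vstack([np.zeros((2, 2)), np.eye(2)])
t2 = tau_2(e_p, Wf0, Wg0)
Wf1, Wg1 = weight_update_step(Wf0, Wg0, e_p, np.zeros(13), t2,
                              0.01 * np.eye(13), 0.0001 * np.eye(2))
```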
5. Simulation Results and Comparative Study on a Two-Link Robot

In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory tracking problem for a robot manipulator model is considered. The dynamics of a 2-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written as
$$J(\theta)\,\ddot\theta + H(\theta,\dot\theta) + G(\theta) + F(\dot\theta) + \tau_{d}(t) = D\,\tau(t), \tag{40}$$
with
$$J(\theta) = \begin{bmatrix}I_{1} + m_{1}k_{1}^{2} + m_{2}l_{1}^{2} & m_{2}l_{1}k_{2}\cos(\theta_{1}-\theta_{2})\\ m_{2}l_{1}k_{2}\cos(\theta_{1}-\theta_{2}) & I_{2} + m_{2}k_{2}^{2}\end{bmatrix},$$
$$H(\theta,\dot\theta) = h(\theta,\dot\theta) = \begin{bmatrix}0 & m_{2}l_{1}k_{2}\sin(\theta_{1}-\theta_{2})\\ -m_{2}l_{1}k_{2}\sin(\theta_{1}-\theta_{2}) & 0\end{bmatrix}\begin{bmatrix}\dot\theta_{1}^{2}\\ \dot\theta_{2}^{2}\end{bmatrix},$$
$$G(\theta) = g\begin{bmatrix}(m_{2}l_{1} + m_{1}k_{1})\cos\theta_{1}\\ m_{2}k_{2}\cos\theta_{2}\end{bmatrix},\qquad D = \begin{bmatrix}1 & -1\\ 0 & 1\end{bmatrix},\qquad F(\dot\theta) = \operatorname{diag}(2, 2)\,\dot\theta + 1.5\,\operatorname{sign}(\dot\theta). \tag{41}$$
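The plant (40)-(41) with the Table 1 parameters can be coded directly. This is a sketch for reproducing the simulated arm, not the authors' code; the joint accelerations are obtained by solving (40) for $\ddot\theta$, and the function names are illustrative.

```python
import numpy as np

# Table 1 parameters of the 2-link rigid robot model.
l1, l2 = 0.25, 0.16
m1, m2 = 9.5, 5.0
k1, k2 = 0.125, 0.08
I1, I2 = 0.0043, 0.0061
g = 9.8

D = np.array([[1.0, -1.0],
              [0.0,  1.0]])

def J_mat(th):
    c = np.cos(th[0] - th[1])
    return np.array([[I1 + m1 * k1**2 + m2 * l1**2, m2 * l1 * k2 * c],
                     [m2 * l1 * k2 * c,             I2 + m2 * k2**2]])

def H_vec(th, thd):
    s = m2 * l1 * k2 * np.sin(th[0] - th[1])
    return np.array([[0.0, s], [-s, 0.0]]) @ (thd ** 2)

def G_vec(th):
    return g * np.array([(m2 * l1 + m1 * k1) * np.cos(th[0]),
                         m2 * k2 * np.cos(th[1])])

def F_vec(thd):
    return 2.0 * thd + 1.5 * np.sign(thd)   # viscous + Coulomb friction

def theta_ddot(th, thd, tau, tau_d):
    """Solve (40) for the joint accelerations."""
    return np.linalg.solve(J_mat(th), D @ tau - H_vec(th, thd)
                           - G_vec(th) - F_vec(thd) - tau_d)

# Evaluate at the initial configuration of (42).
th0 = np.radians([20.0, 20.0])
acc0 = theta_ddot(th0, np.zeros(2), np.zeros(2), np.zeros(2))
```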
The robot model parameters are shown in Table 1. The simulations of the variations of the positions and torques exerted at each of the two joints, as well as of the weights of the neural networks, were carried out over a period of 10 seconds.
The initial conditions are selected as
$$\theta(0) = [20\;\; 20]^{T}\,\text{deg},\qquad \dot\theta(0) = [0\;\; 0]^{T}. \tag{42}$$
The reference signal is
$$\theta_{d}(t) = [45\;\; 45]^{T}\,\text{deg},\qquad \dot\theta_{d}(t) = [0\;\; 0]^{T}. \tag{43}$$
The external disturbances are
$$\tau_{d}(t) = \begin{cases}[0.1\;\; -0.1]^{T} & \text{if } t \le 1\,\text{s},\\ [-0.2\;\; 0.2]^{T} & \text{if } t > 1\,\text{s}.\end{cases} \tag{44}$$
In addition, the coefficients of viscous friction are varied by an error of 10% at time $t = 1$ s, and the masses of the two bodies of the arm are varied by an error of 10% at time $t = 2$ s.
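The disturbance schedule (44) and the 10% parameter variations can be sketched as simple time-dependent functions. This is a hedged reading of the setup: the variations are modeled as steps at the stated times, and the sign of the 10% error (here taken as +10%) is an assumption not stated in the text.

```python
import numpy as np

def tau_d(t):
    """External disturbance (44)."""
    return np.array([0.1, -0.1]) if t <= 1.0 else np.array([-0.2, 0.2])

def viscous_coeff(t, nominal=2.0):
    """Viscous friction coefficient with a 10% error from t = 1 s (sign assumed)."""
    return nominal * (1.10 if t >= 1.0 else 1.0)

def link_masses(t, m1=9.5, m2=5.0):
    """Link masses with a 10% error from t = 2 s (sign assumed)."""
    f = 1.10 if t >= 2.0 else 1.0
    return f * m1, f * m2
```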
The neural network controller parameters are selected for each of the two approaches as follows.
(i) For the first approach, the neural control for improvement of a classic proportional-derivative (PD) controller: after several simulation tests, we found suitable values for the initialization of the weights of the neural network and the various parameters as
$$K_{V} = 10\,I_{2\times 2},\quad k_{Z} = 0.1,\quad Z_{B} = 5,\quad F = 10\,I_{6\times 6},\quad G = 10\,I_{10\times 10},\quad k = 0.1. \tag{45}$$
The number of neurons in the hidden layer of the neural network is $N = 6$. The sigmoid activation function is $\sigma(x) = 1/(1 + e^{-x})$, and its Jacobian is $\sigma'(x) = \operatorname{diag}\{\sigma(x)\}\,[I - \operatorname{diag}\{\sigma(x)\}]$. No initial neural network training phase was needed; the neural network weights were arbitrarily initialized at zero in this simulation.
(ii) For the second approach, the neural control via a high-order dynamic neural network: the initial state vector of the neural network is $X_{N}(0) = [0 \cdots 0]^{T}$, and the number of high-order connections is $L = 13$.
The activation function of the subnet RN1 is the hyperbolic tangent, $\varphi(x) = (e^{2x} - 1)/(e^{2x} + 1)$. With $X_{N} = [x_{N1} \cdots x_{N4}]^{T}$, let
$$N_{1} = \tanh(x_{N1}),\quad N_{2} = \tanh(x_{N2}),\quad N_{3} = \tanh(x_{N3}),\quad N_{4} = \tanh(x_{N4}), \tag{46}$$
so that the high-order connections of the neural network are defined as
$$Z(X_{N}) = [N_{1}\;\, N_{2}\;\, N_{3}\;\, N_{4}\;\, N_{1}N_{2}\;\, N_{1}N_{3}\;\, N_{1}N_{4}\;\, N_{2}N_{3}\;\, N_{2}N_{4}\;\, N_{3}N_{4}\;\, N_{1}N_{2}N_{3}\;\, N_{1}N_{2}N_{4}\;\, N_{2}N_{3}N_{4}]^{T}. \tag{47}$$
Table 1: Parameters of the 2-link rigid robot model.

Length of link 1 (l1): 0.25 m
Length of link 2 (l2): 0.16 m
Mass of link 1 (m1): 9.5 kg
Mass of link 2 (m2): 5 kg
Position of center of gravity of link 1 (k1): 0.125 m
Position of center of gravity of link 2 (k2): 0.08 m
Inertia of link 1 (I1): 0.0043 kg·m²
Inertia of link 2 (I2): 0.0061 kg·m²
Gravitational acceleration (g): 9.8 N·kg⁻¹
The activation function of the subnet RN2 is the linear function $\phi(\tau(t)) = \tau(t)$.
For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary. After several simulation tests, we found suitable values for the initialization of the weights of the neural network and the various parameters as
$$\Gamma = 0.01\,I_{13\times 13},\quad \Gamma_{g} = 0.0001\,I_{2\times 2},\quad \lambda_{i} = 300,\quad A_{N} = -300\,I_{4\times 4},\quad L_{\phi Z} = 0.02,\quad \mu = 1000. \tag{48}$$
The suitable values for the initialization of the weights are shown in Figure 11.
5.1. Simulation Results of the First Approach of Control: The Neural Control for Improvement of a Classic Proportional-Derivative Controller. Each line that appears in the two diagrams of Figure 8 represents the variation of one weight value, $W_{ij}$ in the update of weight matrix $W$ or $M_{jk}$ in the update of weight matrix $M$.

Analysis of results: The analysis of the simulation results of the response of the system equipped with the NN controller for improvement of a classic controller PD, seen in Figure 6, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties. However, due to disturbances and robotic uncertainties, the peak in the torques response makes this control strategy not reliable (Figure 7).
5.2. Simulation Results of the Second Approach of Control: The Neural Control via a High-Order Dynamic Neural Network. For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary.
(i) The learning step: Each line that appears in the two diagrams of Figure 11 represents the variation of one weight value, $W_{fij}$
[Figure 6: Response of joint angles, system equipped with NN controller for improvement of a classic controller PD. (a) First approach, tracking performance, joint 1: joint angle 1 (deg) versus time (s), actual and desired. (b) First approach, tracking performance, joint 2: joint angle 2 (deg) versus time (s), actual and desired.]
[Figure 7: Response of torques, system equipped with NN controller for improvement of a classic controller PD. (a) First approach, torque 1 at joint 1 (N·m) versus time (s). (b) First approach, torque 2 at joint 2 (N·m) versus time (s).]
in the update of weight matrix $W_{f}$, or $W_{gij}$ in the update of weight matrix $W_{g}$.
Analysis of results: The severe peak in the torques and joint-angles response due to disturbances and robotic uncertainties, seen in Figures 9 and 10, was corrected thanks to the neural network adaptation. At the end of this learning step, the best weight values, seen in Figure 11, are obtained.

(ii) Final simulation results: The best weight values obtained at the end of the learning step are used as initial values for this step. Each line that appears in the two diagrams of Figure 14 represents the variation of one weight value, $W_{fij}$ in the update of weight matrix $W_{f}$ or $W_{gij}$ in the update of weight matrix $W_{g}$.

Analysis of results: The analysis of the simulation results of the response of the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties. The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values. After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torques response seen in Figure 13 (unlike the peak in the torques response seen in Figure 7).
[Figure 8: Weights update of the NN controller for improvement of a classic controller PD. (a) First approach, update of weights $W_{ij}$ versus time (s). (b) First approach, update of weights $M_{jk}$ versus time (s).]
[Figure 9: Response of joint angles, system equipped with NN controller via a high-order dynamic neural network (learning step). (a) Second approach, tracking performance, joint 1: joint angle 1 (deg) versus time (s), actual and desired. (b) Second approach, tracking performance, joint 2: joint angle 2 (deg) versus time (s), actual and desired.]
[Figure 10: Response of torques, system equipped with NN controller via a high-order dynamic neural network (learning step). (a) Second approach, torque 1 at joint 1 (N·m) versus time (s). (b) Second approach, torque 2 at joint 2 (N·m) versus time (s).]
[Figure 11: Weights update of the NN controller via a high-order dynamic neural network (learning step); the best weight values are reached at the end of the run. (a) Second approach, update of weights $W_{fij}$ versus time (s). (b) Second approach, update of weights $W_{gij}$ versus time (s).]
[Figure 12: Response of joint angles, system equipped with NN controller via a high-order dynamic neural network. (a) Second approach, tracking performance, joint 1: joint angle 1 (deg) versus time (s), actual and desired. (b) Second approach, tracking performance, joint 2: joint angle 2 (deg) versus time (s), actual and desired.]
In Figure 14, it is easy to see that the weights have indeed reached their best values during the learning step and that they remain nearly constant.
5.3. Comparative Study. The advantages and limitations of each approach of neural network control are presented in Table 2. The run-time performance of each approach is presented in Table 3.
6. Discussion and Conclusion

In this paper, two approaches of neural network robot control were selected, exposed, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic systems control. One of these two approaches uses a static neural network; the other uses a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to approximate, and thereby compensate, the nonlinearities and uncertainties in the robot dynamics, so it overcomes some limitations of the conventional PD controller and improves its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state, and then
[Figure 13: Response of torques, system equipped with NN controller via a high-order dynamic neural network. (a) Second approach, torque 1 at joint 1 (N·m) versus time (s). (b) Second approach, torque 2 at joint 2 (N·m) versus time (s).]
[Figure 14: Weights update of the NN controller via a high-order dynamic neural network. (a) Second approach, update of weights $W_{fij}$ versus time (s). (b) Second approach, update of weights $W_{gij}$ versus time (s).]
Table 2: Advantages and limitations of the neural network control approaches.

Control strategy: NN controller for improvement of a classic controller PD.
Advantages: (i) initialization of the NN weights is arbitrary; (ii) online learning of the NN; (iii) no offline training requirements; (iv) approximation by the neural network of the function which gathers the nonlinearities and uncertainties included in the robot dynamics; (v) overcoming of some limitations of the conventional PD controller; (vi) guaranteed stability in the presence of nonlinearities and uncertainties.
Limitations: no reliable response of the system, seen in Figure 7, due to the peak in the torques response in the face of disturbances and robotic uncertainties.

Control strategy: NN controller via a high-order dynamic neural network.
Advantages: (i) an exact online identification of the robot state thanks to the dynamic neural network; (ii) guaranteed stability in the presence of nonlinearities and uncertainties; (iii) reliable response of the system in the face of disturbances and robotic uncertainties (Figures 12 and 13).
Limitations: (i) initialization of the NN weights is not arbitrary; (ii) offline training requirements to find the suitable initial NN weight values.
ISRN Robotics 11
Table 3 Run-time performance of the neural network controlapproaches (on a time-domain 119905 = [0ndash10] sec)
Control strategy Elapsed time(in seconds)
NN controller for improvement of a classiccontroller PD 45262542
NN controller via a high-order dynamic neuralnetwork 44088512
it synthesizes the control law from this information recoveredby this identification
Simulation results under the framework MATLAB ofa two-link robot in 2D environment showed the perfor-mance differences between the two neural network controlapproaches studied Compared to the control with the staticneural network the neural control via dynamic neuralnetwork has significantly better tracking performance hasfaster response time and is more reliable to face disturbancesand robotic uncertainties However the indirect approachrequires offline training to find the suitable initial neuralnetwork weights values contrarily to the direct one in whichthe initialization of the neural network weights is arbitrary
Although it is clear that further experimentation needsto take place simulation results presented here indicate thatdynamic neural networks have demonstrated a very goodpotential for applications in closed loop control of robotmanipulators
References
[1] F L Lewis ldquoNeural network control of robot manipulatorsrdquoIEEE Expert vol 11 no 3 pp 64ndash75 1996
[2] W Zhang N Qi andH Yin ldquoPD control of robotmanipulatorswith uncertainties based on neural networkrdquo in Proceedings ofthe International Conference on Intelligent ComputationTechnol-ogy and Automation (ICICTA rsquo10) pp 884ndash888 May 2010
[3] Y H Kim F L Lewis and D M Dawson ldquoIntelligent optimalcontrol of robotic manipulators using neural networksrdquo Auto-matica vol 36 no 9 pp 1355ndash1364 2000
[4] Z Tang M Yang and Z Pei ldquoSelf-Adaptive PID controlstrategy based on RBF neural network for robot manipulatorrdquoin Proceedings of the 1st International Conference on PervasiveComputing Signal Processing and Applications (PCSPA rsquo10) pp932ndash935 September 2010
[5] D Popescu D Selisteanu and L Popescu ldquoNeural and adaptivecontrol of a rigid link manipulatorrdquo WSEAS Transactions onSystems vol 7 no 6 pp 632ndash641 2008
[6] M A Brdys and G J Kulawski ldquoStable adaptive control withrecurrent networksrdquo Automatica vol 36 no 1 pp 5ndash22 2000
[7] E N Sanchez L J Ricalde R Langari and D ShahmirzadildquoRollover control in heavy vehicles via recurrent high orderneural networksrdquo in Recurrent Neural Networks X Hu and PBalasubramaniam Eds Vienna Austria 2008
[8] F G Rossomando C Soria D Patino and R Carelli ldquoModelreference adaptive control for mobile robots in trajectorytracking using radial basis function neural networksrdquo LatinAmerican Applied Research vol 41 no 2 2010
[9] Z Pei Y Zhang and Z Tang ldquoModel reference adaptivePID control of hydraulic parallel robot based on RBF neural
networkrdquo in Proceedings of the IEEE International Conference onRobotics and Biomimetics (ROBIO rsquo07) pp 1383ndash1387 Decem-ber 2007
[10] M G Zhang andW H Li ldquoSingle neuron PIDmodel referenceadaptive control based on RBF neural networkrdquo in Proceedingsof the International Conference onMachine Learning and Cyber-netics pp 3021ndash3025 August 2006
[11] H X Li and H Deng ldquoAn approximate internal model-basedneural control for unknown nonlinear discrete processesrdquo IEEETransactions on Neural Networks vol 17 no 3 pp 659ndash6702006
[12] I Rivals and L Personnaz ldquoNonlinear internal model controlusing neural networks application to processes with delay anddesign issuesrdquo IEEETransactions onNeural Networks vol 11 no1 pp 80ndash90 2000
[13] C Kambhampati R J Craddock M Tham and K WarwickldquoInverse model control using recurrent networksrdquoMathematicsand Computers in Simulation vol 51 no 3-4 pp 181ndash199 2000
[14] MWang J Yu M Tan and Q Yang ldquoBack-propagation neuralnetwork based predictive control for biomimetic robotic fishrdquoin Proceedings of the 27th Chinese Control Conference (CCC rsquo08)pp 430ndash434 July 2008
[15] K Kara T E Missoum K E Hemsas and M L HadjilildquoControl of a robotic manipulator using neural network basedpredictive controlrdquo in Proceedings of the 17th IEEE InternationalConference on Electronics Circuits and Systems (ICECS rsquo10) pp1104ndash1107 December 2010
[16] S Ferrari and R F Stengel ldquoSmooth function approximationusing neural networksrdquo IEEE Transactions on Neural Networksvol 16 no 1 pp 24ndash38 2005
[17] E B Kosmatopoulos M A Christodoulou and P A IoannouldquoDynamical neural networks that ensure exponential identifi-cation error convergencerdquo Neural Networks vol 10 no 2 pp299ndash314 1997
[18] E N Sanchez J P Perez and L Ricalde ldquoRecurrent neuralcontrol for robot trajectory trackingrdquo in Proceedings of the 15thWorld Congress International Federation of Automatic ControlBarcelona Spain July 2002
International Journal of
AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Active and Passive Electronic Components
Control Scienceand Engineering
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
RotatingMachinery
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation httpwwwhindawicom
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
4 ISRN Robotics
Figure 4: The high-order dynamic neural network architecture.
$Z(X_N) = [z_1 \cdots z_L]^T \in \mathbb{R}^L$ is a nonlinear operator which defines the high-order connections; the $z_i$ are called high-order connections and $L$ is their number:
$$z_i = \prod_{j \in I_i} y_j^{d_j(i)}, \quad j = 1, \ldots, 2n, \; i = 1, \ldots, L,$$
where the $d_j(i)$ are positive integers and $y_j = \varphi(x_{Nj})$, $j = 1, \ldots, 2n$; $\varphi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN1, here selected as the hyperbolic tangent.
The outputs of RN1 are denoted by $C = [c_1 \cdots c_{2n}]^T$:
$$\mathrm{RN2}(\tau) = W_g \phi(\tau(t)) = W_g \tau(t). \tag{20}$$
$\phi(\cdot): \mathbb{R} \to \mathbb{R}$ is the activation function of RN2, here selected as the linear function; $W_g = [w_{g1} \cdots w_{g2n}]^T \in \mathbb{R}^{2n \times n}$ is the weight matrix of RN2. The outputs of RN2 are denoted by $D = [d_1 \cdots d_{2n}]^T$.
Let us denote $\hat{W}_f$ and $\hat{W}_g$ to be the estimated values of the unknown weight matrices $W_f$ and $W_g$, respectively.
The weight estimation errors are
$$\tilde{W}_f = W_f - \hat{W}_f, \quad \tilde{W}_g = W_g - \hat{W}_g. \tag{21}$$
The equation of this neural network dynamics is defined by
$$\dot{X}_N = A_N X_N + W_f Z(X_N) + W_g \tau(t). \tag{22}$$
The identification of the robot dynamics by the neural network is ensured by the following pole placement:
$$\dot{X}_N - \dot{X} = -(X_N - X). \tag{23}$$
Equation (23) is equivalent to
$$\dot{X} = A_N X_N + W_f Z(X_N) + W_g \tau(t) + (X_N - X). \tag{24}$$
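As a minimal sketch (in NumPy rather than the authors' MATLAB, with placeholder sizes and a stand-in high-order operator, all of which are illustrative assumptions), the model dynamics of (22) can be integrated with a forward-Euler step:

```python
import numpy as np

def identifier_step(x_n, tau, z_fn, A_N, Wf, Wg, dt):
    """One forward-Euler step of Eq. (22):
    dX_N/dt = A_N X_N + W_f Z(X_N) + W_g tau(t)."""
    x_dot = A_N @ x_n + Wf @ z_fn(x_n) + Wg @ tau
    return x_n + dt * x_dot

# Toy usage: 2n = 4 states, n = 2 torque inputs, L = 3 stand-in connections.
A_N = -300.0 * np.eye(4)       # Hurwitz state matrix (placeholder pole value)
Wf = np.zeros((4, 3))          # (2n x L) weight matrix, zeroed for the demo
Wg = np.zeros((4, 2))          # (2n x n) weight matrix, zeroed for the demo
z = lambda x: np.tanh(x)[:3]   # placeholder for the operator Z(.)
x_next = identifier_step(np.zeros(4), np.array([1.0, -1.0]), z, A_N, Wf, Wg, 1e-3)
```

With zeroed weights and a zero initial state the step leaves the state unchanged, which makes the Euler update easy to verify in isolation.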
4.2. Synthesis of the Control Law. It is desired to design a robust controller which enforces asymptotic stability of the tracking error between the system and the reference signal. The reference signal dynamics are
$$\dot{X}_r = f_r(X_r, \tau_r(t)), \tag{25}$$
where $X_r$ is the reference signal state vector, $\tau_r(t)$ is the desired torque vector, and $f_r(\cdot)$ is the vector field of the reference dynamics.
Let us denote the tracking error between the system and the reference signal by
$$e_p = X - X_r. \tag{26}$$
To ensure the desired dynamics, the asymptotic stability of the tracking error must be guaranteed.
The time derivative of (26) is
$$\dot{e}_p = A_N X_N + W_f Z(X_N) + W_g \tau(t) + (X_N - X) - \dot{X}_r. \tag{27}$$
Now the terms $\hat{W}_f Z(X_r)$, $A_N e_p$, $A_N X_r$, and $X_r$ are added and subtracted in (27), so that
$$\begin{aligned} \dot{e}_p = {} & A_N e_p + W_f Z(X_N) + W_g \tau(t) \\ & + \left( -\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X \right) \\ & - A_N e_p - \hat{W}_f Z(X_r) - A_N X_r - X_r + X_N + A_N X_N. \end{aligned} \tag{28}$$
It is assumed that there exists a function $\alpha_r(t, \hat{W}_f, \hat{W}_g)$ such that
$$\alpha_r(t, \hat{W}_f, \hat{W}_g) = -(\hat{W}_g)^+ \left( -\dot{X}_r + A_N X_r + \hat{W}_f Z(X_r) + X_r - X \right), \tag{29}$$
where $(\hat{W}_g)^+ = (\hat{W}_g^T \hat{W}_g)^{-1} \hat{W}_g^T$ is the pseudoinverse of $\hat{W}_g$.
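The left pseudoinverse used in (29) can be computed directly; a small NumPy sketch (the example matrix is an arbitrary illustration, and the formula requires full column rank):

```python
import numpy as np

def left_pinv(Wg):
    """Left pseudoinverse (Wg)^+ = (Wg^T Wg)^{-1} Wg^T of a tall,
    full-column-rank matrix (equivalent to np.linalg.pinv in that case)."""
    return np.linalg.solve(Wg.T @ Wg, Wg.T)

Wg = np.array([[1.0, 0.0],
               [0.0, 2.0],
               [1.0, 1.0],
               [0.0, 0.0]])   # a (2n x n) example with rank n
P = left_pinv(Wg)             # satisfies P @ Wg = I_n
```

The defining property $P \hat{W}_g = I_n$ is exactly what lets $\alpha_r$ cancel the bracketed term when multiplied by $\hat{W}_g$ in the derivation that follows.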
Then the term $\hat{W}_g \alpha_r(t, \hat{W}_f, \hat{W}_g)$ is added and subtracted in (28), so that we obtain
$$\begin{aligned} \dot{e}_p = {} & A_N e_p + W_f Z(X_N) + W_g \tau(t) - \hat{W}_g \alpha_r(t, \hat{W}_f, \hat{W}_g) \\ & - A_N (X - X_r) - \hat{W}_f Z(X_r) + (A_N + I)(X_N - X_r). \end{aligned} \tag{30}$$
Let us define
$$\tilde{\tau} = \tau - \alpha_r(t, \hat{W}_f, \hat{W}_g) = \tau_1 + \tau_2, \tag{31}$$
so that (30) reduces to
$$\begin{aligned} \dot{e}_p = {} & A_N e_p + \tilde{W}_f Z(X_N) + \hat{W}_f \left( Z(X_N) - Z(X_r) \right) \\ & + \tilde{W}_g \tau(t) + \hat{W}_g \tilde{\tau}(t) - A_N (X - X_r) \\ & + (A_N + I)(X_N - X_r). \end{aligned} \tag{32}$$
Then the terms $Z(X)$ and $X$ are added and subtracted in (32), so that we obtain
$$\begin{aligned} \dot{e}_p = {} & A_N e_p + \tilde{W}_f Z(X_N) \\ & + \hat{W}_f \left( Z(X_N) - Z(X) + Z(X) - Z(X_r) \right) \\ & + \tilde{W}_g \tau(t) + \hat{W}_g \tilde{\tau}(t) - A_N (X - X_r) \\ & + (A_N + I)(X_N - X + X - X_r). \end{aligned} \tag{33}$$
Figure 5: The two subnets RN1 and RN2 included in the dynamic neural network: RN1 maps the state inputs $x_{N1}, \ldots, x_{N2n}$ through $\varphi(\cdot)$ and the high-order connections $z_1, \ldots, z_L$ (weights $W_{fij}$) to outputs $c_1, \ldots, c_{2n}$; RN2 maps the torque inputs $\tau_1, \ldots, \tau_n$ through $\phi(\cdot)$ (weights $W_{gij}$) to outputs $d_1, \ldots, d_{2n}$.
Then, by defining
$$\tau_1 = (\hat{W}_g)^+ \left( -\hat{W}_f \left( Z(X_N) - Z(X) \right) - (A_N + I)(X_N - X) \right), \tag{34}$$
equation (33) reduces to
$$\begin{aligned} \dot{e}_p = {} & (A_N + I) e_p + \tilde{W}_f Z(X_N) + \hat{W}_f \left( Z(X) - Z(X_r) \right) \\ & + \tilde{W}_g \tau(t) + \hat{W}_g \tau_2(t). \end{aligned} \tag{35}$$
The tracking problem is thereby reduced to a stabilization problem for the error dynamics defined in (35).
The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is
$$V(e_p, \tilde{W}_f, \tilde{W}_g) = \frac{1}{2} \left\| e_p \right\|^2 + \frac{1}{2} \operatorname{tr}\{\tilde{W}_f^T \Gamma^{-1} \tilde{W}_f\} + \frac{1}{2} \operatorname{tr}\{\tilde{W}_g^T \Gamma_g^{-1} \tilde{W}_g\}, \tag{36}$$
where $\Gamma = \operatorname{diag}\{\gamma_1, \ldots, \gamma_L\}$ and $\Gamma_g = \operatorname{diag}\{\gamma_{g1}, \ldots, \gamma_{gn}\}$ are symmetric positive definite diagonal matrices.
Let us define the function $\phi_Z = Z(X) - Z(X_r)$ and let $L_{\phi_Z}$ be its Lipschitz constant.
The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(e_p, \tilde{W}_f, \tilde{W}_g) < 0$, are satisfied.
(i) The control law $\tau_2$ is
$$\tau_2 = -\mu (\hat{W}_g)^+ \left( 1 + L_{\phi_Z}^2 \|\hat{W}_f\|^2 \right) e_p, \tag{37}$$
optimal with respect to the following cost [18]:
$$J(\tilde{\tau}) = \lim_{t \to \infty} \left[ 2\beta V + \int_0^t \left( l(e_p, \tilde{W}_f, \tilde{W}_g) + \tau_2^T R(e_p, \tilde{W}_f, \tilde{W}_g) \tau_2 \right) dt \right]. \tag{38}$$
(ii) The neural network weight tuning algorithms are
$$\dot{w}_{fij} = -e_{pi} \gamma_j Z(X_j) \quad \text{(RN1 weight tuning)}, \qquad \dot{w}_{gij} = -e_{pi} \gamma_{gj} \tau_j(t) \quad \text{(RN2 weight tuning)}. \tag{39}$$
(iii) The parameter of the neural network state matrix satisfies $\lambda_i > 1$.
(iv) The parameter which manages the control law $\tau_2$ satisfies $\mu > 1$.
5. Simulation Results and Comparative Study on a Two-Link Robot
In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory tracking problem for a robot manipulator model is considered. The dynamics of a 2-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written as
$$J(\theta)\ddot{\theta} + H(\theta, \dot{\theta}) + G(\theta) + F(\dot{\theta}) + \tau_d(t) = D\tau(t), \tag{40}$$
with
$$J(\theta) = \begin{bmatrix} I_1 + m_1 k_1^2 + m_2 l_1^2 & m_2 l_1 k_2 \cos(\theta_1 - \theta_2) \\ m_2 l_1 k_2 \cos(\theta_1 - \theta_2) & I_2 + m_2 k_2^2 \end{bmatrix},$$
$$H(\theta, \dot{\theta}) = h(\theta, \dot{\theta}) = \begin{bmatrix} 0 & m_2 l_1 k_2 \sin(\theta_1 - \theta_2) \\ -m_2 l_1 k_2 \sin(\theta_1 - \theta_2) & 0 \end{bmatrix} \begin{bmatrix} \dot{\theta}_1^2 \\ \dot{\theta}_2^2 \end{bmatrix},$$
$$G(\theta) = g \begin{bmatrix} (m_2 l_1 + m_1 k_1) \cos\theta_1 \\ m_2 k_2 \cos\theta_2 \end{bmatrix}, \quad D = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix},$$
$$F(\dot{\theta}) = \operatorname{diag}(2, 2)\, \dot{\theta} + 1.5 \operatorname{sign}(\dot{\theta}). \tag{41}$$
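The forward dynamics implied by (40)-(41), solved for the joint accelerations, can be sketched as follows (a NumPy illustration using the Table 1 parameter values; the authors' MATLAB code is not available, so the structure here is an assumption faithful to the equations):

```python
import numpy as np

# Robot parameters from Table 1
l1, m1, m2 = 0.25, 9.5, 5.0
k1, k2 = 0.125, 0.08
I1, I2, g = 0.0043, 0.0061, 9.8

def forward_dynamics(theta, theta_dot, tau, tau_d=np.zeros(2)):
    """theta_ddot = J(theta)^{-1} (D tau - H - G - F - tau_d), from Eq. (40)."""
    c = np.cos(theta[0] - theta[1])
    s = np.sin(theta[0] - theta[1])
    J = np.array([[I1 + m1 * k1**2 + m2 * l1**2, m2 * l1 * k2 * c],
                  [m2 * l1 * k2 * c,             I2 + m2 * k2**2]])
    H = np.array([[0.0,               m2 * l1 * k2 * s],
                  [-m2 * l1 * k2 * s, 0.0]]) @ theta_dot**2
    G = g * np.array([(m2 * l1 + m1 * k1) * np.cos(theta[0]),
                      m2 * k2 * np.cos(theta[1])])
    F = 2.0 * theta_dot + 1.5 * np.sign(theta_dot)   # viscous + Coulomb friction
    D = np.array([[1.0, -1.0],
                  [0.0,  1.0]])
    return np.linalg.solve(J, D @ tau - H - G - F - tau_d)
```

At rest with both links vertical (gravity torque zero) and no applied torque, the acceleration is zero, which is a quick sanity check of the term-by-term assembly.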
The robot model parameters are shown in Table 1.
The variations of the positions and torques exerted at each of the two joints, as well as of the neural network weights, were simulated over a period of 10 seconds.
The initial conditions are selected as follows:
$$\theta(0) = [20\ \ 20]^T, \quad \dot{\theta}(0) = [0\ \ 0]^T. \tag{42}$$
The reference signal is
$$\theta_d(t) = [45\ \ 45]^T, \quad \dot{\theta}_d(t) = [0\ \ 0]^T. \tag{43}$$
The external disturbances are
$$\tau_d(t) = \begin{cases} [0.1\ \ {-0.1}]^T & \text{if } t \le 1\ \text{s}, \\ [-0.2\ \ 0.2]^T & \text{if } t > 1\ \text{s}. \end{cases} \tag{44}$$
In addition, the coefficients of viscous friction are varied by an error of 10% at time $t = 1$ s, and the masses of the two bodies of the arm are varied by an error of 10% at time $t = 2$ s.
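The disturbance schedule of (44) and the 10% parameter variations can be sketched with two small helpers (hypothetical names, introduced only for illustration):

```python
import numpy as np

def disturbance(t):
    """Piecewise-constant external disturbance of Eq. (44)."""
    return np.array([0.1, -0.1]) if t <= 1.0 else np.array([-0.2, 0.2])

def perturbed(nominal, t, t_switch, error=0.10):
    """Apply a relative parameter error (e.g. 10%) from time t_switch on."""
    return nominal * (1.0 + error) if t >= t_switch else nominal
```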
The neural network controller parameters are selected for each of the two approaches as follows.
(i) For the first approach, neural control for improvement of a classic proportional-derivative (PD) controller: after several simulation tests, suitable values for the various parameters were found to be
$$K_v = 10 \times I_{2 \times 2}, \quad K_z = 0.1, \quad Z_B = 5, \quad F = 10 \times I_{6 \times 6}, \quad G = 10 \times I_{10 \times 10}, \quad K = 0.1. \tag{45}$$
The number of neurons in the hidden layer of the neural network is $N = 6$. The sigmoid activation function is $\sigma(x) = 1/(1 + e^{-x})$, and its Jacobian is $\sigma'(x) = \operatorname{diag}\{\sigma(x)\}\left[ I - \operatorname{diag}\{\sigma(x)\} \right]$. No initial neural network training phase was needed: the neural network weights were arbitrarily initialized at zero in this simulation.
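The sigmoid and its diagonal Jacobian used by the first approach can be sketched as:

```python
import numpy as np

def sigmoid(x):
    """Activation sigma(x) = 1 / (1 + exp(-x)), applied element-wise."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_jacobian(x):
    """Jacobian sigma'(x) = diag{sigma(x)} (I - diag{sigma(x)})."""
    s = sigmoid(x)
    return np.diag(s) @ (np.eye(x.size) - np.diag(s))
```

At $x = 0$ each sigmoid equals $0.5$, so the Jacobian is $0.25\,I$, a convenient spot check.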
(ii) For the second approach, neural control via a high-order dynamic neural network: the initial state vector of the neural network is $X_N = [0 \cdots 0]^T$ and the number of high-order connections is $L = 13$.
The activation function of the subnet RN1 is the hyperbolic tangent, $\varphi(X_N) = (e^{2X_N} - 1)/(e^{2X_N} + 1)$. Writing $X_N = [x_{N1} \cdots x_{N4}]^T$, let
$$N_1 = \tanh(x_{N1}), \quad N_2 = \tanh(x_{N2}), \quad N_3 = \tanh(x_{N3}), \quad N_4 = \tanh(x_{N4}), \tag{46}$$
so that the high-order connections of the neural network are defined as
$$\begin{aligned} Z(X_N) = [\, & N_1,\ N_2,\ N_3,\ N_4,\ N_1 N_2,\ N_1 N_3,\ N_1 N_4, \\ & N_2 N_3,\ N_2 N_4,\ N_3 N_4,\ N_1 N_2 N_3, \\ & N_1 N_2 N_4,\ N_2 N_3 N_4 \,]^T. \end{aligned} \tag{47}$$
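The 13 high-order connections listed in (47) can be generated as follows (a NumPy sketch; note the triple products are the specific subset $N_1N_2N_3$, $N_1N_2N_4$, $N_2N_3N_4$ given above, not all four possible triples):

```python
import numpy as np
from itertools import combinations

def z_highorder(x_n):
    """Z(X_N) of Eq. (47): the four tanh outputs N1..N4 of Eq. (46),
    their six pairwise products, and three selected triple products."""
    N = np.tanh(x_n)                                      # N1..N4
    pairs = [N[i] * N[j] for i, j in combinations(range(4), 2)]
    triples = [N[0] * N[1] * N[2], N[0] * N[1] * N[3], N[1] * N[2] * N[3]]
    return np.concatenate([N, pairs, triples])            # L = 13
```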
Table 1: Parameters of the 2-link rigid robot model.

Parameter | Designation | Value | Unit
Length of link 1 | $l_1$ | 0.25 | m
Length of link 2 | $l_2$ | 0.16 | m
Mass of link 1 | $m_1$ | 9.5 | kg
Mass of link 2 | $m_2$ | 5 | kg
Position of center of gravity of link 1 | $k_1$ | 0.125 | m
Position of center of gravity of link 2 | $k_2$ | 0.08 | m
Inertia of link 1 | $I_1$ | 0.0043 | kg·m²
Inertia of link 2 | $I_2$ | 0.0061 | kg·m²
Gravitational acceleration | $g$ | 9.8 | N·kg⁻¹
The activation function of the subnet RN2 is the linear function $\phi(\tau(t)) = \tau(t)$.
For this approach the initialization of the neural network weights is not arbitrary, and a training phase is necessary.
After several simulation tests, suitable values for the various parameters were found to be
$$\Gamma = 0.01 \times I_{13 \times 13}, \quad \Gamma_g = 0.0001 \times I_{2 \times 2}, \quad \lambda_i = 300, \quad A_N = -300\, I_{4 \times 4}, \quad L_{\phi_Z} = 0.02, \quad \mu = 1000. \tag{48}$$
The suitable initial values of the weights are shown in Figure 11.
5.1. Simulation Results of the First Approach of Control: Neural Control for Improvement of a Classic Proportional-Derivative Controller. Each line that appears in the two diagrams of Figure 8 represents the variation of one weight value, $W_{ij}$ in the update of weight matrix $W$ or $M_{jk}$ in the update of weight matrix $M$.
Analysis of Results. The analysis of the simulation results for the system equipped with the NN controller for improvement of a classic PD controller, seen in Figure 6, shows that this control law can maintain the stability of the system despite the presence of disturbances and robotic uncertainties.
However, due to disturbances and robotic uncertainties, the peak in the torque response (Figure 7) makes this control strategy unreliable.
5.2. Simulation Results of the Second Approach of Control: Neural Control via a High-Order Dynamic Neural Network. For this approach the initialization of the neural network weights is not arbitrary, and a training phase is necessary.
(i) The Learning Step. Each line that appears in the two diagrams of Figure 11 represents the variation of one weight value, $W_{fij}$
Figure 6: Response of joint angles for the system equipped with the NN controller for improvement of a classic PD controller. (a) First approach, tracking performance at joint 1; (b) first approach, tracking performance at joint 2 (actual versus desired).
Figure 7: Response of torques for the system equipped with the NN controller for improvement of a classic PD controller. (a) First approach, torque 1 at joint 1; (b) first approach, torque 2 at joint 2.
in the update of weight matrix $W_f$, or $W_{gij}$ in the update of weight matrix $W_g$.
Analysis of Results. The severe peaks in the torque and joint-angle responses due to disturbances and robotic uncertainties, seen in Figures 9 and 10, were corrected thanks to the neural network adaptation.
At the end of this learning step, the best weight values, seen in Figure 11, are obtained.
(ii) Final Simulation Results. The best weight values obtained at the end of the learning step are used as initial values for this step.
Each line that appears in the two diagrams of Figure 14 represents the variation of one weight value, $W_{fij}$ in the update of weight matrix $W_f$ or $W_{gij}$ in the update of weight matrix $W_g$.
Analysis of Results. The analysis of the simulation results for the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can maintain the stability of the system despite the presence of disturbances and robotic uncertainties.
The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values.
After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torque response seen in Figure 13 (unlike the peak in the torque response seen in Figure 7).
Figure 8: Weights update of the NN controller for improvement of a classic PD controller. (a) First approach, update of weight $W$; (b) first approach, update of weight $M$.
Figure 9: Response of joint angles for the system equipped with the NN controller via a high-order dynamic neural network (learning step). (a) Second approach, tracking performance at joint 1; (b) second approach, tracking performance at joint 2 (actual versus desired).
Figure 10: Response of torques for the system equipped with the NN controller via a high-order dynamic neural network (learning step). (a) Second approach, torque 1 at joint 1; (b) second approach, torque 2 at joint 2.
Figure 11: Weights update of the NN controller via a high-order dynamic neural network (learning step). (a) Second approach, update of weights $W_f$; (b) second approach, update of weights $W_g$ (best weight values).
Figure 12: Response of joint angles for the system equipped with the NN controller via a high-order dynamic neural network. (a) Second approach, tracking performance at joint 1; (b) second approach, tracking performance at joint 2 (actual versus desired).
Figure 14 shows that the weights indeed reached their best values during the learning step: they remain nearly constant.
5.3. Comparative Study. The advantages and limitations of each neural network control approach are presented in Table 2.
The run-time performance of each approach is presented in Table 3.
6. Discussion and Conclusion
In this paper, two approaches to neural network robot control were selected, exposed, and compared. The aim of this comparative study was to find the performance differences between static and dynamic neural networks in the control of robotic systems: one of the two approaches uses a static neural network, the other a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to approximate, and thereby compensate, the nonlinearities and uncertainties in the robot dynamics, so that it overcomes some limitations of the conventional PD controller and improves its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state and then
Figure 13: Response of torques for the system equipped with the NN controller via a high-order dynamic neural network. (a) Second approach, torque 1 at joint 1; (b) second approach, torque 2 at joint 2.
Figure 14: Weights update of the NN controller via a high-order dynamic neural network. (a) Second approach, update of weight $W_f$; (b) second approach, update of weight $W_g$.
Table 2: Advantages and limitations of the neural network control approaches.

NN controller for improvement of a classic PD controller.
Advantages: (i) initialization of the NN weights is arbitrary; (ii) online learning of the NN; (iii) no offline training requirements; (iv) approximation by the neural network of the function which gathers the nonlinearities and uncertainties included in the robot dynamics; (v) overcoming some limitations of the conventional PD controller; (vi) guaranteed stability in the presence of nonlinearities and uncertainties.
Limitations: no reliable response of the system, seen in Figure 7, due to the peak in the torque response when facing disturbances and robotic uncertainties.

NN controller via a high-order dynamic neural network.
Advantages: (i) an exact online identification of the robot state thanks to the dynamic neural network; (ii) guaranteed stability in the presence of nonlinearities and uncertainties; (iii) reliable response of the system facing disturbances and robotic uncertainties (Figures 12 and 13).
Limitations: (i) initialization of the NN weights is not arbitrary; (ii) offline training is required to find suitable initial NN weight values.
Table 3: Run-time performance of the neural network control approaches (over the time domain $t = [0, 10]$ s).

Control strategy | Elapsed time (s)
NN controller for improvement of a classic PD controller | 45.262542
NN controller via a high-order dynamic neural network | 44.088512
it synthesizes the control law from the information recovered by this identification.
Simulation results, obtained in MATLAB for a two-link robot in a 2D environment, showed the performance differences between the two neural network control approaches studied. Compared to the control with the static neural network, the neural control via the dynamic neural network has significantly better tracking performance, has a faster response time, and is more reliable in the face of disturbances and robotic uncertainties. However, the indirect approach requires offline training to find suitable initial neural network weight values, contrary to the direct one, in which the initialization of the neural network weights is arbitrary.
Although it is clear that further experimentation needs to take place, the simulation results presented here indicate that dynamic neural networks have demonstrated very good potential for applications in closed-loop control of robot manipulators.
References
[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64-75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884-888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355-1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932-935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632-641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5-22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383-1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021-3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659-670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80-90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181-199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430-434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits and Systems (ICECS '10), pp. 1104-1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24-38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299-314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress, International Federation of Automatic Control, Barcelona, Spain, July 2002.
Figure 5: The two subnets RN1 and RN2 included in the dynamic neural network (recoverable labels: RN1 takes the inputs $X_N$ through activations $\varphi(\cdot)$ and high-order connections $Z(\cdot)$ with weights $W_{f,ij}$; RN2 takes the torque inputs $\tau$ through activations $\phi(\cdot)$ with weights $W_{g,ij}$).
Then, by defining

$$\tau_1 = \hat{g}^{+}\left(-\hat{f}\left(Z(X_N)-Z(X)\right) - (A_N+I)(X_N-X)\right), \quad (34)$$

equation (33) is reduced to

$$\dot{e}_p = (A_N+I)\,e_p + \tilde{f}\,Z(X_N) + f\left(Z(X)-Z(X_r)\right) + \tilde{g}\,\tau(t) + g\,\tau_2(t). \quad (35)$$

Then the tracking problem is reduced to a stabilization problem of the error dynamics defined in (35).
The Control Lyapunov Function (CLF) proposed for the stabilization of the error dynamics is

$$V(e_p,\tilde{f},\tilde{g}) = \frac{1}{2}\left\|e_p\right\|^{2} + \frac{1}{2}\,\mathrm{tr}\{\tilde{f}^{T}\Gamma^{-1}\tilde{f}\} + \frac{1}{2}\,\mathrm{tr}\{\tilde{g}^{T}\Gamma_g^{-1}\tilde{g}\}, \quad (36)$$

where $\Gamma = \mathrm{diag}\{\gamma_1,\ldots,\gamma_L\}$ and $\Gamma_g = \mathrm{diag}\{\gamma_{g1},\ldots,\gamma_{gn}\}$ are symmetric positive definite diagonal matrices.

Let us define the function $\phi_Z = Z(X) - Z(X_r)$ and let $L_{\phi_Z}$ be its Lipschitz constant.

The robotic system is asymptotically stable if the following conditions, which guarantee $\dot{V}(e_p,\tilde{f},\tilde{g}) < 0$, are satisfied.
(i) The control law $\tau_2$ is

$$\tau_2 = -\mu\,\hat{g}^{+}\left(1 + L_{\phi_Z}^{2}\,\|\hat{f}\|^{2}\right)e_p, \quad (37)$$

optimal with respect to the following cost [18]:

$$J(\tau_2) = \lim_{t\to\infty}\left[2\beta V + \int_{0}^{t}\left(l(e_p,\tilde{f},\tilde{g}) + \tau_2^{T}\,R(e_p,\tilde{f},\tilde{g})\,\tau_2\right)dt\right]. \quad (38)$$

(ii) The neural network weight tuning algorithms are

$$\dot{w}_{f,ij} = -e_{p,i}\,\gamma_j\,Z_j(X) \quad \text{(RN1 weights tuning algorithm)},$$
$$\dot{w}_{g,ij} = -e_{p,i}\,\gamma_{g,j}\,\tau_j(t) \quad \text{(RN2 weights tuning algorithm)}. \quad (39)$$

(iii) The parameter of the neural network state matrix satisfies $\lambda_i > 1$.

(iv) The parameter which manages the control law $\tau_2$ satisfies $\mu > 1$.
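To make the adaptation mechanism concrete, the control law (37) and the tuning laws (39) can be sketched as one discrete-time update step. This is a minimal Python sketch, not the authors' MATLAB implementation; the function name `control_and_tuning` and all shapes (error `e_p` of size n, weight matrices `W_f` of size n×L and `W_g` of size n×n, regressor vector `Z`) are illustrative assumptions.

```python
import numpy as np

def control_and_tuning(e_p, W_f, W_g, Z, tau, g_hat_pinv,
                       gamma, gamma_g, L_phiZ, mu, dt):
    """One Euler step of the tuning laws (39) plus the control law (37).

    Illustrative sketch: e_p (n,), W_f (n, L), W_g (n, n), Z (L,), tau (n,);
    gamma and gamma_g hold the diagonal gains of Gamma and Gamma_g.
    """
    # Control law (37): tau2 = -mu * g_hat^+ * (1 + L_phiZ^2 * ||W_f||^2) * e_p
    tau2 = -mu * g_hat_pinv @ ((1.0 + L_phiZ**2 * np.linalg.norm(W_f)**2) * e_p)
    # RN1 tuning law: dw_f[i, j]/dt = -e_p[i] * gamma[j] * Z[j]
    W_f_dot = -np.outer(e_p, gamma * Z)
    # RN2 tuning law: dw_g[i, j]/dt = -e_p[i] * gamma_g[j] * tau[j]
    W_g_dot = -np.outer(e_p, gamma_g * tau)
    return tau2, W_f + dt * W_f_dot, W_g + dt * W_g_dot
```

Note that both tuning laws drive the weights only while the tracking error is nonzero, which is why the weights in Figure 14 settle once the error vanishes.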
5. Simulation Results and Comparative Study on a Two-Link Robot

In order to test the applicability and compare the performance of the two proposed neural control types, the trajectory-tracking problem for a robot manipulator model is considered. The dynamics of a 2-link rigid robot arm in a 2D environment, with friction and disturbance terms, can be written as

$$J(\theta)\,\ddot{\theta} + H(\theta,\dot{\theta}) + G(\theta) + F(\dot{\theta}) + \tau_d(t) = D\,\tau(t), \quad (40)$$
with

$$J(\theta) = \begin{bmatrix} I_1 + m_1 k_1^2 + m_2 l_1^2 & m_2 l_1 k_2 \cos(\theta_1-\theta_2) \\ m_2 l_1 k_2 \cos(\theta_1-\theta_2) & I_2 + m_2 k_2^2 \end{bmatrix},$$

$$H(\theta,\dot{\theta}) = h(\theta,\dot{\theta}) = \begin{bmatrix} 0 & m_2 l_1 k_2 \sin(\theta_1-\theta_2) \\ -m_2 l_1 k_2 \sin(\theta_1-\theta_2) & 0 \end{bmatrix}\begin{bmatrix}\dot{\theta}_1^{\,2}\\ \dot{\theta}_2^{\,2}\end{bmatrix},$$

$$G(\theta) = g\begin{bmatrix}(m_2 l_1 + m_1 k_1)\cos\theta_1 \\ m_2 k_2 \cos\theta_2\end{bmatrix},\qquad D = \begin{bmatrix}1 & -1\\ 0 & 1\end{bmatrix},$$

$$F(\dot{\theta}) = \mathrm{diag}(2,\,2)\,\dot{\theta} + 1.5\,\mathrm{sign}(\dot{\theta}). \quad (41)$$
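Model (40)-(41) can be evaluated numerically with the Table 1 parameters. The following Python sketch is illustrative only; the parameter values and the friction gains (2 and 1.5) are read from (41) and Table 1 and should be treated as assumptions rather than the authors' simulation code.

```python
import numpy as np

# Parameters from Table 1 (assumed values).
l1, l2 = 0.25, 0.16        # link lengths (m)
m1, m2 = 9.5, 5.0          # link masses (kg)
k1, k2 = 0.125, 0.08       # centers of gravity (m)
I1, I2 = 0.0043, 0.0061    # link inertias (kg.m^2)
g = 9.8                    # gravitational acceleration

def dynamics_terms(theta, theta_dot):
    """Evaluate the terms of model (40)-(41) at one joint state (a sketch)."""
    t1, t2 = theta
    c12, s12 = np.cos(t1 - t2), np.sin(t1 - t2)
    # Inertia matrix J(theta): symmetric, configuration dependent.
    J = np.array([[I1 + m1 * k1**2 + m2 * l1**2, m2 * l1 * k2 * c12],
                  [m2 * l1 * k2 * c12,           I2 + m2 * k2**2]])
    # Coriolis/centrifugal term H acting on the squared joint velocities.
    H = np.array([[0.0,               m2 * l1 * k2 * s12],
                  [-m2 * l1 * k2 * s12, 0.0]]) @ theta_dot**2
    # Gravity vector G(theta).
    G = g * np.array([(m2 * l1 + m1 * k1) * np.cos(t1),
                      m2 * k2 * np.cos(t2)])
    # Viscous plus Coulomb friction F(theta_dot).
    F = np.diag([2.0, 2.0]) @ theta_dot + 1.5 * np.sign(theta_dot)
    # Input coupling matrix D.
    D = np.array([[1.0, -1.0], [0.0, 1.0]])
    return J, H, G, F, D
```

Solving (40) for $\ddot{\theta}$ then only requires inverting $J(\theta)$, which is nonsingular for this arm.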
The robot model parameters are shown in Table 1. The simulations of the positions and torques exerted at each of the two joints, as well as of the weights of the neural network, were carried out over a period of 10 seconds.

The initial conditions are selected as follows:

$$\theta(0) = [20\ \ 20]^{T},\qquad \dot{\theta}(0) = [0\ \ 0]^{T}. \quad (42)$$

The reference signal is

$$\theta_d(t) = [45\ \ 45]^{T},\qquad \dot{\theta}_d(t) = [0\ \ 0]^{T}. \quad (43)$$

The external disturbances are

$$\tau_d(t) = \begin{cases} [0.1\ \ {-0.1}]^{T} & \text{if } t \le 1\ \text{s},\\ [{-0.2}\ \ 0.2]^{T} & \text{if } t > 1\ \text{s}. \end{cases} \quad (44)$$

The coefficients of viscous friction are varied by an error of 10% at time t = 1 s, and the masses of the two bodies of the arm are varied by an error of 10% at time t = 2 s.
The neural network controller parameters are selected for each of the two approaches as follows.

(i) For the first approach, neural control for improvement of a classic proportional-derivative (PD) controller: after several simulation tests, we found suitable values for the initialization of the weights of the neural network and the various parameters:

$$K_v = 10 \times I_{2\times 2},\quad K_z = 0.1,\quad Z_B = 5,\quad F = 10 \times I_{6\times 6},\quad G = 10 \times I_{10\times 10},\quad K = 0.1. \quad (45)$$
The number of neurons in the hidden layer of the neural network is N = 6. The sigmoid activation function is $\sigma(x) = 1/(1+e^{-x})$ and its Jacobian is $\sigma'(x) = \mathrm{diag}\{\sigma(x)\}\,[I - \mathrm{diag}\{\sigma(x)\}]$. No initial neural network training phase was needed; the neural network weights were arbitrarily initialized at zero in this simulation.
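The sigmoid and its Jacobian used by the first approach can be written compactly. A minimal Python sketch (function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    """Elementwise sigmoid sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_jacobian(x):
    """Jacobian of the elementwise sigmoid:
    sigma'(x) = diag{sigma(x)} @ (I - diag{sigma(x)})."""
    s = np.diag(sigmoid(x))
    return s @ (np.eye(len(x)) - s)
```

Because the sigmoid acts elementwise, the Jacobian is diagonal, with entries $\sigma(x_i)(1-\sigma(x_i))$.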
(ii) For the second approach, neural control via a high-order dynamic neural network: the initial state vector of the neural network is $X_N = [0 \cdots 0]^{T}$ and the number of high-order connections is L = 13.

The activation function of the subnet RN1 is the hyperbolic tangent $\varphi(X_N) = (e^{2X_N}-1)/(e^{2X_N}+1)$. Let $X_N = [x_{N1} \cdots x_{N4}]^{T}$ and

$$N_1 = \tanh(x_{N1}),\quad N_2 = \tanh(x_{N2}),\quad N_3 = \tanh(x_{N3}),\quad N_4 = \tanh(x_{N4}), \quad (46)$$

so that we define the high-order connections of the neural network as

$$Z(X_N) = [N_1\ \ N_2\ \ N_3\ \ N_4\ \ N_1 N_2\ \ N_1 N_3\ \ N_1 N_4\ \ N_2 N_3\ \ N_2 N_4\ \ N_3 N_4\ \ N_1 N_2 N_3\ \ N_1 N_2 N_4\ \ N_2 N_3 N_4]^{T}. \quad (47)$$
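The L = 13 high-order connections of (47) can be generated directly from the four tanh-filtered states. A minimal sketch; the choice of the three triple products follows the listing in (47):

```python
import numpy as np
from itertools import combinations

def high_order_terms(x_N):
    """Build the L = 13 high-order connections Z(X_N) of (47):
    4 single terms, all 6 pairwise products, and the 3 triple
    products N1N2N3, N1N2N4, N2N3N4 listed in (47)."""
    N = np.tanh(x_N)                                             # N1..N4
    singles = list(N)                                            # 4 terms
    pairs = [N[i] * N[j] for i, j in combinations(range(4), 2)]  # 6 terms
    triples = [N[0]*N[1]*N[2], N[0]*N[1]*N[3], N[1]*N[2]*N[3]]   # 3 terms
    return np.array(singles + pairs + triples)
```

The products of tanh terms give the network its "high-order" character: they let a linear-in-the-weights output represent cross-couplings between the states.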
Table 1: Parameters of the 2-link rigid robot model.

Parameter                               | Designation | Value  | Unit
Length of link 1                        | l1          | 0.25   | m
Length of link 2                        | l2          | 0.16   | m
Mass of link 1                          | m1          | 9.5    | kg
Mass of link 2                          | m2          | 5      | kg
Position of center of gravity of link 1 | k1          | 0.125  | m
Position of center of gravity of link 2 | k2          | 0.08   | m
Inertia of link 1                       | I1          | 0.0043 | kg·m²
Inertia of link 2                       | I2          | 0.0061 | kg·m²
Gravitational acceleration              | g           | 9.8    | N·kg⁻¹
The activation function of the subnet RN2 is the linear function $\phi(\tau(t)) = \tau(t)$.

For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary. After several simulation tests, we found suitable values for the initialization of the weights of the neural network and the various parameters:

$$\Gamma = 0.01 \times I_{13\times 13},\quad \Gamma_g = 0.0001 \times I_{2\times 2},\quad \lambda_i = 300,\quad A_N = -300\,I_{4\times 4},\quad L_{\phi_Z} = 0.02,\quad \mu = 1000. \quad (48)$$

The suitable values for the initialization of the weights are shown in Figure 11.
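For orientation, one integration step of the dynamic identifier can be sketched with the parameters of (48). The model form $\dot{x}_N = A_N x_N + W_f Z(x_N) + W_g \tau$ is an assumption consistent with the RN1/RN2 structure of Figure 5, not an equation restated in this section, and all shapes are illustrative:

```python
import numpy as np

def identifier_step(x_N, W_f, W_g, tau, Z, A_N, dt):
    """One Euler step of an assumed recurrent high-order identifier
    x_N' = A_N x_N + W_f Z(x_N) + W_g tau (sketch only; shapes are
    illustrative: x_N state vector, W_f and W_g the RN1/RN2 weights)."""
    x_dot = A_N @ x_N + W_f @ Z(x_N) + W_g @ tau
    return x_N + dt * x_dot
```

With $A_N = -300\,I_{4\times 4}$ the linear part is strongly stable, which is what lets the weight adaptation shape the residual dynamics around a fast-decaying identification error.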
5.1. Simulation Results of the First Approach of Control: The Neural Control for Improvement of a Classic Proportional-Derivative Controller. Each line that appears in both diagrams of Figure 8 represents the variation of one weight value $W_{ij}$ in the update of weight matrix $W$, or $M_{jk}$ in the update of weight matrix $M$.

Analysis Results. The analysis of the simulation results of the response of the system equipped with the NN controller for improvement of a classic PD controller, seen in Figure 6, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties.

However, due to disturbances and robotic uncertainties, the peak in the torques response (Figure 7) makes this control strategy not reliable.

5.2. Simulation Results of the Second Approach of Control: The Neural Control via a High-Order Dynamic Neural Network. For this approach, the initialization of the neural network weights is not arbitrary, and a training phase is necessary.

(i) The Learning Step. Each line that appears in both diagrams of Figure 11 represents the variation of one weight value $W_{f,ij}$
Figure 6: Response of joint angles of the system equipped with the NN controller for improvement of a classic PD controller ((a) tracking performance of joint 1; (b) tracking performance of joint 2; actual versus desired, joint angles in degrees over 10 s).

Figure 7: Response of torques of the system equipped with the NN controller for improvement of a classic PD controller ((a) torque 1 at joint 1; (b) torque 2 at joint 2; torques in N·m over 10 s).
in the update of weight matrix $W_f$, or $W_{g,ij}$ in the update of weight matrix $W_g$.

Analysis Results. The severe peaks in the torques and joint-angles responses due to disturbances and robotic uncertainties, seen in Figures 9 and 10, were corrected thanks to the neural network adaptation. At the end of this learning step, the best weight values, seen in Figure 11, are obtained.

(ii) Final Simulation Results. The best weight values obtained at the end of the learning step are used as initial values for this step. Each line that appears in both diagrams of Figure 14 represents the variation of one weight value $W_{f,ij}$ in the update of weight matrix $W_f$, or $W_{g,ij}$ in the update of weight matrix $W_g$.

Analysis Results. The analysis of the simulation results of the response of the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can satisfy the stability of the system despite the presence of disturbances and robotic uncertainties.

The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values. After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torques response, seen in Figure 13 (unlike the peak in the torques response seen in Figure 7).
Figure 8: Weights update of the NN controller for improvement of a classic PD controller ((a) weights $W_{ij}$ of $W$; (b) weights $M_{jk}$ of $M$, over 10 s).
Figure 9: Response of joint angles of the system equipped with the NN controller via a high-order dynamic neural network (learning step; (a) joint 1, (b) joint 2; actual versus desired).

Figure 10: Response of torques of the system equipped with the NN controller via a high-order dynamic neural network (learning step; (a) torque 1 at joint 1, (b) torque 2 at joint 2).
Figure 11: Weights update of the NN controller via a high-order dynamic neural network (learning step; (a) weights $W_{f,ij}$, (b) weights $W_{g,ij}$; the best weight values are reached at the end of the run).
Figure 12: Response of joint angles of the system equipped with the NN controller via a high-order dynamic neural network ((a) joint 1; (b) joint 2; actual versus desired).
In Figure 14 it is easy to see that the weights have indeed achieved their best values during the learning step: they remain nearly constant.

5.3. Comparative Study. The advantages and limitations of each approach of neural network control are presented in Table 2. The run-time performance of each approach is presented in Table 3.
6. Discussion and Conclusion

In this paper, two approaches of neural network robot control were selected, exposed, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic systems control; one of the two approaches uses a static neural network, the other a dynamic neural network. The first approach is a direct neural control for improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to approximate, and thereby compensate, the nonlinearities and uncertainties in the robot dynamics, overcoming some limitations of the conventional PD controller and improving its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state, and then
Figure 13: Response of torques of the system equipped with the NN controller via a high-order dynamic neural network ((a) torque 1 at joint 1; (b) torque 2 at joint 2).
Figure 14: Weights update of the NN controller via a high-order dynamic neural network ((a) weights $W_{f,ij}$; (b) weights $W_{g,ij}$).
Table 2: Advantages and limitations of the neural network control approaches.

NN controller for improvement of a classic controller PD.
Advantages:
(i) Initialization of the NN weights is arbitrary.
(ii) Online learning of the NN.
(iii) No offline training requirements.
(iv) Approximation by the neural network of the function which gathers the nonlinearities and uncertainties included in the robot dynamics.
(v) Overcomes some limitations of the conventional PD controller.
(vi) Guaranteed stability in the presence of nonlinearities and uncertainties.
Limitations:
No reliable response of the system, seen in Figure 7, due to the peak in the torques response in the face of disturbances and robotic uncertainties.

NN controller via a high-order dynamic neural network.
Advantages:
(i) An exact online identification of the robot state thanks to the dynamic neural network.
(ii) Guaranteed stability in the presence of nonlinearities and uncertainties.
(iii) Reliable response of the system in the face of disturbances and robotic uncertainties (Figures 12 and 13).
Limitations:
(i) Initialization of the NN weights is not arbitrary.
(ii) Offline training is required to find the suitable initial NN weight values.
Table 3: Run-time performance of the neural network control approaches (on a time domain t = [0–10] s).

Control strategy                                         | Elapsed time (in seconds)
NN controller for improvement of a classic controller PD | 45.262542
NN controller via a high-order dynamic neural network    | 44.088512
it synthesizes the control law from the information recovered by this identification.

Simulation results, obtained under the MATLAB framework for a two-link robot in a 2D environment, showed the performance differences between the two neural network control approaches studied. Compared to the control with the static neural network, the neural control via the dynamic neural network has significantly better tracking performance, has faster response time, and is more reliable in the face of disturbances and robotic uncertainties. However, the indirect approach requires offline training to find the suitable initial neural network weight values, contrary to the direct one, in which the initialization of the neural network weights is arbitrary.

Although it is clear that further experimentation needs to take place, the simulation results presented here indicate that dynamic neural networks have demonstrated very good potential for applications in closed-loop control of robot manipulators.
References

[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64–75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884–888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355–1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932–935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632–641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5–22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383–1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021–3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659–670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181–199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430–434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits and Systems (ICECS '10), pp. 1104–1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299–314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress, International Federation of Automatic Control, Barcelona, Spain, July 2002.
6 ISRN Robotics
The robot model parameters are shown in Table 1The simulations of the variation of positions and torques
exerted at each of the two joints as well as the weights of theneural network were carried out over a period of 10 seconds
The initial conditions are selected as follows
120579 (0) = [20 20]119879 (0) = [0 0]
119879 (42)
The reference signal is
120579119889 (119905) = [45 45]119879 119889 (119905) = [0 0]
119879 (43)
The external disturbances are[01 minus01] if 119905 le 1sec
[minus02 02] if 119905 gt 1sec(44)
A variation in the coefficients of viscous friction by anerror of 10 at time 119905 = 1 sec
A variation in the masses of the two bodies of the arm byan error of 10 at time 119905 = 2 sec
Theneural network controller parameters are selected foreach of the two approaches as follows
(i) For the first approach neural control for improve-ment of a classic controller proportional-derivative(PD) we have the following
After several simulation tests we have found suitablevalues for the initialization of the weights of the neural net-work and the various parameters as follows
Kv = 10 times 1198682 times 2 Kz = 01
ZB = 5 F = 10 times 1198686 times 6
G = 10 times 11986810 times 10 K = 01
(45)
The number of neurons in the hidden layer of neural networkisN = 6The activation function sigmoid is120590(119909) = 1(1+119890minus119909)and its Jacobian is 1205901015840(119909) = diag120590(119909) times [119868 minus diag120590(119909)]No initial neural network training phase was needed Theneural network weights were arbitrarily initialized at zero inthis simulation
(ii) For the second approach neural control via a high-order dynamic neural network we have the following
Initial state vector of the neural network is 119883119873 = [0 sdot sdot sdot0]119879 and the number of high-order connections is L = 13The activation function of the subnet RN1 is the hyper-
bolic tangent 120593(119883119873) = (1198902119883119873 minus 1)(119890
2119883119873 + 1)Let us have119883119873 = [1199091198731 sdot sdot sdot 1199091198734]
119879
1198731 = tangh (1199091198731) 1198732 = tangh (1199091198732)
1198733 = tangh (1199091198733) 1198734 = tangh (1199091198734) (46)
So that we define the high-order connections of the neuralnetworks119885 (119883119873) = [11987311198732119873311987341198731 times 11987321198731 times 11987331198731 times 1198734
1198732 times 11987331198732 times 11987341198733 times 11987341198731 times 1198732 times 1198733
1198731 times 1198732 times 11987341198732 times 1198733 times 1198734]119879
(47)
Table 1 Parameters of 2-link rigid robot model
Parameters Designation Value UnitLength of link 1 1198971 025 mLength of link 2 1198972 016 mMass of link 1 1198981 95 KgMass of link 2 1198982 5 KgPosition of center of gravity of link 1 1198961 0125 mPosition of center of gravity of link 2 1198962 008 mInertia of link 1 1198681 00043 Kgsdotm2
Inertia of link 2 1198682 00061 Kgsdotm2
Gravitational acceleration 119892 98 NsdotKgminus1
The activation function of the subnet RN2 is the linearfunction 120601(120591(119905)) = 120591(119905)
For this approach the initialization of neural networkweights is not arbitrary and a training phase is necessary
After several simulation tests we have found suitablevalues for the initialization of the weights of the neuralnetwork and the various parameters as follows
Γ = 001 times 11986813 times 13 Γg = 00001 times 1198682 times 2
120582i=300 AN = minus300 1198684 times 4
L120601Z = 002 120583 = 1000
(48)
The suitable values for the initialization of the weights areshown in Figure 11
51 Simulation Results of the First Approach of Control TheNeural Control for Improvement of a Classic Controller Propor-tional Derivative Each line that appears in both diagrams ofFigure 8 represents one variation of weight value119882119894119895 in theupdate of weight matrix 119882 or 119872119895119896 in the update of weightmatrix119872
Analysis Results The analysis of the simulation results of theresponse system equipped with NN controller for improve-ment of a classic controller PD seen in Figure 6 shows thatthis control law can satisfy the stability of the system despitethe presence of disturbances and robotic uncertainties
However due to disturbances and robotic uncertaintiesthe peak in the torques response makes this control strategynot reliable Figure 7
52 Simulation Results of the Second Approach of Control TheNeural Control via a High-Order Dynamic Neural NetworkFor this approach the initialization of neural networkweightsis not arbitrary and a training phase is necessary
(i)TheLearning Step Each line that appears in both diagramsof Figure 11 represents one variation of weight value 119882119891119894119895
ISRN Robotics 7
0 1 2 3 4 5 6 7 8 9 10
0
5
10
15
20
25
30
35
40
45
Time (s)
Join
t ang
le 1
(deg
)
First approach tracking performance joint 1
ActualDesired
minus5
(a)
ActualDesired
0 1 2 3 4 5 6 7 8 9 1020
25
30
35
40
45
Time (s)
Join
t ang
le 2
(deg
)
First approach tracking performance joint 2
(b)
Figure 6 Response of joint angles system equipped with NN controller for improvement of a classic controller PD
0 1 2 3 4 5 6 7 8 9 100
5
10
15
20
25
30
35
Time (s)
First approach torque 1 at joint 1
Torq
ue 1
(Nmiddotm
)
(a)
0 1 2 3 4 5 6 7 8 9 10
0
1
2
3
4
5
6
Time (s)
First approach torque 2 at joint 2
minus2
minus1
Torq
ue 2
(Nmiddotm
)
(b)
Figure 7 Response of torques in joint angles system equipped with NN controller for improvement of a classic controller PD
in the update of weight matrix119882119891 or119882119892119894119895 in the update ofweight matrix119882119892
Analysis Results The rigorous peak in the torques and jointangles response due to disturbances and robotic uncertain-ties seen in Figures 9 and 10 was able to be corrected thanksto the neural network adaptation
At the end of this learning step the best weight valuesseen in Figure 11 are obtained
(ii)Final SimulationResultsThe best weights values obtainedin the end of the learning step are used as initial values for thisstep
Each line that appears in both diagrams of Figure 14represents one variation of weight value 119882119891119894119895 in the update
of weight matrix119882119891 or119882119892119894119895 in the update of weight matrix119882119892
Analysis Results. The analysis of the simulation results for the system equipped with the NN controller via a high-order dynamic neural network, seen in Figure 12, shows that this control law can ensure the stability of the system despite the presence of disturbances and robotic uncertainties.

The learning step is necessary to obtain the best weight values, seen in Figure 11, which represent the true initial weight values.

After the learning step, and despite the presence of disturbances and robotic uncertainties, no malfunction was identified in the torques response seen in Figure 13 (unlike the peak in the torques response seen in Figure 7).
Figure 8: Weights update of the NN controller for improvement of a classic PD controller (first approach; weights Wij and Mjk versus time in seconds).
Figure 9: Response of joint angles of the system equipped with the NN controller via a high-order dynamic neural network, learning step (joint angles 1 and 2, in degrees, versus time in seconds; actual against desired trajectories).
Figure 10: Response of torques in the joint-angles system equipped with the NN controller via a high-order dynamic neural network, learning step (torques 1 and 2, in N·m, versus time in seconds).
Figure 11: Weights update of the NN controller via a high-order dynamic neural network, learning step (weights Wf,ij and Wg,ij versus time in seconds; the best weight values are reached at the end of the run).
Figure 12: Response of joint angles of the system equipped with the NN controller via a high-order dynamic neural network (joint angles 1 and 2, in degrees, versus time in seconds; actual against desired trajectories).
In Figure 14, it is easy to see that the weights had indeed reached their best values during the learning step: they remain nearly constant.
5.3. Comparative Study. The advantages and limitations of each neural network control approach are presented in Table 2.
The run-time performance of each approach is presented in Table 3.
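Elapsed times of this kind are typically measured around the whole simulation loop (the analogue of MATLAB's tic/toc); a minimal Python sketch, with placeholder workloads standing in for the two controller simulations, might look like this:

```python
import time

def simulate_static_nn_controller():
    """Stand-in workload for the static-NN (PD improvement) simulation."""
    return sum(i * i for i in range(100_000))

def simulate_dynamic_nn_controller():
    """Stand-in workload for the high-order dynamic-NN simulation."""
    return sum(i * i for i in range(80_000))

def elapsed(fn):
    """Wall-clock time of one call, like tic; fn(); toc."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

t_static = elapsed(simulate_static_nn_controller)
t_dynamic = elapsed(simulate_dynamic_nn_controller)
```

A single wall-clock measurement of this sort is what a table like Table 3 reports; averaging several runs would make the comparison more robust.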
6 Discussion and Conclusion
In this paper, two approaches of neural network robot control were selected, exposed, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic systems control; one of the two approaches uses a static neural network, the other a dynamic neural network. The first approach is a direct neural control for the improvement of a classic proportional-derivative (PD) controller, proposed by Lewis [1]. It employs a static neural network to approximate, and thereby compensate, the nonlinearities and uncertainties in the robot dynamics, so it overcomes some limitations of the conventional PD controller and improves its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]. It employs the dynamic neural network for an exact online identification of the robot state, and then
Figure 13: Response of torques in the joint-angles system equipped with the NN controller via a high-order dynamic neural network (torques 1 and 2, in N·m, versus time in seconds).
Figure 14: Weights update of the NN controller via a high-order dynamic neural network (weights Wf,ij and Wg,ij versus time in seconds).
Table 2: Advantages and limitations of the neural network control approaches.

NN controller for improvement of a classic PD controller.
Advantages: (i) initialization of the NN weights is arbitrary; (ii) online learning of the NN; (iii) no offline training requirements; (iv) approximation, by the neural network, of the function that gathers the nonlinearities and uncertainties included in the robot dynamics; (v) overcoming some limitations of the conventional PD controller; (vi) guaranteed stability in the presence of nonlinearities and uncertainties.
Limitations: no reliable response of the system in the face of disturbances and robotic uncertainties, because of the peak in the torques response seen in Figure 7.

NN controller via a high-order dynamic neural network.
Advantages: (i) an exact online identification of the robot state thanks to the dynamic neural network; (ii) guaranteed stability in the presence of nonlinearities and uncertainties; (iii) reliable response of the system in the face of disturbances and robotic uncertainties (Figures 12 and 13).
Limitations: (i) initialization of the NN weights is not arbitrary; (ii) offline training required to find the suitable initial NN weight values.
Table 3: Run-time performance of the neural network control approaches (over the time domain t = [0–10] s).

NN controller for improvement of a classic PD controller: elapsed time 45.262542 seconds.
NN controller via a high-order dynamic neural network: elapsed time 44.088512 seconds.
it synthesizes the control law from the information recovered by this identification.
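A minimal sketch contrasting the two control structures follows; the gains, array shapes, tanh basis, and the identifier form xhat_dot = A xhat + Wf σ(x) + Wg u are assumed for illustration and do not reproduce the paper's exact equations.

```python
import numpy as np

sigma = np.tanh  # assumed activation basis

def direct_pd_nn_torque(e, edot, x, W, Kp=20.0, Kd=5.0):
    """Direct approach: PD feedback plus a static-NN compensation term."""
    return Kp * e + Kd * edot + W.T @ sigma(x)

def identifier_step(xhat, x, u, Wf, Wg, A, dt=1e-3):
    """Indirect approach: one Euler step of the dynamic-NN state identifier."""
    xhat_dot = A @ xhat + Wf @ sigma(x) + Wg @ u
    return xhat + dt * xhat_dot

n, m = 4, 2                                        # state and input dimensions
e = np.array([0.1, -0.05])                         # joint position errors
edot = np.zeros(m)                                 # joint velocity errors
x = np.array([0.1, -0.05, 0.0, 0.0])               # robot state
W = np.zeros((n, m))                               # untrained static NN
tau = direct_pd_nn_torque(e, edot, x, W)           # reduces to pure PD here

xhat = np.zeros(n)                                 # identifier state estimate
A = -np.eye(n)                                     # stable linear part
Wf, Wg = 0.1 * np.ones((n, n)), 0.1 * np.ones((n, m))
xhat = identifier_step(xhat, x, np.zeros(m), Wf, Wg, A)
```

With zero NN weights the direct law falls back to plain PD, which is why that approach needs no offline training; the indirect law instead builds its control from the identifier state, so its weights must start from trained values.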
Simulation results, obtained under MATLAB for a two-link robot in a 2D environment, showed the performance differences between the two neural network control approaches studied. Compared to the control with the static neural network, the neural control via the dynamic neural network has significantly better tracking performance, has a faster response time, and is more reliable in the face of disturbances and robotic uncertainties. However, the indirect approach requires offline training to find the suitable initial neural network weight values, contrary to the direct one, in which the initialization of the neural network weights is arbitrary.
Although it is clear that further experimentation needs to take place, the simulation results presented here indicate that dynamic neural networks have very good potential for applications in closed-loop control of robot manipulators.
References
[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64–75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884–888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355–1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932–935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632–641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5–22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383–1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021–3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659–670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181–199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430–434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits, and Systems (ICECS '10), pp. 1104–1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299–314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress of the International Federation of Automatic Control, Barcelona, Spain, July 2002.
[6] M A Brdys and G J Kulawski ldquoStable adaptive control withrecurrent networksrdquo Automatica vol 36 no 1 pp 5ndash22 2000
[7] E N Sanchez L J Ricalde R Langari and D ShahmirzadildquoRollover control in heavy vehicles via recurrent high orderneural networksrdquo in Recurrent Neural Networks X Hu and PBalasubramaniam Eds Vienna Austria 2008
[8] F G Rossomando C Soria D Patino and R Carelli ldquoModelreference adaptive control for mobile robots in trajectorytracking using radial basis function neural networksrdquo LatinAmerican Applied Research vol 41 no 2 2010
[9] Z Pei Y Zhang and Z Tang ldquoModel reference adaptivePID control of hydraulic parallel robot based on RBF neural
networkrdquo in Proceedings of the IEEE International Conference onRobotics and Biomimetics (ROBIO rsquo07) pp 1383ndash1387 Decem-ber 2007
[10] M G Zhang andW H Li ldquoSingle neuron PIDmodel referenceadaptive control based on RBF neural networkrdquo in Proceedingsof the International Conference onMachine Learning and Cyber-netics pp 3021ndash3025 August 2006
[11] H X Li and H Deng ldquoAn approximate internal model-basedneural control for unknown nonlinear discrete processesrdquo IEEETransactions on Neural Networks vol 17 no 3 pp 659ndash6702006
[12] I Rivals and L Personnaz ldquoNonlinear internal model controlusing neural networks application to processes with delay anddesign issuesrdquo IEEETransactions onNeural Networks vol 11 no1 pp 80ndash90 2000
[13] C Kambhampati R J Craddock M Tham and K WarwickldquoInverse model control using recurrent networksrdquoMathematicsand Computers in Simulation vol 51 no 3-4 pp 181ndash199 2000
[14] MWang J Yu M Tan and Q Yang ldquoBack-propagation neuralnetwork based predictive control for biomimetic robotic fishrdquoin Proceedings of the 27th Chinese Control Conference (CCC rsquo08)pp 430ndash434 July 2008
[15] K Kara T E Missoum K E Hemsas and M L HadjilildquoControl of a robotic manipulator using neural network basedpredictive controlrdquo in Proceedings of the 17th IEEE InternationalConference on Electronics Circuits and Systems (ICECS rsquo10) pp1104ndash1107 December 2010
[16] S Ferrari and R F Stengel ldquoSmooth function approximationusing neural networksrdquo IEEE Transactions on Neural Networksvol 16 no 1 pp 24ndash38 2005
[17] E B Kosmatopoulos M A Christodoulou and P A IoannouldquoDynamical neural networks that ensure exponential identifi-cation error convergencerdquo Neural Networks vol 10 no 2 pp299ndash314 1997
[18] E N Sanchez J P Perez and L Ricalde ldquoRecurrent neuralcontrol for robot trajectory trackingrdquo in Proceedings of the 15thWorld Congress International Federation of Automatic ControlBarcelona Spain July 2002
ISRN Robotics 9
Figure 11: Weights update of the NN controller via a high-order dynamic neural network (learning step). (a) Second approach, update of weights W_f; (b) second approach, update of weights W_g; both reach their best values over the 0–10 s learning run.
Figure 12: Response of joint angles of the system equipped with the NN controller via a high-order dynamic neural network. (a) Second approach, tracking performance of joint 1: actual versus desired joint angle 1 (deg); (b) second approach, tracking performance of joint 2: actual versus desired joint angle 2 (deg); time axis 0–10 s.
In Figure 14 it can be seen that the weights achieved their best values during the learning step and then remain nearly constant.
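This settling behavior can be checked numerically by monitoring the largest entry-wise weight increment over a trailing window of updates. Below is a minimal sketch; the tolerance, window length, and synthetic weight trajectory are illustrative assumptions, not values from the paper:

```python
import numpy as np

def weights_converged(weight_history, tol=1e-3, window=50):
    """True if no weight entry moved by more than `tol` between any two
    consecutive updates within the last `window` updates."""
    W = np.asarray(weight_history)              # shape: (steps, ...)
    if W.shape[0] < window + 1:
        return False                            # not enough history yet
    increments = np.abs(np.diff(W[-(window + 1):], axis=0))
    return bool(increments.max() < tol)

# Synthetic trajectory: weights rise during learning, then settle.
times = np.linspace(0.0, 10.0, 200)
history = [np.array([1.2, -0.4]) * (1.0 - np.exp(-t)) for t in times]

print(weights_converged(history[:60]))  # still learning -> False
print(weights_converged(history))       # settled -> True
```

A check like this can also serve as a stopping criterion for the offline learning step.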
5.3. Comparative Study. The advantages and limitations of each neural network control approach are presented in Table 2. The run-time performance of each approach is presented in Table 3.
6. Discussion and Conclusion

In this paper, two approaches to neural network robot control were selected, exposed, and compared. The aim of this comparative study is to find the performance differences between static and dynamic neural networks in robotic systems control. One of these two approaches uses a static neural network; the other uses a dynamic neural network. The first approach is a direct neural control for the improvement of a classic proportional derivative (PD) controller, proposed by Lewis [1]; it employs a static neural network to approximate the nonlinearities and uncertainties in the robot dynamics, so that the static neural network compensates them, thereby overcoming some limitations of the conventional PD controller and improving its accuracy. The second approach is an indirect neural control via a high-order dynamic neural network, proposed by Sanchez et al. [7]; it employs the dynamic neural network for an exact online identification of the robot state, and then
Figure 13: Response of torques of the system equipped with the NN controller via a high-order dynamic neural network. (a) Second approach, torque 1 at joint 1 (N·m); (b) second approach, torque 2 at joint 2 (N·m); time axis 0–10 s.
Figure 14: Weights update of the NN controller via a high-order dynamic neural network. (a) Second approach, update of weights W_f; (b) second approach, update of weights W_g; time axis 0–10 s.
Table 2: Advantages and limitations of the neural network control approaches.

NN controller for improvement of a classic PD controller.
Advantages: (i) initialization of the NN weights is arbitrary; (ii) online learning of the NN; (iii) no offline training requirements; (iv) approximation by the neural network of the function that gathers the nonlinearities and uncertainties included in the robot dynamics; (v) overcomes some limitations of the conventional PD controller; (vi) guaranteed stability in the presence of nonlinearities and uncertainties.
Limitations: no reliable response of the system in the face of disturbances and robotic uncertainties, seen in Figure 7 as the peak in the torque response.

NN controller via a high-order dynamic neural network.
Advantages: (i) exact online identification of the robot state thanks to the dynamic neural network; (ii) guaranteed stability in the presence of nonlinearities and uncertainties; (iii) reliable response of the system in the face of disturbances and robotic uncertainties (Figures 12 and 13).
Limitations: (i) initialization of the NN weights is not arbitrary; (ii) offline training is required to find suitable initial NN weight values.
Table 3: Run-time performance of the neural network control approaches (on the time domain t = [0, 10] s).

NN controller for improvement of a classic PD controller: elapsed time 45.262542 s.
NN controller via a high-order dynamic neural network: elapsed time 44.088512 s.
it synthesizes the control law from the information recovered by this identification.
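For concreteness, the direct scheme's control law can be sketched in a few lines: a PD term plus a static-NN compensation term, with the output-layer weights tuned online from the filtered tracking error. All gains, dimensions, and names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_joints, n_hidden = 2, 10
V = rng.standard_normal((2 * n_joints, n_hidden))  # fixed input-layer weights
W = np.zeros((n_hidden, n_joints))                 # output-layer weights, tuned online
Kp = np.diag([100.0, 100.0])                       # illustrative PD gains
Kd = np.diag([20.0, 20.0])
Lam = np.diag([5.0, 5.0])                          # filtered-error slope
F = 10.0                                           # learning rate

def control(q, dq, qd, dqd, dt):
    """One step of the direct scheme: tau = Kp e + Kd de + W^T sigma(x),
    with W tuned online from the filtered tracking error r = de + Lam e."""
    global W
    e, de = qd - q, dqd - dq
    r = de + Lam @ e
    sigma = np.tanh(V.T @ np.concatenate([q, dq]))  # static NN hidden layer
    tau = Kp @ e + Kd @ de + W.T @ sigma            # PD term + NN compensation
    W = W + dt * F * np.outer(sigma, r)             # Lyapunov-motivated update
    return tau

tau = control(np.zeros(2), np.zeros(2), np.array([0.5, 0.3]), np.zeros(2), dt=1e-3)
print(tau)  # [50. 30.]  (the NN term vanishes at the zero state)
```

In the indirect scheme, by contrast, a dynamic (recurrent) neural network first identifies the robot state online, and the control law is then synthesized from the identified model rather than from a direct approximation of the nonlinearities.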
Simulation results in MATLAB for a two-link robot in a 2D environment showed the performance differences between the two neural network control approaches studied. Compared to control with the static neural network, neural control via the dynamic neural network has significantly better tracking performance, has a faster response time, and is more reliable in the face of disturbances and robotic uncertainties. However, the indirect approach requires offline training to find suitable initial neural network weight values, unlike the direct one, in which the initialization of the neural network weights is arbitrary.
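Elapsed times like those in Table 3 can be collected by wrapping each complete simulation run in a wall-clock timer. The sketch below uses a placeholder workload in place of the real 0–10 s robot simulation; the function names are hypothetical:

```python
import time

def elapsed_seconds(run_simulation):
    """Wall-clock duration of one complete simulation run, in seconds."""
    start = time.perf_counter()
    run_simulation()
    return time.perf_counter() - start

def placeholder_run():
    # Stands in for integrating the closed-loop dynamics over t = 0 to 10 s.
    total = 0.0
    for k in range(200_000):
        total += k * 1e-6
    return total

print(f"elapsed: {elapsed_seconds(placeholder_run):.6f} s")  # machine-dependent
```

Because such figures are machine-dependent, they are only meaningful as a relative comparison between the two controllers measured on the same hardware.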
Although it is clear that further experimentation needs to take place, the simulation results presented here indicate that dynamic neural networks have very good potential for applications in closed-loop control of robot manipulators.
References

[1] F. L. Lewis, "Neural network control of robot manipulators," IEEE Expert, vol. 11, no. 3, pp. 64–75, 1996.
[2] W. Zhang, N. Qi, and H. Yin, "PD control of robot manipulators with uncertainties based on neural network," in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '10), pp. 884–888, May 2010.
[3] Y. H. Kim, F. L. Lewis, and D. M. Dawson, "Intelligent optimal control of robotic manipulators using neural networks," Automatica, vol. 36, no. 9, pp. 1355–1364, 2000.
[4] Z. Tang, M. Yang, and Z. Pei, "Self-adaptive PID control strategy based on RBF neural network for robot manipulator," in Proceedings of the 1st International Conference on Pervasive Computing, Signal Processing and Applications (PCSPA '10), pp. 932–935, September 2010.
[5] D. Popescu, D. Selisteanu, and L. Popescu, "Neural and adaptive control of a rigid link manipulator," WSEAS Transactions on Systems, vol. 7, no. 6, pp. 632–641, 2008.
[6] M. A. Brdys and G. J. Kulawski, "Stable adaptive control with recurrent networks," Automatica, vol. 36, no. 1, pp. 5–22, 2000.
[7] E. N. Sanchez, L. J. Ricalde, R. Langari, and D. Shahmirzadi, "Rollover control in heavy vehicles via recurrent high order neural networks," in Recurrent Neural Networks, X. Hu and P. Balasubramaniam, Eds., Vienna, Austria, 2008.
[8] F. G. Rossomando, C. Soria, D. Patino, and R. Carelli, "Model reference adaptive control for mobile robots in trajectory tracking using radial basis function neural networks," Latin American Applied Research, vol. 41, no. 2, 2010.
[9] Z. Pei, Y. Zhang, and Z. Tang, "Model reference adaptive PID control of hydraulic parallel robot based on RBF neural network," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1383–1387, December 2007.
[10] M. G. Zhang and W. H. Li, "Single neuron PID model reference adaptive control based on RBF neural network," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 3021–3025, August 2006.
[11] H. X. Li and H. Deng, "An approximate internal model-based neural control for unknown nonlinear discrete processes," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 659–670, 2006.
[12] I. Rivals and L. Personnaz, "Nonlinear internal model control using neural networks: application to processes with delay and design issues," IEEE Transactions on Neural Networks, vol. 11, no. 1, pp. 80–90, 2000.
[13] C. Kambhampati, R. J. Craddock, M. Tham, and K. Warwick, "Inverse model control using recurrent networks," Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 181–199, 2000.
[14] M. Wang, J. Yu, M. Tan, and Q. Yang, "Back-propagation neural network based predictive control for biomimetic robotic fish," in Proceedings of the 27th Chinese Control Conference (CCC '08), pp. 430–434, July 2008.
[15] K. Kara, T. E. Missoum, K. E. Hemsas, and M. L. Hadjili, "Control of a robotic manipulator using neural network based predictive control," in Proceedings of the 17th IEEE International Conference on Electronics, Circuits, and Systems (ICECS '10), pp. 1104–1107, December 2010.
[16] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[17] E. B. Kosmatopoulos, M. A. Christodoulou, and P. A. Ioannou, "Dynamical neural networks that ensure exponential identification error convergence," Neural Networks, vol. 10, no. 2, pp. 299–314, 1997.
[18] E. N. Sanchez, J. P. Perez, and L. Ricalde, "Recurrent neural control for robot trajectory tracking," in Proceedings of the 15th World Congress, International Federation of Automatic Control, Barcelona, Spain, July 2002.
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of