Nn Applied to a Tracking Control System


    COMPARISON OF TWO NEURAL NETWORK METHODS APPLIED TO A TRACKING CONTROL SYSTEM
    Gaston Lefranc, Senior Member, IEEE, and Beder Cisternas

    Department of Electrical Engineering, Catholic University of Valparaiso, Chile. Fax 56-32-212746. E-mail: glefranc@ais1.ucv.cl. P.O. Box 4059, Valparaiso, Chile.

    Abstract- This paper presents the comparison of the backpropagation neural network and the random search method applied to the tracking control of a direct current motor. The neural network control system uses a very simple scheme and it can be used in real time. The technique is based on an adaptive distributed architecture. The two alternatives are compared on the learning process and on the performance of the control system. The neural network is applied as a tracking controller, with the feedback error trajectories as inputs. The controller classifies the feedback error signals and generates the appropriate control action for the motor. The control system can follow any arbitrary trajectory, even when the trajectory is changed to one not used in the training process. The neural network is programmed in a personal computer. The motor speed is sensed with a tachometer, and the output of the neural network actuates on the field voltage of the motor.

    1. "TRODUCTIONA common industrial problem is to make theoutput of a gs te m to track a iven reference trajecrory. Foresample, to rive.servomotorsf7], to move robe) s joints 161to assemble equipment, etc., require tlie machine followsprescribed trajectory. To get a good trackin performance,the dynamics of controlled systems are us ua l6 simple (e.linear) and known so that the modern control olicy can kapplied. When the structure of the lant IS untnown or theparameter variation is excessive, ti e effectiveness of themodern control diminishes. For esample, when tlie

    environment chan ges widely aiid a fised controller setting isset, the performance of the system is not satisfactory. If i t isused a reasonably accurate model, lie coiitrol algorithmcould .be com putationally intensive that it becomesimpossible to implement it i n a real time controlenvironment.Th e output of the plant, in a tracking systeni, iscontrolled tryin to maintain nearly equal to a desiredreference input. Hn that situation, the output is said to trackthe reference input. In motor drive applications, the niotorshould have to follow a predetermined position or speedtrajectory, without causing excessive stresses to the entiresystem hardware and with no escessive inrush current.Different techniques are used in tracking control:variable structure 151, the sliding mode 141, self tuning andmodel reference adaptive coiltrols 181. The first twoteclini ues rcquire il valid niodcl of the plant beingcontrojed, but llicy are not robust i n [lie scnsc 111;it thecontroller is sciisitive 1 lar e parameter variation and noise.Adaptive controfs are inore effective i ncompensating the inlluence of the structured uncertainty of

    ilie plant, but i t is 1101 clear that it can overcome thatuncertainty. Additionally, the adaptive control schemesrequire ihe iidorrnation about the plant, and may notguara ntee the stability of the system in the resence ofunrnodeled dynamics [7]-[ 8]. The algorithms ogtained aremore complicated, needing escessive computation in real-time. Self tuning controller did not track the reference whenthe plant is, noillinear and must be redesigned if the plant hasto follow different references [9 ) .The application of iieural networks has a highcomputation rate provided, by garallelism, a great degreeof robustness due to the distri utcd representatio n, andthe ability of adaptation, learnin aiid tlie generalization toiinprobx performance [SI-[ IO]. %earning control systemsemploy neural network to learn the characteristics of inverse

    dynamics of controlled systems. Most of them use thedesired response and/or the plan1 output as inputs to theneural networks [ 111. Network training is bascd on-lineobservation of the inputs and oulputs of the plant. Thisoperation could take time, as the case of back-propagationmethod. A model learnin scheme using a simple dynamiciiiodel for the generalize8 learning of the ncural network(81, provides a n eficicrit procedure for learriin tlie lan tdynamics, in an off-line wa . The controllers ok ai ne 8canrespond esibly to unmoderated dynamics and unexpectedsituat ions, by using the learnin and adaptatio n ca abilities,different to the convenlionakcontrollers that i ths to beprogramined to respoiid to the environmental changes.This aper presents the co my ris on of twomethods b as el on neural networks app red to the speedtracking control of a D.C. motor. Tlie system uses asimple real-time control, using feedback error trajectoriesas inputs to the neural network trackin controller, instead oflearnin tlie inverse dynamics of the ant , to determine. thecon tro lkr that erierates the roper control signal to achievetlie desired perfoorinance of t i e speed o the motor.. Jt is notnecessary to idellilly or to learn the motor dynaniics. Th eiiieiliods compared are the backpropagalion neural iietworksand the method based on minimization by random searchteclirii ues. Tlie comparison of he two alternatives is donein t he y earning process and th e performance of the controlsystem.

    2. BACKPROPAGATION NEURAL NETWORK

    The neural networks used in this work are three-layer neural networks, with the error back-propagation learning algorithm [13] or the random optimization learning method of Matyas [3]. The network consists of input, hidden and output layers. Each layer contains processing elements with sigmoidal nonlinearities. It has been shown that this net can be used to model any continuous nonlinear transformation. A back-propagation neural network is used as a pattern classifier instead of acting


    as the inverse of the controlled plant. This network is feedforward, where each unit receives inputs only from the units in the preceding layer. The information that goes to the input layer units is recoded into an internal representation and the outputs are generated by that representation rather than by the inputs. The network converts the inputs according to the connection weights. These weights are adjusted during the learning process, to minimize the sum of the squared errors between the desired outputs and the network outputs. The errors are propagated back to assign credit to each connection weight.

    Fig. 1.- Neural network (input, hidden and output layers).

    A back-propagation neural network is shown in Fig. 1. The total input to unit r, in the hidden layer (r = j) or in the output layer (r = k), is

    net_r = Σ_s w_rs Y_s ,   r = k, j ;   s = j, i      (1)

    where i refers to the input layer, w_rs is the weight from the s-th unit to the r-th unit, and Y_s represents the output of unit s in the hidden and input layers. A sigmoidal nonlinearity is applied to each unit r to obtain the output Y_r:

    Y_r = f(net_r) = 1 / (1 + exp[-(net_r - θ_r) / θ_0])      (2)

    where θ_r serves as a threshold of unit r and θ_0 determines the slope of the activation function f. It is assumed that θ_0 = 1. Each layer communicates only with the successive layer, because there is no feedback. In the learning process, the network is presented a pair of patterns: an input pattern and a corresponding desired output pattern. Learning comprises changing the weights and thresholds to minimize the mean squared error between the actual outputs and the desired output patterns in a gradient descent manner. The activity of each unit is propagated forward through each layer of the network by using (1), (2). Then, the resulting output pattern is compared with the desired output pattern, and an error δ_k for each output unit is calculated as:

    δ_k = (t_k - Y_k) Y_k (1 - Y_k)      (3)

    where t_k is the desired output and Y_k is the actual output. The error at the output is back-propagated recursively to each lower layer as follows:

    δ_j = Y_j (1 - Y_j) Σ_k δ_k w_kj      (4)

    The backpropagation method uses the steepest descent method in the learning process of the weights of the neural network. In that process each step size is always a constant η, and the algorithm often converges to a local minimum, but this cannot be ensured theoretically. The value of each weight and threshold has to be incrementally adjusted proportionally to the contribution of each unit to the total error. The change in each weight and threshold is calculated as:

    Δw_rs(l+1) = η δ_r Y_s + α Δw_rs(l) ,   r = k, j ;   s = j, i      (5)

    where η controls the rate of learning, l denotes the number of times a set of input patterns has been presented to the network, and the parameter α determines the effect of previous weight changes on the current direction of movement in weight space. In this work, an initial range of the connection weights of the network between -1 and +1 has been determined, to reduce the number of iterations and to prevent the hidden units from acquiring identical weights during learning. The backpropagation uses a neural network with six neurons in each layer, because the words needed have six bits. The quantity of hidden neurons is chosen according to the processing time of the calculations. The same scheme is employed in the random optimization method for training, which requires weights of absolute value less than one hundred to ensure convergence with probability 1.
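One training step of Eqs. (1)-(5) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: θ_0 = 1 is assumed (so f'(net) = Y(1 - Y)), the thresholds are omitted for brevity, and the function names and parameter values are assumptions.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def backprop_step(x, t, W1, W2, dW1, dW2, eta=0.5, alpha=0.5):
    """One steepest-descent step for a two-weight-layer (6-6-6) network.

    eta is the learning rate and alpha the momentum parameter of Eq. (5).
    Returns the updated weight matrices and the latest weight changes.
    """
    y_h = sigmoid(W1 @ x)                               # Eqs. (1)-(2), hidden
    y_k = sigmoid(W2 @ y_h)                             # Eqs. (1)-(2), output
    delta_k = (t - y_k) * y_k * (1.0 - y_k)             # Eq. (3)
    delta_j = y_h * (1.0 - y_h) * (W2.T @ delta_k)      # Eq. (4)
    dW2 = eta * np.outer(delta_k, y_h) + alpha * dW2    # Eq. (5)
    dW1 = eta * np.outer(delta_j, x) + alpha * dW1
    return W1 + dW1, W2 + dW2, dW1, dW2

# Train on one 6-bit input/output pattern pair; weights start in [-1, +1].
rng = np.random.default_rng(1)
W1, W2 = rng.uniform(-1, 1, (6, 6)), rng.uniform(-1, 1, (6, 6))
dW1, dW2 = np.zeros((6, 6)), np.zeros((6, 6))
x, t = np.ones(6), np.array([0, 0, 1, 0, 0, 0], dtype=float)
for _ in range(300):
    W1, W2, dW1, dW2 = backprop_step(x, t, W1, W2, dW1, dW2)
err = np.mean((t - sigmoid(W2 @ sigmoid(W1 @ x))) ** 2)
```

Repeating the step drives the mean squared error of the presented pattern down, illustrating the gradient-descent behavior described above.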


    3.- THE MODIFIED RANDOM OPTIMIZATION METHOD OF MATYAS

    Backpropagation is one of the most widely used methods to adapt neural networks for pattern classification. However, an important limitation is that it sometimes falls into a local minimum of the error function. The random optimization method of Matyas and its modified algorithm are utilized to learn the weights and parameters in a neural network. This algorithm is used to find the global minimum of the error function of the neural network [1]. Given a function f from R^n to R and S a subset of R^n, the goal is to find a point x that minimizes f on S, or at least that yields an acceptable approximation of the infimum of f on S. The algorithm is based on the modified random optimization method proposed by Matyas [1]-[3]. Let the next equations represent the input/output

    relations of the neural network:

    s_ip(w) = f_i(z_ip(w))      (6)

    z_ip(w) = Σ_j w_ij y_jp(w)      (7)

    where s_ip(w) and y_jp(w) are the output of the i-th neuron of the s-th layer, and the j-th output from the (s-1)-th layer corresponding to the p-th input pattern, respectively. The weights of the neural network are w_ij (vector w), and f(·) is a nondecreasing smooth function. The algorithm of the modified random optimization method proposed by Matyas is the following:

    Step 1: Select the initial point, vector x(0) in S, and set k = 0. Let M be the maximum number of steps.

    Step 2: Generate a Gaussian random vector ξ(k). If x(k) + ξ(k) ∈ S, go to Step 3. Otherwise, go to Step 4.



    Step 3: If f(x(k) + ξ(k)) < f(x(k)), let x(k+1) = x(k) + ξ(k) and b(k+1) = 0.4 ξ(k) + 0.2 b(k).
    If f(x(k) + ξ(k)) ≥ f(x(k)) and f(x(k) - ξ(k)) < f(x(k)), let x(k+1) = x(k) - ξ(k) and b(k+1) = b(k) - 0.4 ξ(k).
    Otherwise, let x(k+1) = x(k) and b(k+1) = 0.5 b(k).

    Step 4: If k = M, stop the calculation. If k < M, let k = k + 1 and go to Step 2.
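Steps 1-4 can be sketched directly in NumPy. This is an illustrative sketch, not the authors' code: the standard deviation sigma of the Gaussian perturbation is an assumed parameter, and the constraint set S is taken as all of R^n, so no membership test is performed in Step 2.

```python
import numpy as np

def modified_random_optimization(f, x0, sigma=0.1, M=5000, seed=0):
    """Modified random optimization of Matyas (Steps 1-4).

    At each step a Gaussian vector xi(k), centered on the bias b(k), is
    generated; the reverse side x(k) - xi(k) is also tried when the
    forward step fails. Here S = R^n, so Step 2 never rejects a point.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)                     # Step 1: k = 0
    b = np.zeros_like(x)
    for _ in range(M):                                  # Step 4: at most M steps
        xi = b + sigma * rng.standard_normal(x.shape)   # Step 2
        if f(x + xi) < f(x):                            # Step 3: forward success
            x = x + xi
            b = 0.4 * xi + 0.2 * b
        elif f(x - xi) < f(x):                          # Step 3: reverse side
            x = x - xi
            b = b - 0.4 * xi
        else:                                           # Step 3: failure
            b = 0.5 * b
    return x

# Toy use: minimize a quadratic bowl starting from the point (1, 1, 1, 1).
x_min = modified_random_optimization(lambda v: float(np.sum(v ** 2)), np.ones(4))
```

Because only improving steps are accepted, the objective value decreases monotonically, and the adaptive bias b(k) keeps pushing the search in recently successful directions.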

    This method ensures convergence to a global minimum of the objective function, with probability 1, on a compact set. It calculates the value of the objective function at the reverse side, x(k) - ξ(k), when x(k) + ξ(k) fails to improve the current value of the objective function, and it exhibits faster convergence than the original Matyas method. The method exploits the information of the bias b(k), which is the center of the Gaussian random vector ξ(k) at the k-th step. The method is useful when the dimension of the variables becomes very large [2].

    The main objective is to find a weight vector w which gives small values of the total error function E(w), taken as the objective function to be minimized:


    E(w) = Σ_p E_p(w)

    Fig. 2.- Block diagram of the system.

    The modified random optimization method proposed by Matyas, applied to the learning of the weights of the neural network, has to consider the weight vector and the total error function as x and the objective function f(x), respectively. The algorithm converges with probability 1 on a compact set. However, the set of w is not necessarily compact, and these parameters could take arbitrarily large values. This means that the method ensures convergence if the calculations are confined to a compact region, for example, the region in which the absolute values of all components of the w vector are below 100. In this work, the neural network has three layers of six neurons each, which requires 72 weights. This method is used in a neural network applied to the speed tracking control of a DC motor, and it is compared to the backpropagation method.
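Confining the search to the compact region described above can be done by clipping each candidate weight vector before the error function is evaluated. The projection step below is an assumption of this illustration; the paper only states that the calculations are confined to such a region.

```python
import numpy as np

def clip_to_region(w, bound=100.0):
    """Project a candidate weight vector back into the compact region in
    which every component satisfies |w_ij| <= bound."""
    return np.clip(w, -bound, bound)

# Candidate steps x(k) + xi(k) are clipped before evaluating E(w), so the
# convergence-with-probability-1 argument on the compact set applies.
candidate = clip_to_region(np.array([150.0, -3.0, 7.0, -250.0]))
```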

    4.- NEURAL NETWORK TRACKING CONTROL SYSTEM

    The block diagram of the neural network tracking control system is shown in Fig. 2. The plant is a DC motor with speed controlled by armature. The speed of the motor is sensed by a tachometer that sends the information to a computer through an Input Analog Module. This Module digitizes the signal, normalized to ±5 volts. The word received by the computer is compared with the reference

    word, determining the error, which is the input to the neural network. The neural network has to be trained to recognize the error ranges, to obtain an output pattern and to determine the delta Vc to be added to or subtracted from the reference. This information is converted to a voltage and sent to the driver control card that controls the DC motor. Figure 3 shows a detail of the tracking control system. The communication card is based on an 8031 microcomputer, composed of an Input Analog Module, an Output Analog Module and the Microcomputer Module. The card is connected to a 40 MHz, 80x86 computer, through a serial input. The purpose of this card is to have an intelligent interface between the computer, and the sensor and the actuator. The driver control card, the actuator, consists of an amplifier, a digital-analog converter, an EPROM memory, a counter, a comparator, a zero crossing detector, an isolated circuit, and a thyristor power unit. Both cards

    have been developed at the Catholic University of Valparaiso Labs.


    Fig. 3.- Neural network tracking control system.

    Training involves using the error signals, e, between the plant output signals and the desired signals as inputs to the neural networks. The neural network tracking controller, shown in Fig. 4, contains four units: preprocessing, neural network classifier, look-up table, and servo drive unit. The preprocessing part scales the error signal into the range of -1 to 1 and partitions it into several groups. Each group clusters those error signals for which an appropriate control action would correct. These errors serve as inputs to the neural network classifier. The neural network classifier is a feedforward three-layer backpropagation network, which consists of the input layer with a number of units depending on the

    application and the number of trajectories to follow; one hidden layer with a number of units; and six neurons in the output layer.




    Fig. 4.- Neural network tracking controller.

    After the learning process, it performs the function of classification and/or mapping. The outputs of the neural net classifier are rounded off to 1 or 0, where 1 indicates an abnormal case. The look-up table determines whether to increase or to reduce the control signal, based on the output decision produced by the neural net classifier. Table I relates the possible error ranges and their corresponding actions. This signal is fed through the servo drive unit to generate the adequate signal Va to the motor to correct the deviation. The servo unit contains the D/A converter, amplifier, trigger, and SCR. The error signal is scaled down to values within the range -1 to +1 according to the specified operation range. The coefficients k1, k2, k3 are chosen based on the simulation results.

    Table I

    Error Range      Output Pattern    Control Action
    [-1.0, -0.5]     1 0 0 0 0 0       Va - k3
    [-0.5, -0.2]     0 1 0 0 0 0       Va - k2
    [-0.2, -0.05]    0 0 1 0 0 0       Va - k1
    [-0.05, +0.05]   0 0 0 0 0 0       Va
    [+0.05, +0.2]    0 0 0 1 0 0       Va + k1
    [+0.2, +0.5]     0 0 0 0 1 0       Va + k2
    [+0.5, +1.0]     0 0 0 0 0 1       Va + k3

    5.- TRAINING COMPARISON

    The weight values obtained during the training of the neural network based on the Matyas method have the characteristic that the error associated to the input and output patterns is close to zero percent, as shown in Table 2. The final error, for a backpropagation neural network, is equal to the desired error, below 1%. In the modified random optimization method of Matyas, the final error is less than the desired error, as observed below the 0.5% desired error. The error corresponds to the mean square error. The maximum desired error is given by the user before starting the training. The final error is the error of the neural network after the training process, equal to or less than the desired error. The number of iterations is the number of algorithm steps needed to reach the condition that the final error is equal to or less than the desired error. Table 3 shows the same for the backpropagation neural network. To reach the maximum error of 0.5%, the backpropagation neural network needs 94 iterations,

    and the random optimization method of Matyas needs 95. The advantage of the Matyas method is to obtain a smaller final error than the backpropagation method. The first method goes rapidly to the minimum, but it stays in the limits. The other method ensures the global minimum.

    TABLE 2. Modified method of Matyas

    % Maximum error desired   % Final error obtained   Number of iterations
    5                         1.11                     2
    1                         0.76                     2
    0.9                       0.275                    50
    0.8                       0.48                     29
    0.7                       0.098                    2
    0.6                       0.48                     2
    0.5                       0.30                     95
    0.4                       0                        101
    0.3                       0                        9
    0.2                       0.040                    2
    0.1                       0                        6

    To avoid oscillations in the output, an automatic reset is done when the number of iterations is greater than 200. This problem occurs with a frequency of 15% in the random optimization method of Matyas. In the backpropagation neural network, the CPU time increases in a geometrical way to obtain a logic zero or logic one with errors below 0.2%. In the random optimization method of Matyas, the CPU time is less than in the backpropagation method for errors below 3%. With the random optimization method of Matyas it is possible to search the adequate weights and parameters of the neural network in a faster way than with the backpropagation method.
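The automatic reset described above can be sketched as a restart wrapper around the random search. This is an illustrative sketch: the restart budget, the error goal, and the simplified (unbiased) search inside are assumptions, not details taken from the paper.

```python
import numpy as np

def train_with_reset(f, n, goal, sigma=0.1, max_iters=200, max_resets=50, seed=0):
    """Restart the random search from fresh weights whenever 200 iterations
    pass without reaching the desired error `goal` (the automatic reset)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)              # initial weights in [-1, +1]
    for _ in range(max_resets):
        for _ in range(max_iters):
            xi = sigma * rng.standard_normal(n)
            if f(x + xi) < f(x):
                x = x + xi                     # forward step improves
            elif f(x - xi) < f(x):
                x = x - xi                     # reverse side improves
            if f(x) <= goal:
                return x                       # error goal met, stop early
        x = rng.uniform(-1.0, 1.0, n)          # automatic reset
    return x                                   # best effort after all resets

# Toy use on 72 variables, matching the 72 weights of the paper's network.
w = train_with_reset(lambda v: float(np.mean(v ** 2)), n=72, goal=0.1)
```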

    TABLE 3. Backpropagation method

    % Maximum error desired   % Final error obtained   Number of iterations
    5                         4.87                     23
    1                         1                        52
    0.9                       0.9                      56
    0.8                       0.8                      61
    0.7                       0.7                      68
    0.6                       0.6                      78
    0.5                       0.5                      94
    0.4                       0.4                      121
    0.3                       0.3                      180
    0.2                       0.2                      350
    0.1                       0.1                      1288

    The selection of the patterns for the error ranges is based on the weight and bias for every pair of input-output patterns of the network. The pattern used in the

    training is shown in Table 4.

    Table 4.- Iterations and errors for every pattern.

    Pattern        Iterations (backprop)   Error %   Iterations (Matyas)   Error %
    1 0 0 0 0 0    180                     0.3       20                    0.0
    0 1 0 0 0 0    183                     0.3       25                    0.0
    0 0 1 0 0 0    181                     0.3       51                    0.0
    0 0 0 0 0 0    180                     0.3       18                    0.0
    0 0 0 1 0 0    183                     0.3       147                   0.0
    0 0 0 0 1 0    180                     0.3       83                    0.0
    0 0 0 0 0 1    180                     0.3       102                   0.0

    A data base is created with the information of the weights and biases for every pair of input-output patterns of the network. The output pattern is given by the output layer of the network, depending on the error value. The logic zero and the logic one are considered for the output word at the output of the neural network, related to the armature voltage. The output word of the net has 6 bits, with a sequence of 0s and only one 1.



    Fig. 5.- Tracking control system with the backpropagation neural network controller, at a 2700 rpm set point (speed in rpm vs. time in units of 10 ms).

    Fig. 6.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas, at a 2700 rpm set point.

    Fig. 7.- Tracking control system with the backpropagation neural network controller, set point changed from 2700 to 2300 rpm.

    Fig. 8.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas, set point changed from 2700 to 2300 rpm.

    6.- PERFORMANCE OF THE CONTROL SYSTEM

    To show the performance of the neural network tracking control system, a speed set point, a change of set point, and a disturbance in the shaft of the motor are applied. To compare the neural network controller performance, the trajectory tracking responses of the motor were obtained.

    Fig. 5 and Fig. 6 show the output of the system with each neural network, the backpropagation method and the modified random optimization method of Matyas, respectively. It is observed that the second one is smoother than the first one. The peaks due to the backpropagation neural network appear when the algorithm exceeds the time to arrive at the global minimum.

    Fig. 7 and Fig. 8 show the performance of the system with the different neural networks when the set point is changed from 2700 rpm to 2300 rpm. Both arrive at the desired value, in a smoother way with the neural network based on the modified random optimization method of Matyas.

    Fig. 9 and Fig. 10 show the reaction of the control system to a disturbance applied to the shaft of the motor, at the 2700 rpm set point. It is observed that the control system based on the backpropagation neural network acts in a slower way in comparison with the control system based on the neural network with the modified random optimization method of Matyas.

    Fig. 9.- Tracking control system with the backpropagation neural network controller, disturbance at 2700 rpm.

    Fig. 10.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas, disturbance at 2700 rpm.



    7.- CONCLUSIONS

    The comparison of two methods based on neural networks applied to the speed tracking control of a direct current motor has been presented. The system uses a simple real-time control, using feedback error trajectories as inputs to the neural network tracking controller, to determine the controller that generates the proper control signal to achieve the desired performance of the speed of the motor. It is not necessary to identify or to learn the motor dynamics. The neural network is programmed in a personal computer, the motor speed is sensed with a tachometer, and the output of the neural network actuates on the field voltage of the motor. The methods compared are the backpropagation neural networks and the method based on the modified random optimization method of Matyas.

    In the learning process, the backpropagation method exhibits the advantage of converging to the minimum, with errors greater than 1%, but it requires an excessive amount of time to find the global minimum of the total error function and sometimes falls into a local minimum. The choice of η affects, in an important way, the learning speed. The modified random optimization method of Matyas is able to find the weights and parameters of a neural network faster than the backpropagation method. This method permits finding the global minimum of the total error function of the neural network. The value of the variance σ has a crucial effect on the learning speed.

    The outputs of the system with the neural network trained by the backpropagation method and by the modified random optimization method of Matyas are similar: both reach the set point, but the second one is smoother than the first one; the peaks due to the backpropagation neural network appear when the algorithm exceeds the time to arrive at the global minimum. In the reaction of the control system to a disturbance applied to the shaft of the motor, the control system based on the backpropagation neural network acts in a slower way in comparison with the control system based on the neural network with the modified random optimization method of Matyas. Both systems can follow any arbitrary trajectory, even when the desired trajectory is changed to one not used in the training.

    REFERENCES

    [1] Baba N., "A new approach for finding the global minimum of error function of neural networks," Neural Networks, vol. 2, no. 5, pp. 367-373, 1989.
    [2] Solis F. and Wets R., "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, Feb. 1981.
    [3] Matyas J., "Random optimization," Automation and Remote Control, vol. 26, pp. 246-253, 1965.
    [4] Hashimoto H., Maruyama K., Harashima F., "A microprocessor-based robot manipulator control with sliding mode," IEEE Trans. Ind. Electronics, vol. IE-34, pp. 11-18, 1987.
    [5] El-Sharkawi M., Huang C., "A variable structure tracking of the motor for high performance applications," IEEE Trans. Energy Conversion, vol. 4, pp. 643-650, 1989.
    [6] Hsia T.C., Lasky T., and Guo Z., "Robust independent joint controller design for industrial robot manipulators," IEEE Trans. Ind. Electronics, vol. IE-38, pp. 21-25, 1991.
    [7] El-Sharkawi M., Akherraz M., "Tracking control technique for induction motors," IEEE Trans. Energy Conversion, vol. 4, pp. 81-87, 1989.
    [8] Naitoh H., Tadakuma S., "Microprocessor-based adjustable speed DC motor drives using model reference adaptive control," IEEE Trans. Industry Applications, vol. IA-43, pp. 313-318, 1991.
    [9] Kraft L.G., Campagna D.P., "A comparison between CMAC neural network control and two traditional adaptive control systems," IEEE Control Systems Magazine, vol. 10, no. 3, pp. 36-43, 1990.
    [10] Ozaki T. et al., "Trajectory control of robotic manipulators using neural networks," IEEE Trans. Ind. Electronics, vol. IE-38, pp. 195-202, 1991.
    [11] Yamada T., Yabuta T., "An extension of neural network direct controller," IEEE Control Systems Magazine, vol. 8, no. 2, pp. 17-21, 1988.
    [12] Pao Y., "Adaptive Pattern Recognition and Neural Networks," Addison-Wesley Pub. Co., USA, 1989.
    [13] Tai H., Wang J., Ashenayi K., "A neural network-based tracking control system," IEEE Trans. Ind. Electronics, vol. IE-38, pp. 195-202, 1991.
    [14] Gehlot N.S., Alsina P.J., "A comparison of control strategies of robotic manipulators using neural networks," Proc. IEEE Ind. Electron. Conference IECON, pp. 688-693, 1992.
    [15] Lefranc G., Zazopulos J., Ruz R., "Reconocimiento de imagenes geometricas simples mediante redes neurales Asociativa Hopfield," Proceedings X Congreso Chileno de Ingenieria Electrica, Valdivia, 1993.
