466 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 18, NO. 2, MARCH 2007

A Recurrent Neural Network for Hierarchical Control of Interconnected Dynamic Systems

Zeng-Guang Hou, Member, IEEE, Madan M. Gupta, Life Fellow, IEEE, Peter N. Nikiforuk, Min Tan, Member, IEEE, and Long Cheng

Abstract—A recurrent neural network for the optimal control of a group of interconnected dynamic systems is presented in this paper. On the basis of the decomposition and coordination strategy for interconnected dynamic systems, the proposed neural network has a two-level hierarchical structure: several local optimization subnetworks at the lower level and one coordination subnetwork at the upper level. A goal-coordination method is used to coordinate the interactions between the subsystems. By nesting the dynamic equations of the subsystems into their corresponding local optimization subnetworks, the number of dimensions of the neural network can be reduced significantly. Furthermore, the subnetworks at both the lower and upper levels can work concurrently. Therefore, the computation efficiency, in comparison with the consecutive executions of numerical algorithms on digital computers, is increased dramatically. The proposed method is extended to the case where the control inputs of the subsystems are bounded. The stability analysis shows that the proposed neural network is asymptotically stable. Finally, an example is presented which demonstrates the satisfactory performance of the neural network.

Index Terms—Dynamic neural networks, goal coordination, hierarchical control, interconnected systems, large-scale systems, neural networks, optimal control, optimization, recurrent neural networks.

I. INTRODUCTION

A LARGE-SCALE system can be described as a complex system which is composed of a number of subsystems, each serving some particular functions, sharing some common resources, and governed by some interrelated goals and constraints [1], [2]. In many systems, such as data networks, electric power, and transportation, the interactions between the subsystems are frequently different and, thus, different methods are used for dealing with these interconnected systems. One of the most commonly used methods is the hierarchical control scheme, which is based on a decomposition and coordination strategy [3], where an interconnected large-scale system is decomposed into a set of subsystems. These subsystems are then optimized and controlled, and the respective interactions are coordinated at the different levels.

Manuscript received June 12, 2005; revised July 4, 2006; accepted July 21, 2006. This work was supported in part by the National Natural Science Foundation of China under Grants 60205004, 50475179, and 60334020, the National Basic Research Program (973) of China under Grant 2002CB312200, the Hi-Tech R&D Program (863) of China under Grants 2002AA423160 and 2005AA420040, the Science and Technology New Star Program of Beijing under Grant H020820780130, and the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Z.-G. Hou, M. Tan, and L. Cheng are with the Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China (e-mail: [email protected]; [email protected]; [email protected]).

M. M. Gupta and P. N. Nikiforuk are with the Intelligent Systems Research Laboratory (ISRL), College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNN.2006.885040


In theory, any number of hierarchical control levels may be used. In practice, however, a two-level hierarchical control architecture is usually adopted for reasons of simplicity and reliability. During the past few decades, much work has been done on multilevel hierarchical architectures, dealing with both steady-state and dynamic systems [2], [4], [5]. However, most of these methods employ numerical iterative algorithms and may encounter serious speed bottlenecks due to the serial nature of the digital computers employed.

More recently, neural networks have been used as a new computation tool in areas such as systems modeling and control, signal and image processing, and optimization. Various types of neural network models with different properties have been proposed. Among them, the Hopfield-type neural network [6], [7] is one of the most widely used. These works and those of others have led to a new avenue of computation—computation with dynamic neural networks [8], [9]. A major advantage of computation with dynamic neural networks is that it is executed continuously rather than iteratively. Thus, due to their parallel nature, neural networks can solve difficult optimization problems more efficiently. In addition, such networks can be implemented using very large-scale integration (VLSI) technology at a more reasonable cost [9]–[12]. The neural network approach appears, therefore, to be much more attractive, and it is for this reason that it has led to much research on the use of Hopfield-type neural networks for solving various types of optimization problems in fields such as constrained and unconstrained optimization [13], [14], the shortest path for internetwork routing [15], support vector machine training [16], graph coloring [17], image processing [18], control systems design [19], [20], and torque minimization of redundant manipulators [21]. However, it is to be noted that most of this work considered static optimization problems, where time is not a parameter.

In [22], a neural network scheme was proposed for two-level system optimization problems, where a multilayer perceptron was used at the upper level, while Hopfield neural networks were used for the optimization of local subsystems at the lower level. However, this scheme has the disadvantage of slow computation speed, and its convergence is not guaranteed. In [23], a dynamic neural network model was designed for discrete-time large-scale systems. This dynamic neural network model can solve a large-scale system with high speed, but it has a higher dimension, which is usually undesirable in practice. To control interconnected subsystems, Nardi et al. presented a feedback linearization method using single hidden-layer neural networks [25].



However, this method guarantees only semiglobal stability. In [24], a decentralized neural network controller was proposed for a class of large-scale systems with interconnections, where neural networks were used to approximate the unknown subsystems and interactions. Shortly afterwards, improvements of the results given in [24] were made by Zhang et al. [26] and Liu et al. [27]. In [26], sliding modes were incorporated, while in [27], the backstepping method was combined with neural networks. However, it is noted that no optimal objective is considered and only semiglobal stability can be obtained in [24]–[27].

A number of papers on the optimization of dynamic systems using neural networks have been published. Di Febbraro et al., for example, proposed a feedforward neural network for the optimal control of freeway systems [28]. It is noted that feedforward neural networks usually need offline learning, the learning speed is usually slow, and this problem is still being investigated by many researchers [29]. Liao presented a recurrent neural network for N-stage optimal control problems [30]. However, the rigorous mathematical analysis of the stability of the neural network proposed in [30] was still an unsolved issue. These and other studies were based, however, on centralized control schemes, which are not always capable of controlling large-scale systems. As an alternative, Hou et al. proposed a neural network method for the hierarchical optimization of large-scale systems where the systems were considered in the steady state [31]. Hou also presented a neural network model for the hierarchical optimization of large-scale dynamic systems in which the coordination method that was used was of an interaction prediction type [32].

In this paper, a recurrent neural network scheme is proposed for the hierarchical control of interconnected large-scale dynamic systems, which overcomes some of the aforementioned problems. The proposed recurrent neural network is composed of the local optimization subnetworks and the coordination subnetwork. The proposed hierarchical neural network uses a different coordination approach, namely a goal-coordination approach, which differs fundamentally from the neural network scheme proposed in [32], where interaction prediction was used as the coordination approach. With conventional numerical computing methods, the goal-coordination approach is usually inferior to the interaction prediction approach and, thus, less popular, due to poor convergence and singularity problems [2], [3]. However, with the recurrent neural network approach, these problems no longer exist. In this paper, the designed hierarchical neural network based on the goal-coordination approach is verified to be globally stable, and the computation speed can be changed by setting the values of the time constant matrices of the neural network equations.

Moreover, in this proposed neural network method, the constraints imposed by the dynamic equations are treated in a nesting manner [20], [32]. This nesting approach not only overcomes the difficulty of solving the constraints of the dynamic equations, but also decreases the dimension of the dynamic neural network. Hence, since the local optimization subnetworks and the coordination subnetwork work in parallel to achieve an optimal solution for the original problem, this proposed method is much more efficient than those based on numerical algorithms. The proposed method is also extended to the case where the interconnected subsystems have bounded control inputs.

In Section II, a description of the problem is given. In Section III, a detailed design procedure for the neural network is given, and a stability analysis of the dynamic neural network is presented in the Appendix. Functional block diagrams of the proposed neural network are provided in Section IV. Boundary constraints on the control inputs are considered in Section V. In Section VI, the results of some simulation studies are given which confirm the potential of the proposed recurrent neural network design approach. Finally, concluding remarks are presented in Section VII.

II. PROBLEM FORMULATION

In this paper, a recurrent neural network approach is proposed for studying the problem of synthesizing hierarchical control structures for large-scale interconnected dynamic systems. Due to the fact that many engineering problems can be described by a linear system in the neighborhood of their operating points, the large-scale system control problem considered in this paper is represented by a linear dynamic model with a quadratic performance index defining the overall goal of hierarchical control.

The large-scale system considered in this paper is composed of N interconnected dynamic subsystems with state equations expressed as

x_i(k+1) = A_i x_i(k) + B_i u_i(k) + C_i z_i(k),   k = 0, 1, ..., K − 1    (1)

and with the interactions between the various subsystems expressed as

z_i(k) = Σ_{j=1}^{N} L_{ij} x_j(k)    (2)

where x_i(k) ∈ R^{n_i} is the state vector of the ith subsystem, u_i(k) ∈ R^{m_i} is the control vector of the ith subsystem, z_i(k) ∈ R^{r_i} is the interaction input vector of the ith subsystem (i = 1, 2, ..., N), A_i is the system matrix of the ith subsystem, B_i is the control matrix of the ith subsystem, C_i is the interaction input gain matrix of the ith subsystem, and L_{ij} is the interaction matrix between the ith subsystem and the jth subsystem.

Without loss of generality, it is assumed that L_{ii} = 0 (self-interactions can be absorbed into A_i) and that the initial states x_i(0) are given. Therefore, the large-scale system optimal control problem to be considered can now be stated as follows: Find discrete control signals u_i(k), for i = 1, 2, ..., N and k = 0, 1, ..., K − 1, such that the following objective function is minimized while the dynamic equation defined by (1) and the interaction equation defined by (2) are satisfied

J = Σ_{i=1}^{N} J_i    (3)

J_i = (1/2) x_i^T(K) Q_{iK} x_i(K) + (1/2) Σ_{k=0}^{K−1} [ x_i^T(k) Q_i x_i(k) + u_i^T(k) R_i u_i(k) + z_i^T(k) S_i z_i(k) ]    (4)
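To make the reconstructed problem concrete, the following minimal sketch simulates the coupled dynamics (1) and (2) for given control sequences. It is illustrative only: the function name simulate and the data layout are assumptions, not taken from the paper.

```python
import numpy as np

def simulate(A, B, C, L, x0, controls, K):
    """Simulate N interconnected subsystems of the reconstructed form
    x_i(k+1) = A_i x_i(k) + B_i u_i(k) + C_i z_i(k),
    z_i(k)   = sum_j L_ij x_j(k).
    A, B, C: lists of per-subsystem matrices; L: nested list of
    interaction matrices; controls[i][k]: control of subsystem i at step k."""
    N = len(A)
    x = [[np.asarray(x0[i], dtype=float)] for i in range(N)]
    for k in range(K):
        # interaction inputs computed from the current states of all subsystems
        z = [sum(L[i][j] @ x[j][k] for j in range(N)) for i in range(N)]
        for i in range(N):
            x[i].append(A[i] @ x[i][k] + B[i] @ controls[i][k] + C[i] @ z[i])
    return x  # x[i][k] is the state of subsystem i at step k
```

Any controller for this problem, centralized or hierarchical, must produce the control sequences controls[i][k] that minimize (3) and (4) under these dynamics.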


Fig. 1. Two-level hierarchical control structure.

where Q_{iK} and Q_i (i = 1, 2, ..., N) are positive–semidefinite matrices, and R_i and S_i (i = 1, 2, ..., N) are positive–definite matrices.

It is to be noted that, without any interactions between the subsystems, the large-scale system control problem would degenerate into several independent linear quadratic regulator (LQR) problems. Thus, the previous problem may be reexpressed in a composite LQR form and, thus, transformed into a conventional optimal control problem. This control problem may further be solved using the conventional linear quadratic (LQ) optimal control algorithms described in [33] and [34]. However, such a solution necessitates the use of a single central controller, which may not be feasible for large-scale systems because it requires the solution of a large-order Riccati equation. In addition, a single central controller may require information about all the states of the system, which is usually not available. These difficulties can be partially alleviated by a hierarchical control scheme in which decentralized controllers optimize the decoupled subsystems while ignoring the interactions, and a higher level coordinator compensates for the performance degradation caused by the presence of these interactions.
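For reference, the conventional centralized LQ solution mentioned above (and used later as the comparison baseline in Fig. 8) reduces to a backward Riccati recursion on the composite system. A minimal sketch follows, assuming the standard discrete-time LQR formulation; all names are illustrative.

```python
import numpy as np

def lqr_backward(A, B, Q, R, QK, K):
    """Backward Riccati recursion for minimizing
    0.5 * x_K' QK x_K + 0.5 * sum_k (x_k' Q x_k + u_k' R u_k)
    subject to x(k+1) = A x(k) + B u(k).
    Returns feedback gains G_k such that u(k) = -G_k x(k)."""
    P = QK
    gains = []
    for _ in range(K):
        # G = (R + B' P B)^{-1} B' P A
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A' P (A - B G)
        P = Q + A.T @ P @ (A - B @ G)
        gains.append(G)
    gains.reverse()  # gains[k] corresponds to time step k
    return gains
```

For N interconnected subsystems, the composite matrices A, B, Q, and R grow with the total state dimension, which is exactly the scalability problem the hierarchical scheme is designed to avoid.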

The proposed hierarchical optimization structure is shown in Fig. 1. This structure is composed of N lower level local optimization units and an upper level coordination unit, which are, respectively, responsible for the local optimization of the local subsystems and the coordination of the overall system.

Using conventional numerical methods, a hierarchical scheme involves the following two repetitive steps:

• lower level local optimization: each local optimization unit performs an optimization of its corresponding subsystem according to the coordinating command;

• upper level coordination: the coordination unit modifies the coordinating command according to the feedback information received from the subsystems at the lower level.

These two steps are performed iteratively, exchanging the coordination information, resolving the subproblems, and updating the coordination commands until the overall optimal solution is achieved. It is to be noted that, in a conventional numerical algorithm on a digital computer, there is always an alternating waiting time during the coordination and local optimization processes; this is due to the serial nature of the computation, which results in lower efficiency. A numerical sketch of this alternation is given below.
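The serial alternation can be illustrated on a toy problem: two scalar subproblems coupled by a single equality constraint, coordinated by gradient ascent on one Lagrange multiplier. The problem data and step size are assumptions chosen only so that each local step has a closed form.

```python
# Toy goal-coordination loop: two scalar "subsystems" minimize
# 0.5*u1^2 and 0.5*u2^2 subject to the coupling u1 + u2 = 1.
# The coordinator performs gradient ascent on the multiplier lam;
# each local solver minimizes its sub-Lagrangian in closed form.
lam, step = 0.0, 0.5
for it in range(100):
    # lower level: argmin_u 0.5*u^2 + lam*u       -> u1 = -lam
    #              argmin_u 0.5*u^2 + lam*(u - 1) -> u2 = -lam
    u1, u2 = -lam, -lam
    # upper level: move lam along the constraint residual (dual gradient)
    lam += step * (u1 + u2 - 1.0)
print(u1, u2, lam)  # converges to u1 = u2 = 0.5, lam = -0.5
```

Each pass through the loop forces the upper level to wait for the lower level and vice versa; the proposed recurrent network removes exactly this waiting by letting both levels evolve concurrently.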

Partial differentials are used throughout the paper. For the sake of discussion, the gradient vector and Hessian matrix of a scalar-valued function f(y), where y = [y^1, y^2, ..., y^n]^T and f: R^n → R, are, respectively, defined as

∇f(y) = [∂f/∂y^1, ∂f/∂y^2, ..., ∂f/∂y^n]^T,   ∇²f(y) = [∂²f/(∂y^i ∂y^j)]_{n×n}

where the different components of a variable or a function are represented by the appropriate superscripted indices. For a vector-valued function g(y) = [g^1(y), g^2(y), ..., g^m(y)]^T, where y ∈ R^n and g: R^n → R^m, its Jacobian matrix is defined as

∂g(y)/∂y = [∂g^i(y)/∂y^j]_{m×n}.

III. DESIGN OF THE NEURAL-NETWORK-BASED HIERARCHICAL CONTROL FOR INTERCONNECTED DYNAMIC SYSTEMS

A. Goal-Coordination-Based Hierarchical Control Scheme

The design of the neural-network-based hierarchical control for interconnected dynamic systems proposed in this paper is based upon the goal-coordination method [2]. The fundamental concept behind the goal-coordination method is to transform the original minimization problem into an easier maximization of its dual problem by defining a Lagrange function.


For this, first define a dual function

φ(λ) = min_{u, z} L(x, u, z, λ),   subject to (1)    (5)

where the Lagrange function is defined by

L = Σ_{i=1}^{N} { J_i + Σ_{k=0}^{K−1} λ_i^T(k) [ z_i(k) − Σ_{j=1}^{N} L_{ij} x_j(k) ] }    (6)

and λ_i(k) (i = 1, 2, ..., N; k = 0, 1, ..., K − 1) are the Lagrange multipliers.

The Lagrange function is formed to transform the original constrained optimization problem into an unconstrained optimization problem. This is done by the introduction of the Lagrange multipliers λ_i(k) (i = 1, 2, ..., N), which are chosen to force the satisfaction of the equality constraint defined by (2).

By the Lagrange duality theorem [35], the following holds:

min J = max_λ φ(λ)    (7)

This means that the minimization of the objective function defined in (3) with respect to x, u, and z, subject to constraints (1) and (2), is equivalent to the maximization of the dual function (5) with respect to λ. It can be observed that, for a given set of Lagrange multipliers λ_i(k) (i = 1, 2, ..., N), the Lagrange function (6) is additively separable and can, thus, be decomposed into N independent sub-Lagrange functions, one for each subsystem

L = Σ_{i=1}^{N} L_i    (8)

where the sub-Lagrange function L_i is given by

L_i = J_i + Σ_{k=0}^{K−1} [ λ_i^T(k) z_i(k) − Σ_{j=1}^{N} λ_j^T(k) L_{ji} x_i(k) ]    (9)

The neural network that is to be designed for the two-level hierarchical optimization control must have the following functions.

• Local optimization for each subsystem (performed at the lower level local optimization units): For the multipliers λ_i(k) (i = 1, 2, ..., N and k = 0, 1, ..., K − 1) given by the coordination unit, the local optimization units maintain their corresponding subsystems in optimal states and provide the optimal controls u_i(k) and interaction inputs z_i(k), together with the corresponding optimal states x_i(k), at each moment.

• Coordination for the entire system (performed at the upper level coordination unit): From the feedback information supplied by the local optimization units, z_i(k) and x_i(k) (i = 1, 2, ..., N), the coordination unit generates new coordinating commands λ_i(k) (i = 1, 2, ..., N and k = 0, 1, ..., K − 1) at each moment.

A functional block diagram illustrating this two-level hierarchical structure and the information flows is shown in Fig. 2.
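Under the reconstructed form of (6), the separability claimed in (8) and (9) follows from a single interchange of the summation indices; in the notation assumed above,

```latex
\sum_{i=1}^{N}\sum_{k=0}^{K-1}\lambda_i^{T}(k)\Bigl[z_i(k)-\sum_{j=1}^{N}L_{ij}x_j(k)\Bigr]
=\sum_{i=1}^{N}\sum_{k=0}^{K-1}\Bigl[\lambda_i^{T}(k)\,z_i(k)-\sum_{j=1}^{N}\lambda_j^{T}(k)\,L_{ji}\,x_i(k)\Bigr]
```

so that, once the multipliers are fixed by the coordination unit, each bracketed term depends only on the variables of the ith subsystem.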

B. Design of the Hierarchical Neural Network

Optimization using a Hopfield-type recurrent neural network amounts to designing a gradient system whose equilibrium state coincides with the optimal solution of the original problem. Therefore, the stability of the recurrent neural network may also be referred to as the convergence to its equilibrium point during the computation process [9], [36].
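The gradient-system idea can be made concrete with a small example: integrating ε dy/dt = −∇E(y) by the Euler method drives y toward the minimizer of E. The quadratic energy and all parameter values below are assumptions for illustration only.

```python
import numpy as np

# Minimal gradient-flow sketch: a Hopfield-type network for minimizing
# E(y) integrates eps * dy/dt = -grad E(y), so E decreases along
# trajectories and equilibria are stationary points of E.
# Example energy (assumed for illustration): E(y) = 0.5*y'Hy - b'y.
H = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
b = np.array([1.0, -1.0])
eps, dt = 0.1, 0.01                       # time constant and Euler step
y = np.zeros(2)
for _ in range(5000):
    grad = H @ y - b
    y += dt * (-grad) / eps               # Euler integration of the flow
print(y, np.linalg.solve(H, b))           # y converges to H^{-1} b
```

Smaller time constants ε make the flow faster, which is the mechanism behind the speed tuning mentioned in the Introduction.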

In what follows, the local optimization subnetworks are first constructed, and then the coordination subnetwork. It is to be noted that the local optimization subnetworks and the coordination subnetwork work in parallel with the objective of obtaining an optimal solution for the original problem.

In (5), if the multipliers λ_i(k) (i = 1, 2, ..., N and k = 0, 1, ..., K − 1) are taken as constants, then the task to be accomplished by the local optimization neural network is

min_{u, z} L,   subject to (1)    (10)

Note that the function of the coordination is to treat the interactions between the subsystems; that is, to maintain the large-scale system at the interaction balance satisfying (2). Therefore, the interaction input terms z_i(k) (i = 1, 2, ..., N and k = 0, 1, ..., K − 1) entering the constraints (1) can be taken as input variables of the subsystems, and the local optimization is to find u_i(k) and z_i(k) while minimizing the subobjective function (4); that is

min_{u_i, z_i} L_i,   subject to (1)    (11)

Consequently, the local optimization subnetworks for the hierarchical optimization of large-scale systems are given by

ε_{u_i} du_i(k)/dt = −∂L_i/∂u_i(k)    (12)

ε_{z_i} dz_i(k)/dt = −∂L_i/∂z_i(k)    (13)




where ε_{u_i} and ε_{z_i} are positive diagonal matrices of the time constants that control the speed of convergence, i = 1, 2, ..., N, and k = 0, 1, ..., K − 1. The smaller these time constants are, the higher the computation speed is.

Fig. 2. Hierarchical architecture of the proposed neural network.

For simplification, let

(14)

and

(15)

Then

(16)

Similarly

(17)

The summations in the last terms of (16) and (17) can be simplified by defining a new variable for each subsystem and each step, computed in reverse time steps; that is

(18)

(19)

Thus, by nesting the new variable recursively, (16) and (17) can, respectively, be reexpressed as

(20)

(21)

Substituting (1), (14), and (15) in (18) and (19) yields

(22)

(23)

Subsequently

(24)

From (1), (15) and (14), one obtains

(25)


Fig. 3. Functional block diagram of the local optimization subnetworks defined by (25) and (27).

In a similar way, the following equations can be derived:

(26)

(27)

Equations (25) and (27) thus define the local optimization subnetworks.

Turning now to the coordination subnetwork, its task is to coordinate the interactions between the subsystems so as to maintain the entire system at an optimal level. By the duality theorem and (7), the coordination subnetwork is given by

ε_{λ_i} dλ_i(k)/dt = ∂L/∂λ_i(k) = z_i(k) − Σ_{j=1}^{N} L_{ij} x_j(k)    (28)

where ε_{λ_i} (i = 1, 2, ..., N) are positive diagonal matrices of the time constants that control the speed of convergence.

Hence, the local optimization subnetworks defined by (25) and (27) and the coordination subnetwork defined by (28) make up the complete neural network for the dynamic hierarchical optimization and control of the interconnected dynamic systems.
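Because all three groups of neurons evolve simultaneously, a software emulation integrates every state in one loop rather than alternating between levels. The following sketch reuses the toy coupling u1 + u2 = 1 from the earlier dual-ascent example (again an assumption, not the paper's example) and integrates saddle-point dynamics of the same structural form as (12), (13), and (28).

```python
# All neurons evolve concurrently: the local decision neurons follow the
# negative gradient of the sub-Lagrangian, while the coordination neurons
# follow its positive gradient (dual ascent). Toy Lagrangian (assumed):
# L = 0.5*u1^2 + 0.5*u2^2 + lam*(u1 + u2 - 1).
eps_u, eps_lam, dt = 0.05, 0.05, 0.001
u1 = u2 = lam = 0.0
for _ in range(20000):
    du1 = -(u1 + lam) / eps_u            # -dL/du1, scaled by the time constant
    du2 = -(u2 + lam) / eps_u            # -dL/du2
    dlam = (u1 + u2 - 1.0) / eps_lam     # +dL/dlam: the constraint residual
    u1, u2, lam = u1 + dt * du1, u2 + dt * du2, lam + dt * dlam
print(u1, u2, lam)  # approaches u1 = u2 = 0.5, lam = -0.5
```

In contrast to the serial loop of Section II, no level ever waits for the other; in an analog realization, the integration is performed by the circuit itself in continuous time.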


The neurons governed by (25), (27), and (28) are called the decision variable neurons, interaction variable neurons, and Lagrange multiplier neurons, respectively.

IV. SCHEMATIC DIAGRAM OF THE NEURAL NETWORK

The hierarchical architecture of the proposed neural network is shown in Fig. 2. Figs. 3 and 4 are, respectively, the functional block diagrams of the local optimization subnetworks and the coordination subnetwork.

It is noted that, for ease of understanding and notational convenience, the neural network is divided into blocks or subnetworks; actually, they are one and indivisible in performing the hierarchical optimization process. Moreover, the time constant matrices ε_{u_i}, ε_{z_i}, and ε_{λ_i} in (25), (27), and (28) can be set either to the same value or to different values, which affects only the computation speed. Thus, in Figs. 3 and 4, the constant matrices are illustrated simply by ε.

The advance of VLSI technology makes the implementation and applications of neural network methods much more feasible and affordable. The solution to an optimal control problem will be provided with extraordinarily high speed when the neural network is implemented by an analog circuit [9]–[12].

V. EXTENSION TO THE CASE WITH BOUNDED CONTROL INPUTS

In practical applications, the control inputs of the subsystems may have physical limits, such as mechanical limits, actuator saturations, and voltage and current limits. In this section, control signals with boundary limits of the following form are considered:

u_i^{min} ≤ u_i(k) ≤ u_i^{max}    (29)

where u_i^{min} and u_i^{max} are, respectively, the lower and upper bounds of the control input u_i(k).

In this case, the large-scale system optimal control problem with bounded control inputs can be restated as finding discrete control signals u_i(k) such that the objective function defined by (3) and (4) is minimized while satisfying (1), (2), and (29). Accordingly, the augmented Lagrange function is defined as (30), where c > 0 is the penalty parameter and g^+ and g^−: R → R are defined by

g^+(v) = v − u^{max},  if v > u^{max};   g^+(v) = 0,  if v ≤ u^{max}    (31)

g^−(v) = u^{min} − v,  if v < u^{min};   g^−(v) = 0,  if v ≥ u^{min}    (32)
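With (31) and (32) reconstructed as one-sided violation functions, a quadratic penalty is one natural choice for the augmented terms; the squared form below and all names are assumptions for illustration. Its gradient vanishes inside the bounds, so the unconstrained dynamics are recovered whenever the control stays feasible.

```python
import numpy as np

def g_plus(u, u_max):
    """One-sided violation above the upper bound: max(u - u_max, 0)."""
    return np.maximum(u - u_max, 0.0)

def g_minus(u, u_min):
    """One-sided violation below the lower bound: max(u_min - u, 0)."""
    return np.maximum(u_min - u, 0.0)

def penalty(u, u_min, u_max, c):
    """Assumed quadratic penalty 0.5*c*(g+^2 + g-^2) added to the
    sub-Lagrangian for bounded controls."""
    return 0.5 * c * (g_plus(u, u_max)**2 + g_minus(u, u_min)**2)

def penalty_grad(u, u_min, u_max, c):
    """Gradient of the penalty wrt u: c*(g+ - g-); piecewise linear and
    zero inside the bounds, so it only acts when a bound is violated."""
    return c * (g_plus(u, u_max) - g_minus(u, u_min))
```

A term of the penalty_grad form is the kind of extra contribution that modifies the decision variable dynamics, turning (12) into (36) below.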

Thus, the augmented sub-Lagrange function is formulated as (33). Then

(34)

Therefore, the decision variable neurons defined by (12) in the local optimization subnetworks will be replaced by the following dynamic equation:

(35)

(36)

The equations defined by (36), (27), and (28) make up the neural network for the dynamic hierarchical optimization and control of the interconnected dynamic systems with bounded control inputs.

(30)

(33)


Fig. 4. Functional block diagram of the coordination subnetwork defined by (28).

VI. EXAMPLE

The example to be presented in this section is an interconnected large-scale system having three dynamic subsystems. The case without control input bounds will be given first, and then the subsystems with control limits will be addressed.

The dynamic equations and the interaction equations of the three interconnected subsystems are, respectively, as follows:

(37)

and the interaction inputs are given by

(38)

where A_i, B_i, C_i, and L_{ij} (i, j = 1, 2, 3) are given constant matrices, and the initial states x_1(0), x_2(0), and x_3(0) are given. The objective of hierarchical control is to minimize the following performance function of the overall system:

(39)

(40)

where the weighting matrices Q_{iK}, Q_i, R_i, and S_i (i = 1, 2, 3) are given constant matrices satisfying the definiteness conditions of Section II.

It can be seen that L_{12} and L_{13} represent the interactions between subsystems 1 and 2, and between subsystems 1 and 3, respectively. The interactions from subsystems 2 and 3 are imposed on subsystem 1 by way of the gain matrix C_1. The larger the elements of C_1, L_{12}, and L_{13} are, the larger the interactions imposed on subsystem 1 by subsystems 2 and 3 are. These remarks apply to all the other C_i's and L_{ij}'s.


Fig. 5. Optimal states of the three subsystems. (a) Subsystem 1. (b) Subsystem 2. (c) Subsystem 3.

Fig. 6. Optimal controls of the three subsystems.

By using the proposed neural-network-based hierarchical control approach, simulation results were obtained, some of which are given in Figs. 5–7. Figs. 5 and 6 show, respectively, the optimal states and the optimal controls of the three subsystems. The evolution processes of the control variables are shown in Fig. 7. The time constants of all subnetworks are set to the same value. Computational comparisons are made between the proposed neural network and the traditional Riccati equation approach, and the results are shown in Fig. 8.

Next, the case with control input limits will be considered

(41)

where the lower bounds u_i^{min} and upper bounds u_i^{max} (i = 1, 2, 3) are given constants. Simulation results for the interconnected subsystems with bounded control inputs were obtained using the equations defined by (36), (27), and (28).


Fig. 7. Evolution processes of optimal controls. (a) Subsystem 1. (b) Subsystem 2. (c) Subsystem 3.

Fig. 8. Computational difference between the proposed method and the Riccati equation.

Figs. 9 and 10 show, respectively, the optimal states and the optimal controls of the three subsystems. The penalty parameter is set as c = 1000.

These results show that the proposed neural-network-based hierarchical controller performed satisfactorily in large-scale system optimizations.

VII. CONCLUSION

In this proposed approach for controlling large-scale interconnected systems, by correspondingly nesting the dynamic equations and the initial conditions of the subsystems into the local optimization neural networks, the difficulty of treating the constraints of the subsystem dynamic equations is not only overcome, but the dimension of the neural network is also decreased. Furthermore, the local optimization subnetworks and the coordination subnetwork of which the hierarchical neural network is composed can work simultaneously to give an optimal solution to the original problem. This makes the proposed neural network method superior to numerical algorithms in time-critical applications.

Although only a few algorithms have been proposed for the optimal control of interconnected dynamic systems [1], [2], the main difficulty with almost all of those hierarchical control schemes is that the verification of their convergence is not yet a settled issue [2], [3]. The present approach may be particularly useful for large-scale interconnected dynamic systems where a centralized approach is not able to find a solution in an acceptable time.

It has been proved in Theorem 1 that the control horizon K can be very large or small without damaging the stability of the neural network. Also, due to the parallel computing nature of neural networks, an increase of the control horizon has little effect on the computational burden. However, if the control horizon becomes infinite, the problem will be intractable by the proposed method.

Finally, it is worth mentioning that although the derivation and proof of the proposed neural network appear complicated, the application of the model is neat and straightforward due to its nesting structure.

APPENDIX

STABILITY ANALYSIS

In order to study the stability of the proposed neural network control of interconnected dynamic systems, Lemma 1 is first given.

Lemma 1: If M is a positive–semidefinite matrix and W is a matrix of compatible dimensions, then W^T M W is a positive–semidefinite matrix.

Proof: For any vector v of appropriate dimension, v^T (W^T M W) v = (W v)^T M (W v) holds. Since M is positive–semidefinite, (W v)^T M (W v) ≥ 0, or v^T (W^T M W) v ≥ 0. Thus, it can be obtained that W^T M W is positive–semidefinite.

The stability of the proposed neural network is then stated as follows.


Fig. 9. Optimal states of the three subsystems in the presence of control limits. (a) Subsystem 1. (b) Subsystem 2. (c) Subsystem 3.

Fig. 10. Optimal controls of the three subsystems in the presence of control limits.

Theorem 1: The neural network defined by (25), (27), and (28) is globally stable; that is, it converges to a stable state corresponding to the initial state of the original system, regardless of the initial state of the neural network.

Proof: For the proposed neural network, define a Lyapunov function given by

(42)

where

(43)

Differentiating the Lyapunov function with respect to time yields

(44)


where

(45)

and

(46)

(47)

(48)

Substituting (46)–(48) and (25), (27), and (28) in (45) gives

(49)

where the following scalar equations relating the terms on the right side of (49) hold:

(50)

(51)

(52)

Then

(53)

Substituting (25) and (27) in (53) yields

(54)

From (22) and (23)


Subsequently

(55)

Similarly

(56)

Therefore, (57) can be obtained

(57)

Let

(58)

Then, (57) may be reexpressed as

(59)

where 0 denotes null matrices with appropriate dimensions, which are not necessarily square.

Since

by Lemma 1, it is observed that is positive–semidefinite. Therefore

is positive–semidefinite. As well, by Lemma 1, it can be obtained that

is positive–semidefinite. Since

is positive–definite, then

is positive–definite. Obviously

is positive–definite. Hence

(60)


or

(61)

The equality in (61) holds if and only if

(62)

for all i and k. Therefore, the proposed neural network is globally stable, and (62) determines its sole stable state. This derivation shows that the stable state of the neural network coincides with the optimal solution to the original optimization problem.
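The computation above follows the standard Lyapunov argument for gradient systems. In the generic notation assumed earlier (not recovered verbatim from the proof), for a flow ε dy/dt = −∇E(y) with V = E(y), one has

```latex
\dot{V} \;=\; \nabla E(y)^{T}\,\dot{y}
        \;=\; -\,\nabla E(y)^{T}\,\varepsilon^{-1}\,\nabla E(y) \;\le\; 0
```

with equality if and only if ∇E(y) = 0, which is the counterpart of the equilibrium condition (62).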

Theorem 2: The neural network defined by (36), (27), and (28) is globally stable; that is, it converges to a stable state corresponding to the initial state of the original system, regardless of the initial state of the neural network.

Proof: The proof of Theorem 2 is similar to that of Theorem 1. The differences are caused by the addition of the penalty terms, which are treated as follows.

The definition of the Lyapunov function is the same as that defined by (42) and (43). However, the time differentiation of the sub-Lyapunov function is reexpressed as

(63)

If u_i^{min} ≤ u_i(k) ≤ u_i^{max} holds, the penalty terms vanish, and the neural network defined by (36), (27), and (28) is simplified to the case defined by (25), (27), and (28). If this condition does not hold, the penalty functions built from g^+ and g^− are convex and twice differentiable; according to [37], their Hessian matrices are positive–semidefinite.

Hence, the stability of the proposed neural network defined by (36), (27), and (28) is proved.

The optimality analysis of the penalty-function-based neural networks for inequality-constrained optimization problems given in [38] and [39] is applicable to the neural network defined by (36), (27), and (28); that is, if c is sufficiently large, then at the equilibrium point the neural network defined by (36), (27), and (28) fulfills the Kuhn–Tucker optimality condition, with the penalty terms approaching the Lagrange multipliers associated with the bound constraints.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers and the Associate Editor for their valuable comments and suggestions on revising this paper.

REFERENCES

[1] M. S. Mahmoud, M. M. Sabry, and S. G. Foda, "A new approach to fuzzy control of interconnected systems," SAMS, vol. 42, no. 11, pp. 1623–1637, 2002.

[2] M. Jamshidi, Large-Scale Systems: Modeling, Control and Fuzzy Logic. Englewood Cliffs, NJ: Prentice-Hall, 1996.

[3] ——, Large-Scale Systems: Modeling and Control. Amsterdam, The Netherlands: North-Holland, 1983.

[4] M. G. Singh, Dynamical Hierarchical Control, rev. ed. Amsterdam, The Netherlands: North-Holland, 1980.

[5] M. K. Sundareshan, "Large-scale discrete systems: A two-level optimization scheme," Int. J. Syst. Sci., vol. 7, no. 8, pp. 901–909, 1976.

[6] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biol. Cybern., vol. 52, no. 3, pp. 141–152, 1985.

[7] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization network: An A/D converter, signal decision circuit and a linear programming circuit," IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533–541, May 1986.

[8] M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory. Hoboken, NJ: Wiley, 2003.

[9] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing. Chichester, U.K.: Wiley, 1993.

[10] M. Verleysen and P. Jespers, "An analog VLSI implementation of Hopfield's neural network," IEEE Micro, vol. 9, no. 6, pp. 46–55, Nov. 1989.

[11] S. P. Eberhardt, R. Tawel, T. X. Brown, T. Daud, and A. P. Thakoor, "Analog VLSI neural networks: Implementation issues and examples in optimization and supervised learning," IEEE Trans. Ind. Electron., vol. 39, no. 6, pp. 552–564, Dec. 1992.

[12] T. Asai, Y. Kanazawa, and Y. Amemiya, "A subthreshold MOS neuron circuit based on the Volterra system," IEEE Trans. Neural Netw., vol. 14, no. 5, pp. 1308–1312, Sep. 2003.

[13] S. Zhang and A. G. Constantinides, "Lagrange programming neural networks," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 39, no. 7, pp. 441–452, Jul. 1992.

[14] Y. Leung, K. Chen, Y. Jiao, X. Gao, and K. S. Leung, "A new gradient-based neural network for solving linear and quadratic programming problems," IEEE Trans. Neural Netw., vol. 12, no. 5, pp. 1074–1083, Sep. 2001.

[15] F. Araujo, B. Ribeiro, and L. Rodrigues, "A neural network for shortest path computation," IEEE Trans. Neural Netw., vol. 12, no. 5, pp. 1067–1073, Sep. 2001.

[16] R. Perfetti and E. Ricci, "Analog neural network for support vector machine learning," IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 1085–1091, Jul. 2006.


[17] A. D. Blas, A. Jagota, and R. Hughey, "Energy function-based approaches to graph coloring," IEEE Trans. Neural Netw., vol. 13, no. 1, pp. 81–91, Jan. 2002.

[18] W. J. Li and T. Lee, "Hopfield neural networks for affine invariant matching," IEEE Trans. Neural Netw., vol. 12, no. 6, pp. 1400–1410, Nov. 2001.

[19] Y. Zhang and J. Wang, "Global exponential stability of recurrent neural networks for synthesizing linear feedback control systems via pole assignment," IEEE Trans. Neural Netw., vol. 13, no. 3, pp. 633–644, May 2002.

[20] C. P. Wu and X. Huang, "Novel neural networks models for optimal control problems," Control Eng. Practice, vol. 2, no. 3, pp. 587–590, 1994.

[21] W. S. Tang and J. Wang, "Two recurrent neural networks for local joint torque optimization of kinematically redundant manipulators," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 30, no. 1, pp. 120–128, Feb. 2000.

[22] S. Xie, J. Huang, C. Zhao, and Z. Xu, "Application of neural network to hierarchical optimal control of the class of continuous time-varying large-scale systems," in Proc. IEEE Int. Conf. Intell. Process. Syst., Beijing, China, 1997, pp. 477–481.

[23] Z. G. Hou and C. P. Wu, "Simultaneous local optimization and coordination of dynamical large-scale systems," in Proc. IEEE Int. Conf. Ind. Technol., Shanghai, China, 1996, pp. 554–558.

[24] S. Huang, K. K. Tan, and T. H. Lee, "Decentralized control design for large-scale systems with strong interconnections using neural networks," IEEE Trans. Autom. Control, vol. 48, no. 5, pp. 805–810, May 2003.

[25] F. Nardi, N. Hovakimyan, and A. J. Calise, "Decentralized control of large-scale systems using single hidden layer neural networks," in Proc. Amer. Control Conf., Arlington, VA, 2001, vol. 4, pp. 3122–3127.

[26] T. P. Zhang, J. D. Mei, H. B. Jiang, and Y. Yi, "Decentralized direct adaptive neural network control of interconnected systems," in Proc. Int. Conf. Mach. Learn. Cybern., Shanghai, China, 2004, vol. 2, pp. 856–860.

[27] W. Liu, S. Jagannathan, D. C. Wunsch, II, and M. L. Crow, "Decentralized neural network control of a class of large-scale systems with unknown interconnections," in Proc. IEEE Int. Conf. Decision Control, Atlantis, Bahamas, 2004, pp. 4972–4977.

[28] A. Di Febbraro, T. Parisini, S. Sacone, and R. Zoppoli, "Neural approximations for feedback optimal control of freeway systems," IEEE Trans. Veh. Technol., vol. 50, no. 1, pp. 302–313, Jan. 2001.

[29] C. S. Leung and L. W. Chan, "Dual extended Kalman filtering in recurrent neural networks," Neural Netw., vol. 16, no. 2, pp. 223–239, 2003.

[30] L. Liao, "A recurrent neural network for N-stage optimal control problems," Neural Process. Lett., vol. 10, no. 3, pp. 195–200, 1999.

[31] Z. G. Hou, C. Wu, and P. Bao, "A neural network for hierarchical optimization of nonlinear large-scale systems," Int. J. Syst. Sci., vol. 29, no. 2, pp. 159–166, 1998.

[32] Z. G. Hou, "A hierarchical optimization neural network for large-scale dynamic systems," Automatica, vol. 37, no. 12, pp. 1931–1940, 2001.

[33] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods. Englewood Cliffs, NJ: Prentice-Hall, 1990.

[34] F. L. Lewis, Optimal Control. New York: Wiley, 1986.

[35] D. P. Bertsekas, Nonlinear Programming, 2nd ed. Belmont, MA: Athena Scientific, 1999.

[36] H. T. Siegelmann and S. Fishman, "Analog computation with dynamical systems," Physica D, vol. 120, no. 1–2, pp. 214–235, 1998.

[37] D. G. Luenberger, Linear and Nonlinear Programming, 2nd ed. Reading, MA: Addison-Wesley, 1984.

[38] C. Y. Maa and M. A. Shanblatt, "Linear and quadratic programming neural network analysis," IEEE Trans. Neural Netw., vol. 3, no. 4, pp. 580–594, Jul. 1992.

[39] ——, "A two-phase optimization neural network," IEEE Trans. Neural Netw., vol. 3, no. 6, pp. 1003–1009, Nov. 1992.

Zeng-Guang Hou (M'05) received the B.E. and M.E. degrees in electrical engineering from Yanshan University (formerly North-East Heavy Machinery Institute), Qinhuangdao, China, in 1991 and 1993, respectively, and the Ph.D. degree in electrical engineering from Beijing Institute of Technology, Beijing, China, in 1997.

From May 1997 to June 1999, he was a Postdoctoral Research Fellow at the Laboratory of Systems and Control, Institute of Systems Science, Chinese Academy of Sciences, Beijing. He was a Research Assistant at the Hong Kong Polytechnic University, Hong Kong SAR, China, from May 2000 to January 2001. From July 1999 to May 2004, he was an Associate Professor at the Institute of Automation, Chinese Academy of Sciences, and has been a Full Professor since June 2004. From September 2003 to October 2004, he was a Visiting Professor at the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, SK, Canada. He has published over 60 papers in journals and conference proceedings. His current research interests include neural networks, optimization algorithms, robotics, and intelligent control systems.

Dr. Hou is an Associate Editor of the IEEE Computational Intelligence Magazine and an Editorial Board Member of the International Journal of Intelligent Systems Technologies and Applications (IJISTA). He is a Guest Editor for special issues of the International Journal of Vehicle Autonomous Systems on Computational Intelligence and Its Applications to Mobile Robots and Autonomous Systems and of Soft Computing (Springer) on Fuzzy-Neural Computation and Robotics. He also served as Publicity Co-Chair of the IEEE World Congress on Computational Intelligence (WCCI) held in Vancouver, BC, Canada, in July 2006. He was/is a program committee member of several prestigious conferences, and he is serving as Program Chair of the 2007 International Symposium on Neural Networks.

Madan M. Gupta (M'63–SM'76–F'90–LF'02) received the B.E. (honors) and M.E. degrees in electronics–communications engineering from the Birla Engineering College, Pilani, India, in 1961 and 1962, respectively, and the Ph.D. degree in adaptive control systems from the University of Warwick, Warwick, U.K. In 1998, for his extensive contributions in neurocontrol, neurovision, and fuzzy-neural systems, he received an earned Doctor of Science (D.Sc.) degree from the University of Saskatchewan, Saskatoon, SK, Canada.

He is a Professor Emeritus in the College of Engineering and Director of the Intelligent Systems Research Laboratory at the University of Saskatchewan. He has authored or coauthored over 800 published research papers. He recently coauthored the seminal book Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory (New York: Wiley/IEEE Press, 2003). He previously coauthored Introduction to Fuzzy Arithmetic: Theory and Applications, the first book on fuzzy arithmetic, and Fuzzy Mathematical Models in Engineering and Management Science. Both of these books have had Japanese translations. He also coedited 19 books in the fields of adaptive control systems, fuzzy computing, neurocomputing, neurovision, and neurocontrol systems. His current research interests are in the areas of neurovision and neurocontrol systems, integration of fuzzy-neural systems, neuronal morphology of biological systems, intelligent and cognitive robotic systems, new paradigms in information processing, and chaos in neural systems. He is also developing new computational neural architectures and computational fuzzy neural networks for application to advanced robotics, aerospace, and industrial systems.

Dr. Gupta was elected Fellow of the IEEE for his contributions to the theory of fuzzy sets and adaptive control systems and for the advancement of the diagnosis of cardiovascular disease, Fellow of the International Society for Optical Engineering (SPIE) for his contributions to the field of neurocontrol and neurofuzzy systems, and Fellow of the International Fuzzy Systems Association (IFSA) for his contributions to fuzzy-neural systems. In 1991, he was corecipient of the Institute of Electrical Engineering Kelvin Premium. In 1998, he received the Kaufmann Prize Gold Medal for Research in the field of fuzzy logic. He has been elected a Visiting Professor and Special Advisor in the area of high technology to the European Centre for Peace and Development (ECPD), University for Peace, which was established by the United Nations.

Peter N. Nikiforuk received the B.Sc. degree in engineering physics from Queen's University, Kingston, ON, Canada, in 1952, and the Ph.D. degree in electrical engineering and the D.Sc. degree for research in control systems from Manchester University, Manchester, U.K., in 1955 and 1970, respectively.

He is a Dean Emeritus of the College of Engineering, the University of Saskatchewan, Saskatoon, SK, Canada. Previously, he was the Dean of Engineering for 23 years, Head of Mechanical Engineering for seven years, and Chair of the Division of Control


Engineering for five years. Prior to this, he worked in the defense industry in Canada and the United States. His fields of research are adaptive and electrohydraulic control systems.

Dr. Nikiforuk served as Chair or member of five Councils in Canada, was the recipient of seven Fellowships in Canada and England and seven other honors, and was a member of several Boards.

Min Tan received the B.S. degree in control engineering from Tsinghua University, Beijing, China, in 1986 and the Ph.D. degree in control theory and control engineering from the Institute of Automation, Chinese Academy of Sciences, Beijing, in 1990.

He is a Professor in the Laboratory of Complex Systems and Intelligent Science, Institute of Automation, Chinese Academy of Sciences. His research interests include advanced robot control, multirobot systems, biomimetic robots, and manufacturing systems.

Long Cheng received the B.S. degree (with honors) in control engineering from Nankai University, Tianjin, China, in July 2004. He is currently working towards the Ph.D. degree at the Institute of Automation, Chinese Academy of Sciences, Beijing, China.

His current research interests include neural networks, optimization, nonlinear control, and their applications to robotics.