

Computers and Chemical Engineering 29 (2005) 2078–2086

Iterative ant-colony algorithm and its application to dynamic optimization of chemical process

Bing Zhang a,b, Dezhao Chen a,∗, Weixiang Zhao c

a Department of Chemical Engineering, Zhejiang University, Hangzhou 310027, China
b Automation Institute, East China University of Science and Technology, Shanghai 200237, China
c Center for Air Resources Engineering and Science (CARES), Clarkson University, Potsdam, NY 13699-5708, USA

Received 30 June 2004; received in revised form 15 April 2005; accepted 26 May 2005. Available online 6 September 2005.

Abstract

For solving dynamic optimization problems of chemical processes with numerical methods, a novel algorithm named the iterative ant-colony algorithm (IACA) was developed in this paper; its main idea is to iteratively execute the ant-colony algorithm and gradually approximate the optimal control profile. The first step of IACA was to discretize the time interval and the control region so that the continuous dynamic optimization problem becomes a discrete one. The ant-colony algorithm was then used to seek the best control profile of the discrete dynamic system. At last, an iteration based on a region-reduction strategy was employed to obtain more accurate results and to enhance the robustness of the algorithm. The iterative ant-colony algorithm is easy to implement. The results of the case studies demonstrated the feasibility and robustness of this novel method. The IACA approach can be regarded as a reliable and useful optimization tool when the gradient is not available.
© 2005 Published by Elsevier Ltd.

Keywords: Dynamic optimization; Ant-colony algorithm; Iterative ant-colony algorithm

1. Introduction

A chemical process is often described by a group of very complex nonlinear differential equations. Dynamic optimization, which is often employed for chemical industry production and management, is to make a performance index optimal by controlling the operation variables (Emil & Jens, 2001; Eva et al., 2001; Kvamsdal et al., 1999). A typical dynamic optimization problem of a continuous process is described as follows (Roubos et al., 1997).

min J(u) = Φ(x(tf), tf) + ∫_0^{tf} Ψ(x(t), u(t), t) dt (1)

s.t. dx/dt = f(x(t), u(t), t)

x(0) = x0;  umin ≤ u(t) ≤ umax

where x and u are, respectively, called the state variables and the control variables.

Several methods to solve the dynamic optimization problem have been reported in the literature. In gradient algorithms based on the Hamiltonian function (Roubos et al., 1997), the time interval is first divided into a number of stages, and then for each time stage the local gradient of the objective function with respect to changes in the values of the control variables is calculated. Subsequently, the local sensitivities are used to adjust the control trajectories in order to improve the objective function. Gradient algorithms have proved to be reliable (Roubos, van Straten, & van Boxtel, 1999), but computing the gradient is not an easy job and, moreover, the gradient is not available at all times. More importantly, with these algorithms one can in many cases only find a local optimum.

∗ Corresponding author. Tel.: +1 401 874 2814; fax: +1 401 874 4689. E-mail address: [email protected] (D. Chen).
0098-1354/$ – see front matter © 2005 Published by Elsevier Ltd. doi:10.1016/j.compchemeng.2005.05.020



Nomenclature

C initial value of the pheromone densities
i index of the ith time stage
J performance index
k iteration number
m total number of ants
n predefined number of time stages
NC upper limit of the cycle number in each ACA iteration
p predefined number of partition points of the control searching region
q a constant
tf final time
u control variables
u0 baseline of the searching region
u* optimal control profile
umax upper limit of the feasible region
umin lower limit of the feasible region
w width of the searching region
x state variables

Greek symbols
ρ reduction ratio of the searching region
τ pheromone density
λ residue ratio of pheromone

Dynamic programming (DP) was developed based on Bellman's principle of optimality (Chen, Sun, & Chang, 2002) and has proved to be a feasible method when the gradient based on the Hamiltonian function is difficult to compute. Both the time interval and the control variables are discretized to a predefined number of values. Then a systematic backward search, in cooperation with system simulation models, is used to find the optimal path through the defined grid. To obtain a reasonable result, DP needs a large number of grid values for the state variables and the control variables; therefore, numerous integrations are needed at each time stage. Obviously, with an increase in the dimension of the problem, the well-known curse of dimensionality becomes unavoidable.

To avoid this difficulty, iterative dynamic programming (IDP), a modified DP, was proposed. By overcoming the curse of dimensionality through the use of coarse grid points and a region-reduction strategy, IDP not only promotes the efficiency of computation but also increases numerical accuracy. It should be noted that the basic principle of IDP is still to optimize one time stage in turn instead of optimizing all stages simultaneously (Luus, 1993; Dadebo & Mcauley, 1995; Rusnak et al., 2001). This principle is just the same as that of DP, and the approach also needs discretization of the state variables, just as DP does. When there are more state variables, it is still troublesome.

Genetic algorithm (GA) has become very popular for nonlinear dynamic optimization in recent years (Pham, 1998; Roubos et al., 1999). Roubos et al. used a real-coded chromosome to represent a feasible control profile. With crossover and mutation operators, the control profiles at all time stages are optimized simultaneously; this is its advantage over IDP. Continuous ant-colony algorithm (CACA), the integration of a derivation of ACA and GA, also has this advantage (Rajesh, Gupta, & Kusumakar, 2001). Nevertheless, searching for the optimum in continuous regions using either GA or CACA is troublesome.

Comparing the stochastic algorithms (GA or CACA) with IDP for dynamic optimization is instructive. The advantage of IDP lies in the discretization of the control region and the iteration process, which turn a complex continuous problem into a series of simple discrete problems; searching among a finite set of candidates is easier and simpler than searching in a continuous region. The merit of the stochastic algorithms is that the control profile at all time stages is optimized simultaneously, and it is also easier to compute the state variables and the performance index; that is to say, a stochastic algorithm does not need discretization of the state variables. So the integration of iteration and a stochastic algorithm is a good choice.

In this paper, a novel algorithm named the iterative ant-colony algorithm (IACA) is developed, the main idea of which is to iteratively execute the ant-colony algorithm and gradually approximate the optimal control profile. IACA is more concise than IDP because it does not need discretization of the state variables, and the control profiles at all time stages can be optimized simultaneously. It is easy to implement and is more efficient than GA and CACA because it searches for the optimum among finite candidates. IACA has demonstrated its feasibility and robustness with successful applications to various case studies.

2. An overview on ant-colony algorithm (ACA)

An ant colony, in spite of the simplicity of its individuals, presents a highly structured social organization. Because of this organization, an ant colony can accomplish complex tasks that in some cases far exceed the individual capacities of a single ant (Dorigo, Bonabeau, & Theraulaz, 2000). An analogy to the way that an ant colony functions has suggested the definition of a new computational paradigm, named the ant-colony algorithm (Dorigo, Maniezzo, & Colorni, 1996; Dorigo & Gambardella, 1997). It is a viable new approach to stochastic combinatorial optimization.

The ant-colony algorithm by Dorigo was first successfully applied to the traveling salesman problem (TSP). Given a set of towns, the TSP can be stated as the problem of finding a minimal-length closed tour that visits each town once. Each ant in the ant-colony algorithm is a simple agent with the following characteristics:


• it chooses the next town to go to with a probability that is a function of the distance between the towns and the amount of pheromone on the edges;

• it is forced to make legal tours, that is to say, transitions to already visited towns are disallowed until a tour is completed;

• when it completes a tour, it lays a substance called pheromone on each edge visited.

After each ant has completed a tour (i.e., a cycle), the pheromone intensity on all edges is updated, including evaporation and accumulation of pheromone. The updated pheromone intensity will influence the behavior of the ants in the next cycle. The more traffic on an edge, the more attractive the edge is, which implements an autocatalytic process. It can be seen that the main characteristics of this algorithm are positive feedback and distributed computation.

The successful application of the ant-colony algorithm to the TSP attracted great attention from researchers. The new heuristic has the following desirable merits: (i) it is versatile; (ii) it is robust; (iii) it is a population-based approach. It was then used for the just-in-time (JIT) sequencing problem, the frequency assignment problem, and the quadratic assignment problem (Maniezzo & Carbonaro, 2000; Patrick & McMullen, 2000; Talbi et al., 2001). Based on the efforts of Dorigo, the MAX–MIN ant system and the graph-based ant system were developed (Stutzle & Hoos, 2000; Walter & Gutjahr, 2000). The convergence of ACA was also discussed (Walter & Gutjahr, 2002).

As Dorigo foresaw, the ant-colony algorithm (ACA) has been applied in many fields, such as electric power systems (Song, Chou, & Stonham, 1999) and chemical engineering (Jayaraman et al., 2000).

Recently, the continuous ant-colony algorithm (CACA), a derivation of the ant-colony algorithm combined with ideas from GA, was proposed (Jayaraman et al., 2000). CACA was then used for dynamic optimization (Rajesh et al., 2001), in which CACA was applied to optimize a piecewise linear profile approximating the dynamic system simultaneously, and satisfactory results were obtained.

3. Iterative ant-colony algorithm (IACA)

According to the discussion in Section 1, the mainframe of IACA used in this paper includes: (i) discretizing the time interval and control region, (ii) searching the optimal control profile of the discrete system using the ant-colony algorithm (ACA), and (iii) reducing the search region and returning to step (i) for the next iteration until reaching convergence.

3.1. Discretization of time interval and control region

Uniformly divide the time interval [0, tf] into n stages, so that the ith time stage is [ti, ti+1] (i = 1, 2, . . ., n), where t1 = 0 and tn+1 = tf. IACA searches for the best control profile of this discrete-time system as an approximation of the continuous problem.

For dynamic optimization with a single control variable, let u_i^{0(k)} and w^{(k)}, respectively, represent the baseline and the width of the searching region in the kth step of iteration. At the beginning of iteration, that is, when k = 0, w^{(0)} and u_i^{0(0)} are calculated with Eqs. (2) and (3).

w^{(0)} = umax − umin (2)

u_i^{0(0)} = (umax + umin)/2,  i = 1, 2, . . ., n (3)

where umax and umin are the upper and lower limits of the control feasible region. During the iterations, w^{(k)} and u_i^{0(k)} (k = 1, 2, . . .) will be changed and decreased in due course in order to prevent the deviation caused by discretization. The detailed description is presented in Section 3.4.

Let [u_i^{min(k)}, u_i^{max(k)}] denote the searching region of the control variable during the ith time stage in the kth iteration. It is given by Eqs. (4) and (5).

u_i^{max(k)} = u_i^{0(k)} + w^{(k)}/2,  i = 1, 2, . . ., n (4)

u_i^{min(k)} = u_i^{0(k)} − w^{(k)}/2,  i = 1, 2, . . ., n (5)

Uniformly divide the searching region [u_i^{min(k)}, u_i^{max(k)}] into (p − 1) parts, as shown in Fig. 1. The feasible control variable in [ti, ti+1] is chosen only among the values of the partition points. Thus, a complete feasible control profile is represented by piecewise constant functions, for example, the fat line shown in Fig. 1. The ant-colony algorithm in Section 3.3 aims at finding the best control profile in Fig. 1.

In order to make u_i^{0(k)} lie on the middle line of [u_i^{min(k)}, u_i^{max(k)}], the value of p should be an odd number.
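As a concrete illustration (a sketch, not the authors' code), the searching-region bookkeeping of Eqs. (2)–(5) can be written in a few lines; the function and variable names are ours.

```python
# Sketch of the searching-region bookkeeping of Eqs. (2)-(5).
# Function and variable names are illustrative, not from the paper.

def initial_region(u_min, u_max, n):
    """Eqs. (2)-(3): initial width and per-stage baselines."""
    w = u_max - u_min
    u0 = [(u_max + u_min) / 2.0] * n
    return w, u0

def stage_bounds(u0_i, w):
    """Eqs. (4)-(5): searching region of one time stage in iteration k."""
    return u0_i - w / 2.0, u0_i + w / 2.0
```

For the temperature problem of Section 5.1 (u in [298, 398]), the first iteration searches the whole feasible region; later iterations shrink w and recenter each stage's region.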

3.2. Route and control profile

All partition points in Fig. 1 are marked by the integers from 1 to p in turn. As shown in Fig. 2, a two-dimensional grid with p × n nodes is constructed. A route of an ant is used to represent a feasible control profile; thus a route has n elements, each of which represents a feasible control strategy at one time stage. For example, the feasible control profile denoted by the fat line in Fig. 1 is represented by the n black nodes in Fig. 2, which form a complete route of {2, 3, 3, 2, . . ., 2, . . ., 3}. Use route_i to represent the value of the ith element of a route; then the corresponding control strategy can be represented by Eq. (6).

Fig. 1. Discretization of time interval and control region.

u_i = u_i^{min(k)} + (route_i − 1) w^{(k)}/(p − 1) (6)

For a problem with two control variables u1 and u2, uniformly divide the searching region of each control variable into (p − 1) parts, as in the single-variable case. Use one route to represent the two feasible control profiles; thus a route has 2n elements route_i (i = 1, 2, . . ., 2 × n). The former n elements represent u1, while the latter n elements represent u2.

Thus, we build a relationship between feasible control profiles and routes of the ant colony. When routes are given, the state variables and the performance index can be computed conveniently through the state equation and the objective function. The fitness of each ant can also be computed; the details are presented in Section 3.3. It can be seen that the inconvenience of IDP (i.e., the discretization of the state variables) is avoided.
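The route-to-profile mapping of Eq. (6) is, in effect, a table lookup. A minimal sketch (names ours, not the authors' code):

```python
# Sketch of Eq. (6): a route element in {1, ..., p} selects one of the p
# partition points of the stage's searching region. Names are illustrative.

def profile_from_route(route, u_min_stages, w, p):
    """Return the piecewise-constant control value for each time stage."""
    step = w / (p - 1)
    return [u_min_stages[i] + (route[i] - 1) * step for i in range(len(route))]
```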

3.3. Implementation of ACA

The ant-colony algorithm aims at finding the best control profile in Fig. 1. Let the total number of ants be m = p × n. Place the m ants on the m nodes in Fig. 2, so that each ant occupies a node. The pheromone density on the vth node in the ith column in the gth cycle is represented by τ(v, i, g), which can also be called attraction power. Let the initial values of the pheromone densities of all nodes be a constant, that is, τ(v, i, 0) = C (v = 1, 2, . . ., p; i = 1, 2, . . ., n).

3.3.1. Migration of ants
Force the m ants to move from left to right in turn. When an ant moves from the ith column to the (i + 1)th column, it is necessary to first calculate the transition probability

P_iv = τ(v, i + 1, g) / Σ_{v=1}^{p} τ(v, i + 1, g),  v = 1, 2, . . ., p (7)

where P_iv is the probability for this ant to move to the vth node in the (i + 1)th column, which can also be called the attraction power weight of the vth node. Then move this ant to the vth node in the (i + 1)th column with probability P_iv, and record the movement of this ant. If i = n, force the ant to move back to the first column.

The m ants move in turn according to the above regulation until each ant has gone through the n columns and traveled n nodes. Thus, m routes, which represent m feasible control profiles, come into being.
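The migration rule of Eq. (7) is a roulette-wheel draw over the next column's pheromone densities. A sketch (names ours, not the authors' code):

```python
import random

# Sketch of the migration rule of Eq. (7): the node in column i+1 is drawn
# with probability proportional to its pheromone density (roulette wheel).
# Names are illustrative, not from the paper.

def choose_node(tau_column, rng):
    """tau_column: pheromone densities of the p nodes of one column."""
    r = rng.random() * sum(tau_column)
    acc = 0.0
    for v, tau in enumerate(tau_column, start=1):
        acc += tau
        if r <= acc:
            return v
    return len(tau_column)  # guard against floating-point round-off

def build_route(tau, rng):
    """tau[i]: densities of column i; a route visits one node per column."""
    return [choose_node(column, rng) for column in tau]
```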

3.3.2. Calculation of fitness
After each ant has gone through the n columns, calculate u_i (i = 1, 2, . . ., n) according to Eq. (6) and the objective function by a numerical method. Fitness is then computed by Eq. (8).

fit(j, g) = (Jmax − J[u(j, g)]) / (Jmax − Jmin) (8)

where Jmax = max_j {J[u(j, g)]}, Jmin = min_j {J[u(j, g)]}, and J[u(j, g)] is the objective function of the jth ant. If Jmax = Jmin, fit(j, g) = 1.
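The fitness scaling of Eq. (8) maps the best ant of a cycle to 1 and the worst to 0. A sketch (names ours):

```python
# Sketch of the fitness scaling of Eq. (8). J_values holds the objective of
# each ant in the current cycle; smaller J means larger fitness (for a
# minimization problem). Names are illustrative.

def fitness(J_values):
    J_max, J_min = max(J_values), min(J_values)
    if J_max == J_min:          # degenerate cycle: all ants score fit = 1
        return [1.0] * len(J_values)
    return [(J_max - J) / (J_max - J_min) for J in J_values]
```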


Fig. 2. Route of a control profile.

3.3.3. Update of pheromone density
After all ants have traveled complete routes, the previous pheromone density is evaporated, but new pheromone substance is laid on a node by every ant that goes through it. For node (v, i), the updated pheromone density is calculated by Eq. (9).

τ(v, i, g + 1) = λτ(v, i, g) + Δτ(v, i, g) (9)

where λ is the residue ratio of pheromone and Δτ(v, i, g) is the increased part of the pheromone density on node (v, i), calculated by Eq. (10).

Δτ(v, i, g) = q Σ_j fit(j, g), where the sum is taken over the ants j that have gone through node (v, i) (10)

where q is a constant. It can be seen that, if a node lies on a tour with a larger value of fitness, the pheromone density on that node will become larger. In the next cycle, this node will display more attraction power to ants; in return, more ants go through this node and lay pheromone there. This is exactly the mechanism of positive feedback.

3.3.4. Start next cycle
Let g = g + 1 and let the ants start the next tour, until all ants go through the same trip or the number of cycles reaches its upper limit NC. Use u_i^* (i = 1, 2, . . ., n) to denote the optimal control profile.
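The two-step pheromone update of Eqs. (9) and (10) can be sketched as follows; `lam` stands for the residue ratio, and all names are ours, not the authors' code.

```python
# Sketch of the pheromone update of Eqs. (9)-(10): evaporation by the
# residue ratio lam, then deposition of q * fit(j) by each ant j on every
# node of its route. Names (lam, q, routes) are illustrative.

def update_pheromone(tau, routes, fits, lam, q):
    new_tau = [[lam * t for t in column] for column in tau]   # Eq. (9)
    for route, fit in zip(routes, fits):                      # Eq. (10)
        for i, v in enumerate(route):                         # v in {1..p}
            new_tau[i][v - 1] += q * fit
    return new_tau
```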


Convergence is one of the criteria to check whether an optimization algorithm is feasible or not. The iteration process of IACA is a positive feedback process that makes the pheromone substance on nodes with more pheromone become more and more abundant, while the pheromone on nodes with less pheromone becomes less and less. As the natural result of positive feedback, all ants finally travel the same route. Thus, the convergence of ACA is guaranteed. Several case studies tested and verified the convergence of ACA in Section 5.

3.4. Iteration

Discretization of the control region inevitably brings deviation and makes the result inaccurate. It is necessary to move the baseline for the next iteration to the optimal value u_i^* (i = 1, 2, . . ., n) just obtained in the last iteration, and also to use the region-reduction strategy represented by Eqs. (11) and (12) to make the results more accurate.

u_i^{0(k)} = u_i^*,  i = 1, 2, . . ., n,  k = 1, 2, 3, . . . (11)

w^{(k)} = ρ w^{(k−1)} (12)

where ρ is the reduction ratio, 0 < ρ < 1. Then return to Section 3.1, and start discretization and run the ant-colony algorithm again, until |J^{(k)} − J^{(k+1)}| < ε, where J^{(k)} is the best objective function in the kth iteration.
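The region-reduction step of Eqs. (11)–(12) and the stopping test can be sketched as below (names ours, not the authors' code):

```python
# Sketch of the region-reduction step of Eqs. (11)-(12): the baseline moves
# to the best profile of the last iteration and the width shrinks by rho.
# Names are illustrative.

def reduce_region(u_star, w, rho):
    u0_next = list(u_star)     # Eq. (11)
    w_next = rho * w           # Eq. (12)
    return u0_next, w_next

def converged(J_prev, J_curr, eps):
    """Stopping test |J(k) - J(k+1)| < eps of Section 3.4."""
    return abs(J_prev - J_curr) < eps
```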

3.5. Main procedure of algorithm

In summary, given tf, umax, and umin, the main procedure of IACA is described as below (the denotation of each variable employed here is shown in the Nomenclature):

Step 1: Determine the values of n, p, ρ, C, λ, q and NC, let k = 0, and compute m, u_i^{0(0)} and w^{(0)};
Step 2: Execute the discretization of the time interval and of the searching region of the control variables according to Section 3.1;
Step 3: Search u_i^* (i = 1, 2, . . ., n) using the ant-colony algorithm according to Section 3.3;
Step 4: Let k = k + 1 and compute u_i^{0(k)} and w^{(k)} according to Section 3.4;
Step 5: If |J^{(k)} − J^{(k+1)}| > ε, return to Step 2; else stop the iteration.

4. Parameters setting

The time interval partition number n influences the precision of the control profile and the objective function. There is no rule for setting this number, but an oversize value is unfavorable.

The variable region partition number p influences the precision of the control profile in each step of iteration. Without the iteration process, a very large partition number would be required to explore the whole searching region and reach an optimal solution. With iteration, a small partition number like 5 or 7 is sufficient for complete exploration, which significantly lightens the workload of discretization and ACA.

ρ is the reduction ratio. Too small a value cannot guarantee that the deviation of the suboptimal profile is eliminated during the iterations, while too large a value will increase the workload of IACA. It is suggested that the value of ρ be chosen in [0.5, 0.8].

NC, λ, q and C are the parameters of ACA. It is suggested that the value of NC be large enough to ensure that as many ants as possible go through the same trip, so it varies from problem to problem. The values of λ and q/C influence the updated pheromone density. According to Dorigo (Dorigo, Maniezzo, & Colorni, 1996), the value of λ is chosen as 0.8 in this paper; the value of q/C is suggested to be 0.2 based on our empirical studies on some problems.

5. Case studies for IACA

In this section, IACA is applied to several case studies. The values of the parameters are: p = 5, ρ = 0.5, ε = 0.0001, λ = 0.8, and q/C = 0.2. The value of NC is determined by simulation results.
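Putting the pieces of Sections 3.1–3.5 together, an end-to-end sketch on a toy separable objective (our own illustration, not the authors' code or one of the paper's test problems) might look like this; the parameter defaults follow the values above.

```python
import random

# End-to-end sketch of the IACA procedure of Section 3.5 on a toy problem:
# minimize J(u) = sum_i (u_i - 1.2)^2 with u_i in [0, 2]. The toy objective
# and all names are ours; defaults follow Section 5 (p=5, rho=0.5,
# lam=0.8, q/C=0.2).

def _pick(column, rng):
    """Roulette-wheel node choice, Eq. (7); returns an index 0..p-1."""
    r = rng.random() * sum(column)
    acc = 0.0
    for v, t in enumerate(column):
        acc += t
        if r <= acc:
            return v
    return len(column) - 1

def iaca(objective, u_min, u_max, n, p=5, rho=0.5, lam=0.8, q=0.2,
         NC=60, eps=1e-4, seed=0):
    rng = random.Random(seed)
    w = u_max - u_min                        # Eq. (2)
    u0 = [(u_max + u_min) / 2.0] * n         # Eq. (3)
    prev_J = None
    while True:
        # node v at stage i maps to a control value, Eqs. (4)-(6)
        values = [[u0[i] - w / 2 + v * w / (p - 1) for v in range(p)]
                  for i in range(n)]
        tau = [[1.0] * p for _ in range(n)]  # tau(v, i, 0) = C = 1
        m = p * n
        best_J, best_route = float("inf"), None
        for _ in range(NC):
            routes = [[_pick(tau[i], rng) for i in range(n)]
                      for _ in range(m)]
            Js = [objective([values[i][r[i]] for i in range(n)])
                  for r in routes]
            J_max, J_min = max(Js), min(Js)
            if J_max == J_min:                            # Eq. (8)
                fits = [1.0] * m
            else:
                fits = [(J_max - J) / (J_max - J_min) for J in Js]
            for i in range(n):                            # Eq. (9)
                tau[i] = [lam * t for t in tau[i]]
            for route, fit in zip(routes, fits):          # Eq. (10)
                for i, v in enumerate(route):
                    tau[i][v] += q * fit
            j = min(range(m), key=Js.__getitem__)
            if Js[j] < best_J:
                best_J, best_route = Js[j], routes[j]
        u0 = [values[i][best_route[i]] for i in range(n)]  # Eq. (11)
        w *= rho                                           # Eq. (12)
        if prev_J is not None and abs(prev_J - best_J) < eps:
            return u0, best_J
        prev_J = best_J
```

On this toy problem the routine recenters and shrinks the grid each outer iteration, so the returned profile approaches the constant optimum u_i = 1.2.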

5.1. Problem 1: Temperature control for consecutive reaction

Problem 1 is the consecutive reaction problem studied by many researchers (Rajesh et al., 2001). The objective is to obtain the optimal temperature profile that maximizes the yield of the intermediate product A2 at the end of 1 h of operation in a batch reactor where the reaction A1 → A2 → A3 takes place. The math model is described in Eq. (13). A value of 0.610775 for the optimum has been reported for this case. With CACA, Rajesh obtained a value of 0.61045.


max J(T) = x2(tf) (13)

s.t. dx1/dt = −4000 exp(−2500/T) x1^2,  x1(0) = 1

dx2/dt = 4000 exp(−2500/T) x1^2 − 620000 exp(−5000/T) x2,  x2(0) = 0

298 ≤ T ≤ 398,  tf = 1

where x1 is the concentration of A1, x2 the concentration of A2, and T the temperature (K).

Let n = 10, and use IACA to solve the problem. In the first iteration (k = 0), the curve of the best objective function versus the number of cycles is shown in Fig. 3. It shows that all ants go through the same trip when the number of cycles reaches 120, and ACA is convergent. Give NC a conservative value of 150. The state variables are computed using the standard Runge–Kutta method.
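For reference, the model of Eq. (13) can be integrated with a classical fourth-order Runge–Kutta scheme as sketched below; this is our own illustration (names ours), and the sample profile in the test is arbitrary, not the optimal profile of Fig. 4.

```python
import math

# RK4 integration of the consecutive-reaction model of Eq. (13) under a
# piecewise-constant temperature profile. Function names are ours.

def rates(x1, x2, T):
    k1 = 4000.0 * math.exp(-2500.0 / T)
    k2 = 620000.0 * math.exp(-5000.0 / T)
    return -k1 * x1 ** 2, k1 * x1 ** 2 - k2 * x2

def yield_x2(profile, tf=1.0, steps_per_stage=50):
    """Return J(T) = x2(tf) for a piecewise-constant temperature profile."""
    h = tf / (len(profile) * steps_per_stage)
    x1, x2 = 1.0, 0.0        # initial conditions of Eq. (13)
    for T in profile:
        for _ in range(steps_per_stage):
            a1, a2 = rates(x1, x2, T)
            b1, b2 = rates(x1 + h * a1 / 2, x2 + h * a2 / 2, T)
            c1, c2 = rates(x1 + h * b1 / 2, x2 + h * b2 / 2, T)
            d1, d2 = rates(x1 + h * c1, x2 + h * c2, T)
            x1 += h * (a1 + 2 * b1 + 2 * c1 + d1) / 6
            x2 += h * (a2 + 2 * b2 + 2 * c2 + d2) / 6
    return x2
```

For instance, `yield_x2([300.0] * 10)` evaluates a constant 300 K policy; IACA compares candidate profiles through exactly this kind of objective evaluation.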


Fig. 3. Objective function vs. number of cycles for problem 1 (n = 10).

Fig. 4. Optimal control profile for problem 1 (n = 10).

IACA was carried out 20 times. Each time it ended after four steps of iteration without exception, and the same values of the objective function and optimal control profile were obtained. This fact shows that the algorithm is stable and robust. The objective function by IACA is 0.6100. The optimal control profile is shown in Fig. 4.

Let n = 20. In the first step of iteration, the curve of the best objective function versus the number of cycles is shown in Fig. 5. It shows that all ants go through the same trip when the number of cycles reaches 220, and again ACA is convergent. So let NC = 220. This time we were able to reach a value of 0.6104. The optimal control profile for n = 20 is shown in Fig. 6. It can be seen that the objective function was improved with the increase of the value of n.

Fig. 5. Objective function vs. number of cycles for problem 1 (n = 20).

Fig. 6. Optimal control profile for problem 1 (n = 20).

As for the objective function, there is almost no difference between CACA and IACA. The results of this case show that IACA is feasible.

5.2. Problem 2

Problem 2 is a four-state-variable system in which the state variable x4 is to be minimized. The math model is described in Eq. (14) (Rajesh et al., 2001). A value of 0.12011 as the optimum has been reported using the iterative dynamic programming technique. With CACA, Rajesh obtained a value of 0.1290, which is not very satisfying.

min J(u) = x4(tf) (14)

s.t. dx1/dt = x2,  dx2/dt = −x3 u + 16t − 8,  dx3/dt = u

dx4/dt = x1^2 + x2^2 + 0.0005 (x2 + 16t − 8 − 0.1 x3 u^2)^2

x(0) = (0, −1, −√5, 0)^T,  −4 ≤ u ≤ 10,  tf = 1

Let n = 16, and use IACA to solve the problem. When k = 0, the curve of the best objective function versus the number of cycles is shown in Fig. 7. It illustrates that nearly all ants go through the same trip when the number of cycles reaches 600. Let NC = 600. The current control profile is perhaps not the best at this stage, but iteration and rearrangement of the ant colony help to find the optimum in the end.

The state variables are computed using the standard Runge–Kutta method. IACA was carried out 20 times, and each time it ended after seven iterations. Finally, the minimal value obtained reached 0.1235, which lies within


Fig. 7. Objective function vs. number of cycles for problem 2.

±2.82% of the reported optimum. The optimal control profile is given in Fig. 8. For problem 2, IACA performs better than CACA.

5.3. Problem 3: Feeding-rate optimization of foreign protein production

Foreign protein production by recombinant bacteria has attracted the attention of researchers (Lee & Ramirez, 1994; Roubos et al., 1999). The nutrient and the inducer are requisite for foreign protein production. The detailed model is shown in Appendix A. In order to yield maximal economic benefit, it is required to control the glucose feed rate and the inducer feed rate. By GA, Roubos et al. obtained an optimal value of 0.8149.

Let n = 10, and use IACA to solve the problem. Now the total number of ants, m, is 100. In the first step of iteration, the curve of the best objective function versus the number of cycles is shown in Fig. 9. It illustrates that almost all ants go through the same trip when the number of cycles reaches 400. Let NC = 400. This is large enough for the optimum to be found.

Fig. 9. Objective function vs. number of cycles for feeding problem.

Fig. 10. Optimal profile of glucose feed rate.

The state variables are computed using the standard Runge–Kutta method. IACA was carried out 20 times, and each time it ended after three iterations (1200 cycles in total). The maximal value reached 0.8158. The optimal control profiles by IACA are shown in Figs. 10 and 11. For this problem, Roubos et al. (1999) obtained a value of 0.8149 using GA after 25,000 generations. Clearly, IACA provides a better result at a faster speed.

Fig. 11. Optimal profile of inducer feed rate.

Fig. 8. Optimal control profile for problem 2.

6. Conclusion

In this work, the utility of the iterative ant-colony algorithm (IACA) has been illustrated for solving dynamic optimization problems. The robust algorithm, which iteratively executes the ant-colony algorithm, has the merits of both IDP and stochastic algorithms. Several case studies showed that ACA is convergent and that IACA has the ability to provide optimal solutions.

Dynamic optimization problems are often encountered in the design and operation of chemical systems, and the IACA approach can be regarded as a reliable and useful optimization tool for solving such problems.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (20276063).

Appendix A

Foreign protein production using recombinant bacteria was modeled by Lee and Ramirez. The model is described below, involving seven state variables and two control variables.

dx1/dt = u1 + u2

dx2/dt = μ x2 − ((u1 + u2)/x1) x2

dx3/dt = (u1/x1) Cnf − ((u1 + u2)/x1) x3 − Y^{−1} μ x2

dx4/dt = Rfp x2 − ((u1 + u2)/x1) x4

dx5/dt = (u2/x1) Cif − ((u1 + u2)/x1) x5

dx6/dt = −k1 x6

dx7/dt = k2 (1 − x7)

The state variables (x1, x2, . . ., x7) are: reaction volume (x1 in L), cell density (x2 in g/L), nutrient concentration (x3 in g/L), foreign protein concentration (x4 in g/L), inducer concentration (x5 in g/L), the inducer shock factor on the cell growth rate (x6), and the inducer recovery factor on the cell growth rate (x7). The two control variables (u1, u2) are the glucose feed rate (L/h) and the inducer feed rate (L/h), respectively. Additional variables are the growth yield coefficient (Y), the nutrient concentration in the nutrient feed (Cnf in g/L), the inducer concentration in the inducer feed (Cif in g/L), the specific growth rate (μ in h^−1), the foreign protein production rate (Rfp in h^−1), and the shock and recovery parameters (k1, k2).

Consider the production of β-galactosidase by Escherichia coli D1210 and plasmid pSD8. Isopropylthiogalactoside (IPTG) was used as the inducer.

The parameters of this bacterial system are given below.

μ = [0.407 x3 / (0.108 + x3 + x3^2/14814.8)] × [x6 + x7 (0.22/(0.22 + x5))],  Cif = 4.0

Rfp = [0.095 x3 / (0.108 + x3 + x3^2/14814.8)] × [(0.0005 + x5)/(0.022 + x5)],  Cnf = 100.0

k1 = k2 = 0.09 x5/(0.034 + x5),  Y = 0.51

The objective is to maximize J(u1, u2) by controlling the glucose feed rate (u1) and the inducer feed rate (u2).

J(u1, u2) = x4(tf) x1(tf) − Q ∫_0^{tf} u2(t) dt

where Q is the price ratio of inducer to product; a value of 5 is chosen. tf = 10 h. The initial conditions for the system are x(0) = [1, 0.1, 40, 0, 0, 1, 0]^T. The control variables (u1, u2) are restricted to the range [0, 10^−2 L/h].


elow, which involves seven state variables and two coariables.

dx1

dt= u1 + u2

dx2

dt= µx2 − u1 + u2

x1x2

dx3

dt= u1

x1Cnf − u1 + u2

x1x3

−Y−1µx2dx4

dt= Rfpx2 − u1 + u2

x1x4

dx5

dt= u2

x1Cif − u1 + u2

x1x5

dx6

dt= −k1x6

dx7

dt= k2(1 − x7)

The state variables (x1, x2, . . ., x7) are: reaction volumx1 in L), cell density (x2 in g/L), nutrient concentration (x3 in/L), foreign protein concentration (x4 in g/L), inducer conentration (x5 in g/L), inducer shock factor on the cell growate (x6), and the inducer recovery factor on the cell groate (x7). The two control variables (u1, u2) are the glucoseed rate (L/h) and the inducer feed rate (L/h), respectidditional variables are growth yield coefficient (Y), nutrientoncentration in nutrient feed (Cnf in g/L), inducer concenration in inducer feed (Cif in g/L), specific growth rate (µ in−1), foreign protein production rate (Rfp in h−1), shock andecovery parameters (k1, k2).

Consider the production of �-galactosidase bscherichia coli D1210 and plasmid pSD8. Isoprop

thiogalactoside (IPTG) was used as the inducer. The sy
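The Lee–Ramirez model equations and objective can be expressed compactly in code; the following is a minimal sketch (function names `lee_ramirez_rhs` and `objective` are illustrative, not from the paper), using the parameter values Cnf = 100.0, Cif = 4.0, Y = 0.51, and Q = 5 given in this appendix.

```python
def lee_ramirez_rhs(x, u1, u2, Cnf=100.0, Cif=4.0, Y=0.51):
    """Right-hand side of the seven-state Lee-Ramirez fed-batch model.

    x = [volume, cell density, nutrient, foreign protein, inducer,
         shock factor, recovery factor]; u1, u2 are glucose and
    inducer feed rates (L/h)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    D = (u1 + u2) / x1                         # dilution term (u1+u2)/x1
    g = x3 / (0.108 + x3 + x3**2 / 14814.8)    # shared substrate term
    mu = 0.407 * g * (x6 + x7 * 0.22 / (0.22 + x5))      # specific growth rate
    Rfp = 0.095 * g * (0.0005 + x5) / (0.022 + x5)       # protein production rate
    k1 = k2 = 0.09 * x5 / (0.034 + x5)                   # shock/recovery parameters
    return [
        u1 + u2,                                 # dx1/dt: volume
        mu * x2 - D * x2,                        # dx2/dt: cell density
        (u1 / x1) * Cnf - D * x3 - mu * x2 / Y,  # dx3/dt: nutrient
        Rfp * x2 - D * x4,                       # dx4/dt: foreign protein
        (u2 / x1) * Cif - D * x5,                # dx5/dt: inducer
        -k1 * x6,                                # dx6/dt: shock factor
        k2 * (1.0 - x7),                         # dx7/dt: recovery factor
    ]


def objective(x_tf, u2_integral, Q=5.0):
    """J(u1, u2) = x4(tf) * x1(tf) - Q * integral of u2 over [0, tf]."""
    return x_tf[3] * x_tf[0] - Q * u2_integral
```

Integrating this right-hand side over [0, 10] h from x(0) = [1, 0.1, 40, 0, 0, 1, 0] for a candidate control profile, and accumulating the integral of u2, gives the value of J to be maximized.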

References

Chen, C. L., Sun, D. Y., & Chang, C. Y. (2002). Numerical solution of dynamic optimization problems with flexible inequality constraints by iterative dynamic programming. Fuzzy Sets and Systems, 127, 165–176.

Dadebo, S. A., & Mcauley, K. B. (1995). Dynamic optimization of constrained chemical engineering problems using dynamic programming. Computers and Chemical Engineering, 19(5), 513–525.

Dorigo, M., Maniezzo, V., & Colorni, A. (1996). The ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics-Part B, 26(1), 1–13.

Dorigo, M., & Gambardella, L. M. (1997). Ant colonies for the traveling salesman problem. BioSystems, 43, 73–81.

Dorigo, M., Bonabeau, E., & Theraulaz, G. (2000). Ant algorithms and stigmergy. Future Generation Computer Systems, 16, 851–871.

Eva, B. C., Julio, R., Banga, A. A., et al. (2001). Dynamic optimization of chemical and biochemical processes using restricted second-order information. Computers and Chemical Engineering, 25, 539–546.

Emil, H. E., & Jens, G. B. (2001). Dynamic optimization and production planning of thermal cracking operation. Chemical Engineering Science, 56, 989–997.

Jayaraman, V. K., Kulkarni, B. D., Karale, S., et al. (2000). Ant colony framework for optimal design and scheduling of batch plants. Computers and Chemical Engineering, 24, 1901–1912.

Kvamsdal, H. M., Svendsen, H. F., Hertzberg, T., et al. (1999). Dynamic simulation and optimization of a catalytic steam reformer. Chemical Engineering Science, 54, 2697–2706.

Lee, J., & Ramirez, W. F. (1994). Optimal fed-batch control of induced foreign protein production by recombinant bacteria. AIChE Journal, 40, 899–907.

Luus, R. (1993). Optimization of fed-batch fermentors by iterative dynamic programming. Biotechnology and Bioengineering, 41, 599–602.



Maniezzo, V., & Carbonaro, A. (2000). An ANTS heuristic for the frequency assignment problem. Future Generation Computer Systems, 16, 927–935.

McMullen, P. R. (2000). An ant colony optimization approach to addressing a JIT sequencing problem with multiple objectives. Artificial Intelligence in Engineering, 15, 309–317.

Pham, Q. T. (1998). Dynamic optimization of chemical engineering processes by an evolutionary method. Computers and Chemical Engineering, 22(7), 1089–1097.

Rajesh, J., Gupta, K., & Kusumakar, H. S. (2001). Dynamic optimization of chemical processes using ant colony framework. Computers and Chemistry, 25, 583–595.

Roubos, J. A., De Gooijer, C. D., van Straten, G., et al. (1997). Comparison of optimization methods for fed-batch cultures of hybridoma cells. Bioprocess Engineering, 17, 99–102.

Roubos, J. A., van Straten, G., & van Boxtel, A. J. B. (1999). An evolutionary strategy for fed-batch bioreactor optimization: Concepts and performance. Journal of Biotechnology, 67, 173–187.

Rusnak, A., Fikar, M., Latifi, M. A., et al. (2001). Receding horizon iterative dynamic programming with discrete time models. Computers and Chemical Engineering, 25, 161–167.

Song, Y. H., Chou, C. S., & Stonham, T. J. (1999). Combined heat and power economic dispatch by improved ant colony search algorithm. Electric Power Systems Research, 52, 115–121.

Stutzle, T., & Hoos, H. H. (2000). MAX–MIN ant system. Future Generation Computer Systems, 16, 889–914.

Talbi, E. G., Roux, O., Fonlupt, C., et al. (2001). Parallel ant colonies for the quadratic assignment problem. Future Generation Computer Systems, 17, 441–449.

Gutjahr, W. J. (2000). A graph-based ant system and its convergence. Future Generation Computer Systems, 16, 873–888.

Gutjahr, W. J. (2002). ACO algorithms with guaranteed convergence to the optimal solution. Information Processing Letters, 82, 145–153.