
[IEEE Sixth International Conference on Machine Learning and Applications (ICMLA 2007) - Cincinnati, OH, USA (2007.12.13-2007.12.15)] Sixth International Conference on Machine Learning



A New Ant Evolution Algorithm to Resolve TSP Problem

QingBao Zhu, Shuyan Chen

Zhu Qing-bao is a Professor in the Department of Computer Science of Nanjing Normal University. His research interests include AI and intelligent optimization.

E-mail:[email protected]

Abstract

The Traveling Salesman Problem (TSP) is a combinatorial optimization problem. A new ant evolution algorithm for the TSP is proposed in this paper. Based on the latest research on real ants, the algorithm first takes as the initial population a set of Pareto optimal solutions obtained by scout ants using nearest-neighbor search and a diffluence strategy. Genetic operators with strong local search ability, including self-adaptive crossover, mutation, and inversion, are then applied to speed up the optimization. Consequently, the optimal solution is obtained relatively quickly. Experimental results show that the proposed algorithm converges fast and achieves better optimization results.

Keywords: TSP problem, Nearest-neighbors, Scout ant, Self-adaptive mutation

1. Introduction

The Ant Colony Optimization algorithm [1][2] proposed by M. Dorigo et al. has been a hot research topic and has been widely applied. However, it is commonly understood that the original algorithm tends toward stagnation and premature convergence. Many researchers have therefore presented improved algorithms, such as the MAX-MIN ant algorithm [3][4] and dynamic pheromone updating algorithms [5][6]. However, the improvements in these works are all based on the prototype model presented by M. Dorigo et al. Although these algorithms achieved some success, their effect is limited by the restrictions of the prototype model.

Does the prototype model presented by M. Dorigo et al. completely simulate the course of ants searching for food? Is it the only possible model? The authors studied some of the latest research findings on real ants, which show that when ants search for food, scout ants first scout here and there. After finding food, they mark road signs using pheromone [7]; furthermore, they indicate the direction of the food with a certain geometrical angle [8]. There are two kinds of pheromones: one marks road signs, the other calls worker ants to carry the food.

Worker ants perceive these signals through their antennae. When they sense the call from scout ants, they search for food following the road signs left by the scouts [7]. Worker ants also leave pheromones, so the pheromone on the food-search route grows stronger and stronger, and more and more ants follow the smell, forming a positive feedback. The research findings also show that ants have an automatic shunt (diffluence) function to avoid crowding on the food-search route [9].

The latest research findings on real ants show that the model presented by M. Dorigo et al. simulates only part of the ants' food-search course. Their algorithm mainly depends on probabilistic search when q < q0 to maintain search diversity [2]. Both theory and experiments show that it takes a very long time to escape a local optimum in this way. Numerous improved algorithms have been proposed to address this problem, but the improvement is very limited. In this paper, a new ant evolution algorithm for the traveling salesman problem (TSP) is studied, combining the strong global-search ability of the ant colony algorithm in its initial phase with the local-search advantages of mutation, inversion, and other genetic operators. Based on the food-search course of real ants, the algorithm rapidly obtains a set of Pareto optimal solutions as the initial population by searching according to the nearest-neighbor rule and enhancing search diversity with the automatic shunt principle. An improved self-adaptive genetic algorithm is then employed to reach the optimal solution. This research provides researchers in this field with a new concept and opens up new space for exploration.

2. Description and Definition of Problem

For simplicity, the following definitions are given at first:

Let C = {c1, c2, …, cn} be a set of n cities and R = {1, 2, …, n} the set of city subscripts (serial numbers). If a rectangular coordinate system ∑ is built in the ant system space AS, every ci ∈ C, i ∈ R will

Sixth International Conference on Machine Learning and Applications

0-7695-3069-9/07 $25.00 © 2007 IEEE, DOI 10.1109/ICMLA.2007.18



have a certain coordinate (xi, yi) in ∑, notated ci(xi, yi). The connecting lines between the cities in AS constitute the side set. A city i ∈ R is called a node, and the connecting line between any two cities is called a side, notated eij, i, j ∈ R.

Definition 1. Let d(ci, cj) be the distance (side length) between any two cities i, j ∈ R, written dij and satisfying dij = dji; dij is calculated by Formula (1). The distance dij may also be written as the side length dl.

Obviously, the distance between any two cities constitutes a distance matrix.

Definition 2. The position of ant k at any time in AS is described as P; the position of ant k located at a certain node at time ti is described as Pi or P(ti).

Assume ant k starts from node i0 at time t0 and returns to node i0 at time te after traversing all nodes. Let tabuk = {P(t0), P(t1), …, P(tj)} be the set of nodes passed by ant k from time t0 to tj. At time tj+1, if P(tj+1) ∈ C and P(tj+1) ∈ tabuk, then P(tj+1) is called a forbidden point at time tj+1.

Definition 3. The connecting lines of the points in tabuk constitute the path walked by ant k, notated Pathk(P0, Pe), wherein P0 is the starting point. In space P0 = Pe, but Pe is the termination point in the time domain. The length of the path is described as L and calculated by Formula (2), wherein dl is calculated by Formula (1).

Definition 4. BRi(ci(xi, yi)) = {cj | cj ∈ C, d(ci, cj) ≤ dmin} is called the set of neighboring nodes of ci, wherein dmin is a distance threshold set according to the concrete problem and the required size of the neighbor set. Let w = |BRi| denote the size of the neighbor set; altering w is equivalent to altering dmin.

Definition 5. Assume that at time ti ant k is located at ci. Wki(ci(xi, yi)) = {cj | cj ∈ BRi(ci(xi, yi)), cj ∉ tabuk, i ∈ R} is called the feasible domain of ant k at time ti in BRi. At time ti+1, any P(ti+1) ∈ Wki with P(ti+1) ∉ tabuk is called a feasible node at time ti+1. The set of all feasible nodes is denoted Z. Obviously, |Z| ≤ |Wki|.
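Definitions 4 and 5 can be made concrete with a short sketch. This is an illustrative reading, not the authors' code; the function names `neighbor_set` and `feasible_domain` are hypothetical:

```python
import math

def neighbor_set(cities, i, d_min):
    """BR_i (Definition 4): indices of cities within distance d_min of city i."""
    xi, yi = cities[i]
    return {j for j, (xj, yj) in enumerate(cities)
            if j != i and math.hypot(xi - xj, yi - yj) <= d_min}

def feasible_domain(cities, i, d_min, tabu):
    """W_ki (Definition 5): neighbors of city i not yet visited by ant k."""
    return neighbor_set(cities, i, d_min) - set(tabu)

cities = [(0, 0), (1, 0), (0, 1), (3, 3)]
br = neighbor_set(cities, 0, 1.5)                 # nodes 1 and 2 are within range
wk = feasible_domain(cities, 0, 1.5, tabu=[0, 1]) # node 1 is already visited
```

Enlarging `d_min` enlarges the neighbor set, which matches the remark in Definition 4 that altering w is equivalent to altering dmin.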

Definition 6. ηij = 1/d(ci, cj) is called the heuristic function by which the ant chooses node j from node i.

Definition 7. Let ai = (b1, b2, …, bn, b1) be a chromosome, also called an individual, wherein bi ∈ R is a city serial number, also called a gene; its location is called a locus. l is called the chain length of the individual. The set of all individuals S = {(b1, b2, …, bn, b1)} is called the individual space. Obviously, according to Definition 7, the chain length is l = n + 1.

Definition 8. A set of N individuals as defined in Definition 7 is called a population (the individuals are allowed to be identical); N is called the population size. According to Definition 7, a population A can be written A = (a1, a2, …, aN)^T.

Definition 9. The fitness function maps the individual space S to the real-number space, i.e. f: S → R+, wherein f is given by Formula (3), n is the city number, N is the population size, and k > 0 is a constant chosen to ensure f > 0.

3. Ant Colony Evolution Algorithm Based on Nearest-Neighbor Scout and Diffluence Strategy

The basic idea of the algorithm is to simulate the food-search course of scout ants, which search using a nearest-neighbor strategy. The algorithm introduces the ants' automatic diffluence behavior during this course; that is, a node may be chosen by at most q ants, and any further ants are assigned to other nodes, so that the search diversity of the algorithm is greatly enhanced. Each generation changes its search range by altering the ant number m, the diffluence bound q, and the neighborhood size w defined in Definition 4. Experiments show that the overall search ability of an ant colony algorithm using these strategies is very strong, converging rapidly to a near-optimal solution. However, converging from a near-optimal solution to the optimal solution is relatively difficult. Taking advantage of the strong local optimization ability of mutation and related operators, we apply genetic operations to the optimized solutions found by the ant colony algorithm, which greatly accelerates local optimization. Using the ant colony algorithm described above, fewer than 20 generations of search are needed to gain N shortest paths as the initial population; an improved genetic algorithm is then run to obtain the optimal or a near-optimal solution. The algorithm thus has two stages: a scout-ant algorithm characterized by shunting, and a genetic algorithm.

The first stage of the algorithm obtains the optimized population using the scout-ant algorithm characterized by shunting. The procedure is as follows:

Step 1: Initialization. Randomly choose a city i0 as the starting point, place m ants on this node, and put it into tabuk. For each of the n cities, sort the remaining cities by distance, creating n sequence tables. Set the generation counter NC and the maximum generation count MAX (MAX is commonly no more than 20). The boundary variable of

d(ci, cj) = √((xi − xj)² + (yi − yj)²)    (1)

L = Σ_{l=1..e} dl,  dl = d(ci, cj),  ci, cj ∈ C,  i, j ∈ R    (2)

f = k·n·N − Σ_{l=1..n+1} dl    (3)
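Formulas (1) to (3), the Euclidean distance, tour length, and fitness, can be sketched as follows. This is an illustrative reading; the function names and the example values of k and N are placeholders:

```python
import math

def dist(ci, cj):
    """Formula (1): Euclidean distance between two cities."""
    return math.hypot(ci[0] - cj[0], ci[1] - cj[1])

def tour_length(cities, tour):
    """Formula (2): sum of side lengths d_l along the closed tour."""
    return sum(dist(cities[tour[l]], cities[tour[l + 1]])
               for l in range(len(tour) - 1))

def fitness(cities, tour, k=1.0, N=10):
    """Formula (3): f = k*n*N - tour length, with k chosen so that f > 0."""
    n = len(cities)
    return k * n * N - tour_length(cities, tour)

cities = [(0, 0), (3, 0), (3, 4)]
tour = [0, 1, 2, 0]                  # chain length n + 1, per Definition 7
length = tour_length(cities, tour)   # 3 + 4 + 5 = 12
```

Shorter tours yield larger fitness, which is what the selection and mutation schedules later in the paper rely on.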


diffluence is set to q = q0, the neighborhood range to w = w0, and the node-choice counter to qj = 0; q0 and w0 are constants.

Step 2: Take the current city i0 (or i) as the center and, according to the nearest-neighbor principle, choose the w cities nearest to i as the range from which ant k chooses the next city j. The front w cities can be read from the distance sequence table of city i. Find the z unvisited cities {j1, j2, …, jp, …, jz} among the w cities, i.e. jp ∉ tabuk.

j = arg max_{j ∉ tabuk} {ηij(t)}    (4)

Step 3: At time t, ant k chooses node j among the z cities according to Formula (4). When node j is chosen, let qj = qj + 1, k = 1, 2, …, m. If qj > q, delete j from Z and repeat Step 3.

Step 4: Add j to the taboo list tabuk.

Step 5: After all m ants have chosen their node j, let inew = j, jnew = jold + 1, qj = 0, and return to Step 2 to choose the next node, until every ant has completed a full tour.

Step 6: After the m ants complete a cycle, calculate each tour length lk and save the path table into Path.

Step 7: Let w = w + 1, q = q − 1, m = m + 1, NC = NC + 1; clear tabuk. If NC ≤ MAX, place the m ants on i0 again and return to Step 2 for the next generation of food-search; otherwise go to Step 8.

Step 8: Sort the path table gained over MAX generations of food-search by length. Take the front N paths as the initial population A, then apply the following genetic operations to it.
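The first stage can be sketched roughly as below. This is a simplified reading under stated assumptions: a single generation only, a greedy nearest-admissible choice standing in for Formula (4), and the parameter schedule of Step 7 omitted; the function name `scout_tours` is hypothetical:

```python
import math
import random

def scout_tours(cities, m=4, w=3, q=2, seed=0):
    """One generation of the scout-ant stage (simplified sketch of Steps 1-6):
    every ant greedily takes the nearest unvisited city among its w nearest
    candidates, but each node may be claimed by at most q ants per step (the
    diffluence rule); overflow ants fall back to the next-nearest option."""
    rng = random.Random(seed)
    n = len(cities)
    d = [[math.hypot(cities[a][0] - cities[b][0],
                     cities[a][1] - cities[b][1]) for b in range(n)]
         for a in range(n)]
    # Step 1: distance-sorted sequence table for every city
    order = [sorted((j for j in range(n) if j != i), key=lambda j: d[i][j])
             for i in range(n)]
    start = rng.randrange(n)
    tours = [[start] for _ in range(m)]
    for _ in range(n - 1):
        claimed = {}                       # node -> how many ants chose it
        for tour in tours:
            i = tour[-1]
            # Step 2: unvisited cities among the w nearest to i
            cand = [j for j in order[i] if j not in tour][:w]
            # Step 3: diffluence - avoid nodes already claimed q times
            free = [j for j in cand if claimed.get(j, 0) < q] or cand
            j = free[0]
            claimed[j] = claimed.get(j, 0) + 1
            tour.append(j)                 # Step 4: extend the taboo list
    for tour in tours:
        tour.append(start)                 # Step 6: close each cycle
    return tours
```

Because overflow ants are diverted to different nodes, the m tours of one generation differ from each other, which is exactly the diversity the diffluence rule is meant to produce.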

The second stage of the algorithm obtains the optimal solution using an improved genetic algorithm.

Because the initial population produced by the first stage already consists of near-optimal solutions, the crossover, mutation, and inversion operators designed below mainly aim to strengthen local search. In order not to destroy excellent solutions, a large-scale elitist strategy is adopted. The crossover and mutation rates are relatively large at the beginning and decrease self-adaptively and dynamically as the solution approaches the optimum. The procedure is as follows:

Step 1: Initialization. Set the generation counter t, the mutation step length h = h0, and the maximum generation number MAXT.

Step 2: Self-adaptive crossover. The crossover strategy adopted is single-point crossover with a step length: a pair of samples (ai, aj), ai, aj ∈ A, is chosen randomly with probability pc; then a crossover location w is chosen randomly and crossover is carried out taking step as the crossover step length.

Example: set step = 2 and w = 2; the randomly chosen sample pair is
ai = {1 2 3 4 5}, aj = {5 1 3 2 4}.
The new individuals produced after crossover are
ai' = {2 4 3 1 5}, aj' = {5 2 3 4 1}.
During the genetic operation the solution gradually approaches the optimum. Too large a crossover rate may destroy good solutions at the later stage of the genetic operation, while too small a rate hurts search diversity at the earlier stage. Therefore a self-adaptive crossover rate is adopted, which decreases continuously along the genetic course; pc is given by Formula (5).

pc = 0.2 − β·t    (5)

wherein β is a proportionality coefficient, 0 < β < 0.01, and t is the current generation number. It should be noted that if pc < 0.001, pc is set to 0.001.
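Formula (5), with the clamp at 0.001, amounts to a small schedule function. In this sketch the default β is an arbitrary value inside the stated range 0 < β < 0.01:

```python
def crossover_rate(t, beta=0.005, p_min=0.001):
    """Formula (5): p_c = 0.2 - beta * t, clamped below at p_min."""
    return max(0.2 - beta * t, p_min)

# the rate decays linearly with the generation number, then stays at the floor
rates = [crossover_rate(t) for t in (0, 10, 1000)]
```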

Step 3: Self-adaptive mutation. Mutation is a mapping from the individual space to itself, Tm: S → S; each component of an individual is usually changed independently with probability pm, commonly taken in the range 0.001~0.01. Such a small mutation probability produces few new individuals. To overcome prematurity and increase search diversity, a self-adaptive mutation scheme is adopted: at the earlier genetic stage a larger mutation rate is taken (for example pm = 0.1), and non-improving solutions after mutation are accepted with a certain probability. The mutation rate decreases as the genetic operation progresses; moreover, pm changes with fitness so that good schemata are not destroyed at the later stage. Since the population obtained by the first stage is already near-optimal, mutation is carried out only within a small range.

The procedure is as follows:

(1) Let p_m^i = (fmax − f(ai) + α)/(D·t) and Ps = D1/(λ·t), wherein p_m^i is the gene mutation rate of individual ai, Ps is the individual mutation rate, f(ai) is the fitness of ai, fmax is the maximum fitness of the current generation, D and D1 are constants, and α and λ are small coefficients. Obviously p_m^i changes self-adaptively with the fitness: the bigger the fitness, the smaller the mutation rate. Furthermore, p_m^i and Ps decrease as the generation number t increases.

(2) From A, take an individual ai = (b1, b2, …, bn, b1) randomly with mutation probability Ps, and choose the first mutation gene location i with mutation probability p_m^i. Then choose another mutation gene location j randomly within the range i + h, with j ≠ i. Exchange bi with bj to create a new individual ai'.

(3) Calculate f(ai') and let Δf = f(ai') − f(ai). If Δf > 0, ai' is accepted as a new individual; otherwise whether ai' is accepted is decided according to the probability e^(Δf/kT). The probability of accepting ai' obeys the Boltzmann distribution:



F = e^(Δf/kT) / Σ e^(ΔE/kT), wherein k is the Boltzmann constant and F is the probability distribution.
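The acceptance rule of step (3) can be sketched as a Metropolis-style test. One assumption is made explicit here: the exponent is taken as Δf/kT with Δf ≤ 0, so that the acceptance probability stays at most 1; the function name `accept` is hypothetical:

```python
import math
import random

def accept(delta_f, kT=1.0, rng=random.random):
    """Step (3): accept the mutated individual outright if it improves
    fitness; otherwise accept with Boltzmann probability e^(delta_f/kT)
    (delta_f <= 0 in that branch, so the probability is at most 1)."""
    if delta_f > 0:
        return True
    return rng() < math.exp(delta_f / kT)
```

Small fitness losses are accepted often and large losses almost never, which lets the search escape local optima without wrecking near-optimal individuals.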

Step 4: Inverse operation using the inverse operator. In order not to destroy good schemata at the later stage of the genetic operation, the mutation and crossover rates at that stage are both relatively small, which slows late convergence. To resolve this, at the later stage an inverse operator is introduced and the inverse operation is carried out only on relatively excellent individuals. The method is as follows:

Let X be the set of the front m individuals chosen according to F; for each ai ∈ X its fitness is fi. If t > Q, carry out the following inverse operation, where Q is a threshold on the generation number.

for (i = 2; i < n; i++) {
    exchange bi with bi+1 to create a new individual ai' and evaluate its fitness fi';
    if (fi < fi') { keep ai'; the inversion is complete; }
    else discard ai';
}
if (i >= n) {
    for (i = 2; i < n-1; i++) {
        exchange bi with bi+2 and evaluate its fitness fi';
        if (fi < fi') { keep ai'; the inversion is complete; }
        else discard ai';
    }
}

If no better path is gained, cycle the genes forward by one city, so that the individual becomes (bn, b1, b2, …, bn−1, bn), and repeat the procedure above.
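The inverse operation of Step 4 (swaps of genes one apart, then two apart, keeping the first improving move) might look like the following sketch; the rotation fallback is omitted and the name `inverse_operation` is hypothetical:

```python
def inverse_operation(a, fit):
    """Step 4 sketch: try swapping genes one position apart, then two
    apart (the endpoints, i.e. the start/end city, stay fixed); keep the
    first swap that raises fitness, otherwise return the individual as-is."""
    base = fit(a)
    for gap in (1, 2):
        for i in range(1, len(a) - gap - 1):
            b = a[:]
            b[i], b[i + gap] = b[i + gap], b[i]
            if fit(b) > base:
                return b
    return a
```

Because only improving swaps are kept, the operator can never degrade an individual, which is why it is safe to apply to the best members of the population.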

Step 5: Calculate the fitness f(ai) of each individual ai ∈ A using Formula (3). Sort the N individuals in descending order of fitness, giving

F = {f1, f2, …, fN}, wherein fi is f(ai) and f1 > f2 > … > fN.

Step 6: Elitist protection and random selection. According to F, copy to the next generation the individuals whose fitness ranks among the front q places. The numbers of copies of the q individuals decrease linearly from p to 1 in turn, so the individual with the biggest fitness is copied the most times. The total number of copies of the q individuals is m = ρN, where ρ is the reproduction proportionality coefficient, commonly ρ = 0.1. The remaining N − m individuals are created by the random selection operator.

The selection operator chooses an individual from population A; it is a random mapping Ts: S^N → S. For the population A = (a1, a2, …, aN)^T, a probability distribution vector is formed according to Formula (6), and the N − m individuals of the next generation are selected by the roulette rule according to Formula (7).

Step 7: t = t + 1; if (t % 5 == 0) h = h + 1 (the mutation range is altered once every 5 generations). If t < MAXT, turn to Step 2; otherwise calculate the tour length Lmin of the optimal individual and output the results.

4. Experimental comparison

To validate the algorithm and compare it with others, the authors downloaded several well-known TSP instances from TSPLIB and conducted experiments on them, gaining good results. For comparison with some recently proposed improved algorithms, the experimental results on the TSP problems addressed in references [10][11] are also cited in this paper.

Fig. 1 The comparison of convergence features on the optimized eil51

The comparison of convergence on the eil51 TSP problem under different methods is shown in Fig. 1, wherein the abscissa is the iteration number and the ordinate is the optimal tour length. Curve 1 shows the convergence of this algorithm: the results before generation 29 are obtained by the ant colony algorithm, and the results after that by the genetic algorithm. Curve 2 shows the convergence of the ACS algorithm, and Curve 3 the results of the improved algorithm in Reference [10] (Curves 2 and 3 are both extracted from Reference [10]). The experimental results show that both the ACS algorithm and the algorithm in Reference [10] need more than 2500 iterations to converge to a near-optimal solution, whereas the algorithm proposed in this paper needs only about 90 iterations to converge to the optimal solution. The comparison of convergence on the St70 TSP problem under different algorithms is shown in Fig. 2: line 1 shows this paper's algorithm, the bold line 2 shows the ACS algorithm, and line 3 the improved algorithm (lines 2 and 3 are extracted from Reference [11]). The experimental results show that both the ACS algorithm and the

P{Ts(A) = ai} = f(ai) / Σ_{k=1..N} f(ak)    (6)

(a1, a2, …, aN; P{a1}, P{a2}, …, P{aN})    (7)
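Formulas (6) and (7), together with the elitist copying of Step 6, can be sketched as follows. The linear copy counts are simplified here (q, q−1, …, 1 copies rather than the paper's p-to-1 scheme), so this is an assumption-laden illustration rather than the exact scheme; the function names are hypothetical:

```python
import random

def roulette_select(pop, fit_vals, count, rng):
    """Formulas (6)/(7): draw individuals with probability proportional
    to fitness (the roulette rule)."""
    return rng.choices(pop, weights=fit_vals, k=count)

def next_generation(pop, fit_vals, rho=0.1, seed=0):
    """Step 6 sketch: copy the top q = rho*N individuals (the best copied
    most, copy counts shrinking linearly to 1), then fill the remaining
    slots by roulette selection."""
    rng = random.Random(seed)
    N = len(pop)
    ranked = sorted(range(N), key=lambda i: fit_vals[i], reverse=True)
    q = max(1, int(rho * N))
    elite = []
    for rank, idx in enumerate(ranked[:q]):
        elite += [pop[idx]] * (q - rank)   # q copies, then q-1, ..., down to 1
    rest = roulette_select(pop, fit_vals, N - len(elite), rng)
    return elite + rest
```

The elitist copies guarantee that the best tours survive unchanged, while the roulette draws keep weaker individuals in play in proportion to their fitness.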


improved algorithm [11] need more than 6000 iterations to converge to a near-optimal solution, whereas our algorithm needs only about 450 iterations to converge to the optimal solution.

Fig. 2 The comparison of convergence features on the optimized St70

The comparison of optimization results is shown in Table 1; the optimum generations of ACS and of References [10] and [11] are extracted from those references.

Table 1 Comparison of experimental results

        R1   R2   N1   N2     N3    N4
Eil51   426  427  90   >2500  None  >2500
St70    675  676  450  >3000  None  >3000

In Table 1: R1 — TSPLIB result; R2 — result in this paper; N1 — optimum generation in this paper; N2 — optimum generation of ACS; N3 — optimum generation in Reference [10]; N4 — optimum generation in Reference [11].

5. Conclusion

According to the latest research on real ants, the algorithm proposed in this paper takes N optimized paths as the initial population, obtained by m scout ants using the nearest-neighbor search principle and the diffluence strategy. Further optimization is then carried out by genetic operators such as self-adaptive crossover, mutation, and inversion to obtain the optimal solution. The scout-ant search algorithm advanced in this paper has strong global-search ability, and the genetic operators such as mutation and inversion have strong local optimization ability, so their advantages complement each other when combined, markedly improving convergence speed and optimization results. However, studying ant colony algorithms according to the latest research on real ants is still preliminary and exploratory, and much further work is needed. Even so, it provides researchers in this field with a new concept, opens up new space for them, and can push research in this field toward wider areas.

References

[1] M. Dorigo, V. Maniezzo, and A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, 1996, Vol. 26(1): 29-41.
[2] M. Dorigo and L. M. Gambardella, Ant colony system: A cooperative learning approach to the traveling salesman problem, IEEE Transactions on Evolutionary Computation, 1997, Vol. 1(1): 53-66.
[3] T. Stutzle and H. H. Hoos, MAX-MIN ant system, Future Generation Computer Systems, 2001, Vol. 16(8): 889-914.
[4] Ming Chen and Qiang Lu, A coevolutionary model based on dynamic combination of genetic algorithm and ant colony algorithm, Parallel and Distributed Computing, Applications and Technologies (PDCAT 2005), Sixth International Conference on, 941-944.
[5] Zhu QingBao and Yang ZhiJun, Ant colony optimization algorithm based on mutation and dynamic pheromone updating, Journal of Software, 2004, Vol. 15(2): 1148-1155.
[6] Cheng-Fa Tsai and Chun-Wei Tsai, A new approach for solving large traveling salesman problem using evolution ant rules, Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02), Honolulu, HI, USA, IEEE Press, 2002, Vol. 2: 1540-1545.
[7] Michael J. Greene and Deborah M. Gordon, Cuticular hydrocarbons inform task decisions, Nature, 2003, Vol. 423(6935): 35.
[8] D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, Knowing which way to go - trail geometry gives polarity to ant foraging trails, Nature, 2004, Vol. 432(7019): 907-909.
[9] A. Dussutour, V. Fourcassie, D. Helbing, et al., Optimal traffic organization in ants under crowded conditions, Nature, 2004, Vol. 428(6978): 70-73.
[10] Seung Gwan Lee, Tae Ung Jung, and Tae Choong Chung, An effective dynamic weighted rule for ant colony system optimization, Proceedings of the 2001 Congress on Evolutionary Computation, NJ, USA, IEEE Press, Vol. 2: 393-397.
[11] Cheng-Fa Tsai and Chun-Wei Tsai, A new approach for solving large traveling salesman problem using evolution ant rules, Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02), Honolulu, HI, USA, IEEE Press, Vol. 2: 1540-1545.

Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant No. 60673102 and the Natural Science Foundation of Jiangsu Province of China under Grant No. BK2006218.
