


A Modified Particle Swarm Optimization Algorithm

Wen Shuhua, Zhang Xueliang, Li Hainan, Liu Shuyang, Wang Jiaying
Taiyuan University of Science & Technology, Taiyuan, Shanxi, China, 030024

E-mail:[email protected]

Abstract-A modified particle swarm optimization (MPSO) algorithm is presented, based on the variance of the population's fitness. During computation, the inertia weight of MPSO is determined adaptively and randomly according to the variance of the population's fitness, which greatly improves the ability of the particle swarm optimization (PSO) algorithm to break away from local optima. The simulation results show that this algorithm not only converges considerably faster than the standard PSO, but also avoids the premature convergence problem effectively.

I. INTRODUCTION

Particle Swarm Optimization (PSO) is an evolutionary computation technique developed by Dr. Eberhart and Dr. Kennedy [1] in 1995. Similar to genetic algorithms (GA), PSO is a population-based optimization tool. The particle swarm concept originated as a simulation of a simplified social system. The original intent was to graphically simulate the choreography of a bird flock or fish school. However, it was found that the particle swarm model can be used as an optimizer.

PSO simulates the behavior of bird flocking. Suppose the following scenario: a group of birds is randomly searching for food in an area. There is only one piece of food in the area being searched. None of the birds knows where the food is, but they know how far away the food is during each search iteration. So what is the best strategy to find the food? The effective one is to follow the bird which is nearest to the food. PSO learned from this scenario and uses it to solve optimization problems. In PSO, each single solution is a "bird" in the search space, called a "particle". All particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct the flying of the particles. The particles "fly" through the problem space by following the current optimum particles.

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first one is the best solution (fitness) it has achieved so far (the fitness value is also stored); this value is called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population, which is conceptually similar to publicized knowledge, or a group norm or standard, that individuals seek to attain. This best value is a global best, called gbest. When a particle takes part of the population as its topological neighbors, the best value is a local best, called lbest.

However, unlike GA, PSO has no evolution operators such as crossover and mutation. Compared to GA, the advantages of PSO are that it is easy to implement and has few parameters to adjust. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied.

Like GA, PSO also suffers from premature convergence, especially in complex multi-peak search problems. The existing method to counter premature convergence is to increase the population of the particle swarm, but this does not solve the problem thoroughly, and the computation scale increases. In this paper, a novel method of selecting the inertia weight randomly based on the variance of the population's fitness is proposed, which effectively balances the local and global searching abilities of PSO and improves the convergence speed and computation accuracy.

II. THE STANDARD PSO AND SOME IMPROVED PSO

PSO is initialized with a group of random particles and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first one is the best solution it has achieved so far; this value is called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this best value is a global best, called gbest.

After finding the two best values, the particle updates its velocity and position with the following formulas.

v(t+1) = v(t) + c1·rand()·[pbest(t) − present(t)] + c2·rand()·[gbest(t) − present(t)]   (1)
present(t+1) = present(t) + v(t+1)   (2)

v() is the particle's velocity; present() is the particle's current position (solution); pbest() and gbest() are defined as stated before; rand() is a random number in (0,1); c1 and c2 are learning factors, usually c1 = c2 = 2.0. The pseudocode of the procedure is as follows:



For each particle
    Initialize particle
End
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to equation (1)
        Update particle position according to equation (2)
    End
While maximum iterations or minimum error criteria is not attained

Particles' velocities on each dimension are clamped to a maximum velocity vmax, a parameter specified by the user. If the sum of accelerations would cause the velocity on a dimension to exceed vmax, the velocity on that dimension is limited to vmax.

From the above case, we can learn that there are two key steps when applying PSO to optimization problems: the representation of the solution and the fitness function. One of the advantages of PSO is that it takes real numbers as particles, unlike GA, which needs a change to binary encoding or special genetic operators. The search is a repeated process, and the stop criteria are that the maximum iteration number is reached or the minimum error condition is satisfied.

There are not many parameters to adjust in PSO. The typical range of the number of particles is 20-40; actually, for most problems 10 particles is large enough to get good results, while for some difficult or special problems one can try 100 or 200 particles as well. The dimension of the particles is determined by the problem to be optimized. The range of the particles is also determined by the problem to be optimized; different ranges can be specified for different dimensions. vmax determines the maximum change one particle can take during one iteration; usually the range of the particle is set as vmax. The learning factors c1 and c2 are usually equal to 2, although other settings have been used in different papers; usually c1 equals c2 and lies in the range [0, 4]. The stop condition is either the maximum number of iterations the PSO executes or the minimum error requirement, which depends on the problem to be optimized.
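To make the above concrete, the following is a minimal runnable sketch of the standard PSO just described (equations (1) and (2), with velocity clamping), written in Python with NumPy. The sphere function, all names, and the parameter values are illustrative assumptions of this sketch, not taken from the paper; minimization is assumed.

    import numpy as np

    def sphere(x):
        # illustrative fitness function (to be minimized); an assumption of this sketch
        return float(np.sum(x ** 2))

    def standard_pso(fitness, dim=10, n_particles=30, x_range=(-10.0, 10.0),
                     c1=2.0, c2=2.0, max_iter=500, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = x_range
        vmax = hi - lo                                    # vmax set to the particle range
        x = rng.uniform(lo, hi, (n_particles, dim))       # initial positions
        v = rng.uniform(-vmax, vmax, (n_particles, dim))  # initial velocities
        pbest = x.copy()                                  # each particle's best position
        pbest_val = np.array([fitness(p) for p in x])
        g = int(np.argmin(pbest_val))
        gbest, gbest_val = pbest[g].copy(), pbest_val[g]  # global best
        for _ in range(max_iter):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # equation (1): no inertia weight in the standard form
            v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -vmax, vmax)                   # clamp each dimension to vmax
            x = x + v                                     # equation (2)
            vals = np.array([fitness(p) for p in x])
            better = vals < pbest_val                     # update individual bests
            pbest[better] = x[better]
            pbest_val[better] = vals[better]
            g = int(np.argmin(pbest_val))                 # update the global best
            if pbest_val[g] < gbest_val:
                gbest, gbest_val = pbest[g].copy(), pbest_val[g]
        return gbest, gbest_val

    best_x, best_f = standard_pso(sphere)
    print("best fitness found:", best_f)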

From the procedure, we can learn that PSO shares many common points with GA. Both algorithms start with a randomly generated population, and both use fitness values to evaluate the population. Both update the population and search for the optimum with random techniques. Neither system guarantees success.

However, PSO does not have genetic operators like crossover and mutation. Particles update themselves with an internal velocity. They also have memory, which is important to the algorithm.

Compared with GA, the information sharing mechanism in PSO is significantly different. In GA, chromosomes share information with each other, so the whole population moves as one group towards an optimal area. In PSO, only gbest gives out information to the others; it is a one-way information sharing mechanism, and the evolution only looks for the best solution. Compared with GA, all the particles tend to converge to the best solution quickly, even in the local version in most cases.

Later, Shi Y. improved the above algorithm by introducing an inertia weight w in order to improve PSO performance, and developed the linear decreasing weight (LDW) strategy as follows.

v(t+1) = w·v(t) + c1·rand()·[pbest(t) − present(t)] + c2·rand()·[gbest(t) − present(t)]   (3)

w(t) = (w_ini − w_end)·(T_max − t)/T_max + w_end   (4)

where T_max is the maximum iteration number, w_ini is the initial inertia weight, and w_end is the inertia weight when T_max is reached. Research has shown that the inertia weight w influences the global and local searching abilities of PSO: the larger the inertia weight w, the more powerful the global searching ability and the weaker the local searching ability. But the maximum iteration number is difficult to predict in advance. Fuzzy adaptive particle swarm optimization can dynamically balance the global and local searching abilities of PSO, but its realization is difficult, which restrains its wide application. Hybrid PSO can improve the local searching ability of PSO, but the global searching ability is weakened. Breeding PSO accelerates the convergence speed only a little for a single-peak function.
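For illustration, equation (4) is a one-liner; the values w_ini = 0.9 and w_end = 0.4 below are common choices in the literature, assumed here only for the example.

    def ldw(t, t_max, w_ini=0.9, w_end=0.4):
        # equation (4): w decreases linearly from w_ini at t = 0 to w_end at t = t_max
        return (w_ini - w_end) * (t_max - t) / t_max + w_end

    print(ldw(0, 500), ldw(250, 500), ldw(500, 500))  # 0.9, 0.65, 0.4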

III. ADAPTIVE INERTIA WEIGHT STRATEGY BASED ON THE VARIANCE OF THE POPULATION'S FITNESS

In the computing process of PSO, when one particle finds a current best location, the other particles "fly" to it quickly. But if that best location is a local best, the particle swarm will not search the rest of the solution space again; the algorithm falls into the local best, which is called premature convergence. Whether under premature convergence or global convergence, the particles in the population concentrate on one special location or some special locations, depending on the features of the problem to be solved and the selection of the fitness function. We can track the state of the particle swarm by examining the variance of the population's fitness using statistics theory.

Suppose S is the number of particles in the population, f_i is the fitness of the i-th particle, f_avg is the current average fitness of the population, and σ² is the variance of the population's fitness, defined as follows [11]:

σ² = Σ_{i=1}^{S} ((f_i − f_avg) / f)²   (5)

f = max{ |f_i − f_avg|, i = 1, 2, ..., S } if max{|f_i − f_avg|} > 1; otherwise f = 1   (6)

Here, f is a normalizing factor whose function is to limit σ². The variance of the population's fitness σ² reflects the converging level of all particles in the population. The smaller σ², the better the convergence of the population; the larger σ², the more the population is in a random searching period. When the variance of the population's fitness σ² equals zero, either the solution obtained is the global best or expected optimal solution, or the population has fallen into a local best and premature convergence has occurred. In order to overcome the problem of premature convergence, there should be a strategy that makes the algorithm escape from the local best and search in other areas continuously until the global best solution is found. According to (2) and (3), the next location of a particle depends on its current location and its current velocity. The magnitude of the velocity determines the moving distance, and the direction of the velocity determines the moving direction, while the current velocity of a particle depends on three factors: its original velocity, pbest, and gbest. The global best gbest is the current best solution found so far; if premature convergence occurs, gbest must be a local best solution. We can make particles escape from the local best area and "fly" into other searching areas by changing the velocity's magnitude and direction. Hence the algorithm may find a new individual best pbest and a new global best gbest; repeating this, it may finally find the global best solution.
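As a small illustration, σ² of equations (5) and (6) can be computed as follows; the function name is an assumption of this sketch.

    import numpy as np

    def fitness_variance(fits):
        # sigma^2 of equation (5), with the normalizing factor f of equation (6)
        fits = np.asarray(fits, dtype=float)
        f_avg = fits.mean()
        dev = np.abs(fits - f_avg)
        f = dev.max() if dev.max() > 1.0 else 1.0
        return float(np.sum(((fits - f_avg) / f) ** 2))

    print(fitness_variance([3.0, 3.0, 3.0]))   # 0.0: all particles have clustered
    print(fitness_variance([1.0, 5.0, 12.0]))  # about 1.72: the swarm is still dispersed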

In the PSO algorithm with the linear decreasing inertia weight (LDW), the inertia weight varies according to a single linear law, so its searching ability can be adjusted only to a very limited degree, and the algorithm is not well suited to complex problems. In order to improve this, a modified and novel PSO algorithm with an adaptive and random inertia weight is developed. This algorithm selects the inertia weight adaptively and randomly according to the variance of the population's fitness σ², using the following strategy.

w = 0.5 + rand()/2.0,  if σ² ≥ 1.0
w = 0.4 + rand()/2.0,  if σ² < 1.0   (7)

where rand() is a random number in (0,1). This strategy has two important advantages. The first is that the influence of a particle's past velocity on its current velocity is random. The second is that the inertia weight is adjusted adaptively and randomly based on the variance of the population's fitness. As a result, the global and local searching abilities coordinate with each other very well, and the random inertia weight also helps keep the population diverse.
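A sketch of the selection strategy of equation (7), as reconstructed above; the function name is an assumption of this sketch.

    import random

    def adaptive_inertia_weight(sigma2):
        # equation (7): pick w at random from a band chosen by the fitness variance
        if sigma2 >= 1.0:
            return 0.5 + random.random() / 2.0  # w drawn from (0.5, 1.0)
        return 0.4 + random.random() / 2.0      # w drawn from (0.4, 0.9)

    print(adaptive_inertia_weight(2.3), adaptive_inertia_weight(0.2))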

IV. PERFORMANCE TEST

In this paper, the Rosenbrock function and the Rastrigin function are adopted to test the performance of MPSO and the standard PSO. The Rosenbrock function has a single peak, and its variables are strongly coupled with each other. It has the following form.

f(X) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²],   −10 < x_i < 10   (8)

The Rastrigin function has multiple peaks, and its variables are independent of each other. It has the following form.

f(X) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10],   −5.12 < x_i < 5.12   (9)
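For reference, the two benchmarks of equations (8) and (9) can be written compactly as follows; minimization is assumed, and the dimension n is implied by the length of the input vector.

    import numpy as np

    def rosenbrock(x):
        # equation (8): unimodal, strongly coupled variables; minimum 0 at (1, ..., 1)
        return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))

    def rastrigin(x):
        # equation (9): highly multimodal, independent variables; minimum 0 at the origin
        return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

    print(rosenbrock(np.ones(10)), rastrigin(np.zeros(10)))  # 0.0 0.0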

Let n = 10 in the above two functions, with a population of 30 particles; the maximum velocity vmax is the upper limit of the initial range, and c1 = c2 = 2.0. Fig. 1 compares the convergence speed of MPSO and the standard PSO. It is obvious that the modified PSO algorithm (MPSO) converges faster than the standard PSO.

V. CONCLUSIONS

The PSO algorithm is a novel intelligent optimization method. It is easy to implement and effective, but it can be improved further.

In this paper, a modified PSO algorithm with an adaptive and random inertia weight is presented. It selects the inertia weight adaptively and randomly according to the variance of the population's fitness, so that the global and local searching abilities coordinate with each other very well and the population remains diverse. The performance test results with the Rosenbrock function and the Rastrigin function show that MPSO converges faster than the standard PSO.

REFERENCES

[1] Kennedy J, Eberhart R C. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks. Piscataway, NJ: IEEE Press, 1995. 1942-1948.

[2] Eberhart R C, Kennedy J. A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science. Nagoya, Japan: IEEE Press, 1995. 39-43.

[3] Eberhart R C, Simpson P K, Dobbins R W. Computational Intelligence PC Tools. Boston, MA: Academic Press Professional, 1996.

[4] Shi Y, Eberhart R C. A modified particle swarm optimizer. Proceedings of the IEEE Congress on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1998. 303-308.

[5] Shi Y, Eberhart R C. Empirical study of particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1999. 1945-1950.

[6] Shi Y, Eberhart R C. Fuzzy adaptive particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation. Seoul, Korea: IEEE Press, 2001. 101-106.

[7] Shi Y, Eberhart R C. Parameter selection in particle swarm optimization. Evolutionary Programming VII: Proc. EP 98. New York: Springer-Verlag, 1998. 591-600.

[8] Shi Y, Eberhart R C. A modified particle swarm optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1998. 69-73.

[9] Clerc M, Kennedy J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 2002, 6(1): 58-73.

[10] Eberhart R C, Shi Y. Comparing inertia weight and constriction factors in particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation. San Diego, CA: IEEE Press, 2000. 84-88.

[11] Lü Zhensu, Hou Zhurong. Particle swarm optimization with adaptive mutation. Acta Electronica Sinica, 2004, 32(3): 416-420.

[12] Zhang Liping, Yu Huanjun, Chen Dezhao, Hu Shangxu. Analysis and improvement of particle swarm optimization algorithm. Information and Control, 2004, 33(5): 513-517.

[Figure omitted: two convergence plots; horizontal axis "Iteration times".]
Fig. 1. Comparison between SAIWPSO (the proposed MPSO) and PSO. a) Rastrigin function b) Rosenbrock function
