
Particle Swarm Optimization with Local Search

Junying Chen, Zheng Qin, Yu Liu, and Jiang Lu

Department of Computer Science, Xi'an Jiaotong University, Xi'an 710049, P.R. China

E-mail: [email protected]

Abstract-In this paper, we propose a hybrid algorithm of particle swarm optimization and local search (PSO-LS). In PSO-LS, each particle has a chance of self-improvement by applying a local search algorithm before it communicates information with the other particles in the swarm. We then modify the basic PSO-LS by choosing specific good particles as initial solutions for local search. Comparative experiments were conducted between PSO-LS, the modified PSO-LS, and PSO with linearly decreasing inertia weight (PSO-LDW) on three benchmark functions. The results show that hybrid algorithms combining particle swarm optimization with local search techniques outperform PSO-LDW.

I. INTRODUCTION

Particle Swarm Optimization (PSO), a population-based evolutionary computation technique, was first introduced in 1995 by Eberhart and Kennedy [1]. It has been reported that PSO performs well on real-valued optimization problems [2; 3; 4] and can efficiently locate the basins of the optima. However, early PSO variants suffer from a serious problem: they are often unable to explore those basins quickly and efficiently enough to discover the optimum within them. Eberhart and Shi introduced a time-decreasing inertia factor to balance the global wide-range exploitation and local exploration abilities of the swarm [3]. In [5; 6], simulated annealing was combined with particle swarm optimization to improve the local search ability of the swarm. In [7], a Tabu technique was combined with PSO to improve its performance. To cope with this problem, we propose a local search-embedded particle swarm optimization algorithm. In our algorithm, a hill-climbing (HC) algorithm performs local search to find better solutions in the neighborhood of the current solution produced by PSO in every iteration. The particle swarm optimization algorithm is responsible for the search of the basins, and the local search algorithm for accurate solutions within the basins.

Both particle swarm optimization and hill-climbing algorithms are nature-inspired stochastic computational techniques; the former is based on a simulation of social behavior, and the latter on natural evolution. Our approach incorporates a hill-climbing algorithm into particle swarm optimization to improve the performance of PSO. The ideas behind our approach are: 1) Richard Dawkins' notion of a "meme" is introduced into PSO. Dawkins defines a meme as a unit of cultural evolution that is capable of local refinement; this means an individual can improve itself before it communicates information with the population it belongs to. 2) Hybrid algorithms combining genetic algorithms with local search algorithms (also known as memetic algorithms) have proven very successful in solving many optimization problems [8; 9; 10]. Hence, we consider incorporating a local search technique into PSO a meaningful research topic.

II. BASIC PARTICLE SWARM OPTIMIZATION

In PSO, the positions of N particles are candidate solutions to a D-dimensional problem, and the moves of the particles constitute the search for better solutions. The position of the ith particle at iteration t is represented by $X_i(t)=(x_{i1},x_{i2},\dots,x_{iD})$, and its velocity by $V_i(t)=(v_{i1},v_{i2},\dots,v_{iD})$. During the search process, the particle successively adjusts its position according to two factors: one is the best position found by the particle itself (pbest), denoted $P_i=(p_{i1},p_{i2},\dots,p_{iD})$; the other is the best position found so far by its neighbors (gbest), denoted $P_g=(p_{g1},p_{g2},\dots,p_{gD})$. The neighbors can be either the whole population (global version) or a small group specified before the run (local version). The velocity update equation (1) and position update equation (2) are as follows.

$$V_i(t) = w \cdot V_i(t-1) + c_1 \cdot rand() \cdot (P_i - X_i(t-1)) + c_2 \cdot rand() \cdot (P_g - X_i(t-1)) \qquad (1)$$

$$X_i(t) = X_i(t-1) + V_i(t) \qquad (2)$$

where w is the inertia weight, which balances the global exploitation and local exploration abilities of the particles, c1 and c2 are acceleration constants, and rand() denotes random values uniformly distributed between 0 and 1. The velocities of the particles are limited to [Vmin, Vmax]^D: if an element of the velocity is smaller than Vmin, it is set to Vmin; if greater than Vmax, it is set to Vmax. In this paper, this version of PSO, with w decreasing linearly over time, is referred to as the linearly decreasing inertia weight method (PSO-LDW). A small sketch of this update follows.
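As a concrete illustration of equations (1) and (2), the following Python sketch (ours, not from the paper; it assumes rand() is drawn independently per dimension, a common convention) performs one velocity/position update for a single particle, including the velocity clamping described above.

```python
import numpy as np

def pso_step(x, v, p_i, p_g, w, c1=2.0, c2=2.0, v_min=-1.0, v_max=1.0):
    """One application of equations (1) and (2) for a single particle.

    x, v, p_i, p_g: D-dimensional arrays holding the current position,
    current velocity, personal best (pbest), and neighborhood best (gbest).
    """
    d = len(x)
    # Equation (1): inertia term plus cognitive and social attraction.
    v_new = (w * v
             + c1 * np.random.rand(d) * (p_i - x)
             + c2 * np.random.rand(d) * (p_g - x))
    # Clamp each velocity component into [v_min, v_max].
    v_new = np.clip(v_new, v_min, v_max)
    # Equation (2): move the particle.
    return x + v_new, v_new
```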

III. PARTICLE SWARM OPTIMIZATION WITH LOCAL SEARCH


A. Local Search Algorithm: Hill Climbing

In [11], hill-climbing algorithms were used for "local exploration" to obtain the local optimum, while a genetic algorithm was responsible for "global exploitation". Because hill-climbing algorithms are easy to implement and offer flexibility in the motion of particles, we also used a hill-climbing algorithm to begin our study of particle swarm optimization combined with local search. The hill-climbing algorithm uses random local search to determine the direction and size of each new step. The terminology in the literature is not uniform; in our study, hill climbing is the algorithm presented in the following Procedure HC.

Procedure HC
1. Initialization: specify the neighborhood function φ and select an initial solution I0;
2. Set the loop counter k = 1;
3. While k < K
   3.1 Generate Ik according to φ;
   3.2 Calculate the change in objective function value Δ = C(Ik) - C(Ik-1);
   3.3 Accept solution Ik if Δ < 0;
   3.4 k = k + 1;
4. End
End Procedure

In our algorithm, φ was designed to generate a new candidate solution as in [5], as follows:

$$\text{new solution} = \text{current solution} + r \cdot (1 - 2 \cdot rand()) \qquad (3)$$

where r represents the changing range around the original particle and rand() is a random value between 0 and 1. A brief sketch of this procedure follows.
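The following is a minimal Python sketch of Procedure HC, under the assumption that the move of equation (3) is applied coordinate-wise; the function name and signature are ours:

```python
import random

def hill_climb(cost, solution, r, K):
    """Procedure HC with the neighborhood move of equation (3)."""
    best = list(solution)
    best_cost = cost(best)
    for _ in range(K):
        # Equation (3): perturb each coordinate within a range of size r.
        candidate = [x + r * (1 - 2 * random.random()) for x in best]
        candidate_cost = cost(candidate)
        # Step 3.3: accept the candidate only if it strictly improves (delta < 0).
        if candidate_cost - best_cost < 0:
            best, best_cost = candidate, candidate_cost
    return best, best_cost
```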

B. PSO-LS

In PSO-LS, the particle swarm optimization algorithm is applied for global solution search. Each particle has a chance of self-improvement by applying the local search algorithm before it adjusts its position according to the best position found by itself and the best position found by its neighbors. The pseudocode of PSO-LS is as follows.

Begin PSO-LS
1. Initialize the parameters, i.e., maxIteration, population size, and K;
2. Randomly generate P0;
3. Calculate the fitness of each particle in P0;
4. Set Pi and Pg;
5. While t < maxIteration
   5.1 For every particle in the swarm, do Local-Search() as shown in Procedure HC;
   5.2 Update Pi and Pg;
   5.3 Update the velocity according to formula (1);
   5.4 Limit the velocity to [Vmin, Vmax]^D;
   5.5 Update the position according to formula (2);
   5.6 Evaluate the fitness of each particle in Pt;
   5.7 If needed, update Pi and Pg;
   5.8 t = t + 1;
6. End while
End PSO-LS

In PSO-LDW, particles adjust their positions only according to swarm information. PSO has a strong ability to locate promising regions of the search space but can easily become trapped in a local minimum, whereas local search has a strong ability to find the local optimum. During the local search, each particle searches for its own local optimum; it may not reach the global optimum, owing to the limits on the search space and the probability of falling into a local basin. After searching for the local optimum, the particle is regarded as matured, and the PSO update moves it away from the local optimum point, giving it a greater chance of reaching the global optimum. In PSO-LS, we tried to strike a balance between particle swarm search and local search using the parameter K of the local search. Moreover, local search was applied to all particles in every iteration. We also used another parameter, T, in our modified PSO-LS, where local search is applied to specific good solutions only once every T generations. A sketch combining these steps appears below.
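The following Python sketch (ours, not the authors' implementation; parameter defaults loosely echo the settings reported in Section IV, and the local search of Procedure HC is inlined) shows one way the pieces of PSO-LS fit together:

```python
import numpy as np

def pso_ls(fitness, dim, pop_size=20, max_iter=1000, K=4,
           x_min=-100.0, x_max=100.0, r0=10.0, w_start=0.9, w_end=0.4):
    """Sketch of PSO-LS: every particle takes K hill-climbing steps
    (Procedure HC) before the PSO update of equations (1)-(2)."""
    rng = np.random.default_rng()
    x = rng.uniform(x_min, x_max, (pop_size, dim))       # P0
    v = rng.uniform(x_min, x_max, (pop_size, dim))       # Vmin=Xmin, Vmax=Xmax
    f = np.array([fitness(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()                  # Pi
    r = r0
    for t in range(max_iter):
        # Step 5.1: local search (Procedure HC with the move of eq. (3)).
        for i in range(pop_size):
            for _ in range(K):
                cand = x[i] + r * (1 - 2 * rng.random(dim))
                cf = fitness(cand)
                if cf < f[i]:                            # accept if delta < 0
                    x[i], f[i] = cand, cf
        # Step 5.2: refresh pbest/gbest after local search.
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
        # Steps 5.3-5.5: PSO update with linearly decreasing inertia weight.
        w = w_start - (w_start - w_end) * t / max_iter
        v = (w * v
             + 2.0 * rng.random((pop_size, dim)) * (pbest - x)
             + 2.0 * rng.random((pop_size, dim)) * (gbest - x))
        v = np.clip(v, x_min, x_max)                     # velocity clamp
        x = x + v
        # Steps 5.6-5.7: evaluate and update bests.
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        r *= 0.98                                        # shrink the LS range
    return pbest[np.argmin(pbest_f)], pbest_f.min()
```

A call such as `pso_ls(fitness_fn, dim=10, max_iter=1000)` with one of the benchmark functions of Section IV would then correspond to a single run of the 10-dimensional setting.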

C. Modified PSO-LS

In the modified PSO-LS (MPSO-LS), local search is not applied to all particles in the current swarm but to pbest (the best position found by each particle), once every T iterations. The r used in equation (3) decreases linearly as the iteration count increases. The other parts of MPSO-LS are the same as those of PSO-LS. The pseudocode of MPSO-LS is as follows.

Begin MPSO-LS
1. Initialize the parameters, i.e., population size, maxIteration, T, and K;
2. Randomly generate P0;
3. Calculate the fitness of each particle in P0;
4. Set Pi and Pg;
5. While t < maxIteration
   5.1 If MOD(t, T) == 0 then
   5.2 For every pbest found so far, do Local-Search() as shown in Procedure HC;
   5.3 Update Pi and Pg;
   5.4 End If
   5.5 Update the velocity according to formula (1);
   5.6 Limit the velocity to [Vmin, Vmax]^D;
   5.7 Update the position according to formula (2);
   5.8 Evaluate the fitness of each particle in Pt;
   5.9 If needed, update Pi and Pg;
   5.10 t = t + 1;
6. End while
End MPSO-LS

MPSO-LS improves on PSO-LS in two respects; the gating of the local search is sketched below. On the one hand, MPSO-LS executes local search only once every T (T > 1) iterations, so it spends less computation time on local search than PSO-LS does. On the other hand, the local search in MPSO-LS is more efficient than that in PSO-LS, because MPSO-LS executes local search on pbest, and local search on a good particle contributes more to the performance improvement of PSO than local search on a bad one.
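Relative to the PSO-LS sketch above, only steps 5.1-5.4 change. A hedged Python rendering of that change (function name and signature are ours) would replace the per-particle local search with a call like this, once per iteration:

```python
import numpy as np

def refine_pbests(fitness, pbest, pbest_f, t, T, r, K, rng):
    """MPSO-LS steps 5.1-5.4: once every T iterations, run Procedure HC
    on each pbest instead of on every current particle."""
    if t % T != 0:
        return
    dim = pbest.shape[1]
    for i in range(len(pbest)):
        for _ in range(K):
            cand = pbest[i] + r * (1 - 2 * rng.random(dim))  # eq. (3)
            cf = fitness(cand)
            if cf < pbest_f[i]:
                pbest[i], pbest_f[i] = cand, cf
```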

IV. EXPERIMENTS

A. Benchmark Functions

Three well-known benchmark functions with asymmetric initialization range settings were selected as test functions. The first is the Rosenbrock function, described by equation (4):

$$f_1(x) = \sum_{i=1}^{n-1} \left( 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right) \qquad (4)$$

where $x = (x_1, x_2, \dots, x_n)$ is an n-dimensional real-valued vector. The second function is the generalized Rastrigin function, described by equation (5):

$$f_2(x) = \sum_{i=1}^{n} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right) \qquad (5)$$

The third function is the generalized Griewank function, described by equation (6):

$$f_3(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1 \qquad (6)$$
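For reference, our own Python renderings of equations (4)-(6) (not from the paper; note that the Rosenbrock sum runs over adjacent coordinate pairs):

```python
import numpy as np

def rosenbrock(x):
    # Equation (4): sum over adjacent coordinate pairs.
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):
    # Equation (5): generalized Rastrigin function.
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):
    # Equation (6): generalized Griewank function.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```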

B. Comparison with PSO-LDW

PSO-LDW is the PSO with a weight w decreasing linearly from 0.9 to 0.4, values recommended in the PSO literature [12]. The learning rates were c1 = c2 = 2. PSO-LS is our particle swarm optimizer with local search. To limit the computation time spent on local search, only a small number of neighbors of the current solution were examined; K = 4 was verified to be appropriate. The range r was initially set to 0.1 times the search space and multiplied by 0.98 in every iteration. In MPSO-LS, K = 5 was chosen after extensive experiments; r was initially set to 0.1 times the search space and linearly decreased to 0.005 times the search space as the iterations increased, and T = 4. For comparison purposes, all other parameters were given the same settings as in [13]. For each function, three dimensions were tested: 10, 20 and 30; correspondingly, the maximum numbers of generations were set to 1000, 1500 and 2000. To investigate the scalability of the PSO algorithms, three population sizes (20, 40 and 80) were used for each function and dimension. For each experimental setting, 30 runs of the algorithm were conducted. Vmin was set equal to Xmin and Vmax equal to Xmax. The search space and initialization range for each test function are listed in Table I. To give fair indications of relative performance, asymmetric initialization was adopted, following [3]. The settings are collected in the snippet below.
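For convenience, the reported settings gathered into one structure (a reproduction aid of ours, not part of the paper):

```python
# Experimental settings of Section IV.B, collected for reference.
EXPERIMENT = {
    "w": (0.9, 0.4),                     # inertia weight, linearly decreasing [12]
    "c1": 2.0, "c2": 2.0,                # acceleration constants
    "K": {"PSO-LS": 4, "MPSO-LS": 5},    # hill-climbing steps per local search
    "T": 4,                              # MPSO-LS: local search every T iterations
    "r0": 0.1,                           # initial r, as a fraction of the search space
    "pop_sizes": [20, 40, 80],
    "dims_iters": {10: 1000, 20: 1500, 30: 2000},
    "runs": 30,
}
```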

TABLE I
SEARCH SPACE AND INITIALIZATION RANGE

Function   Search Space    Initialization Range
f1(x)      [-100, 100]     [15, 30]
f2(x)      [-10, 10]       [2.56, 5.12]
f3(x)      [-600, 600]     [300, 600]

TABLE II
MEAN FITNESS VALUES FOR THE ROSENBROCK FUNCTION

Pop. Size   Dim.   Iters.   PSO-LS     MPSO-LS    PSO-LDW
20          10     1000     94.9614    6.9822     98.1950
20          20     1500     106.5846   66.3943    201.9157
20          30     2000     221.7635   101.9312   306.3064
40          10     1000     44.8706    4.6763     68.5730
40          20     1500     142.7655   38.4946    175.2207
40          30     2000     108.6016   77.0609    286.7968
80          10     1000     31.3372    3.7649     40.5235
80          20     1500     91.7543    29.4940    91.9224
80          30     2000     82.2594    78.7998    96.3316

TABLE III
MEAN FITNESS VALUES FOR THE RASTRIGIN FUNCTION

Pop. Size   Dim.   Iters.   PSO-LS    MPSO-LS   PSO-LDW
20          10     1000     5.7598    5.0312    5.9518
20          20     1500     21.8615   22.5937   22.9357
20          30     2000     48.9591   47.9527   50.6041
40          10     1000     3.3173    3.4252    3.5531
40          20     1500     17.4682   18.7206   18.2485
40          30     2000     40.1642   39.9494   40.8814
80          10     1000     2.5211    2.2897    2.5474
80          20     1500     13.1108   14.9165   13.7362
80          30     2000     30.4163   28.8675   31.7229

TABLE IV
MEAN FITNESS VALUES FOR THE GRIEWANK FUNCTION

Pop. Size   Dim.   Iters.   PSO-LS   MPSO-LS   PSO-LDW
20          10     1000     0.1208   0.0977    0.0919
20          20     1500     0.0212   0.0269    0.0302
20          30     2000     0.0126   0.0143    0.0179
40          10     1000     0.0960   0.1012    0.0860
40          20     1500     0.0312   0.0283    0.0279
40          30     2000     0.0119   0.0109    0.0143
80          10     1000     0.0648   0.0748    0.0772
80          20     1500     0.0374   0.0250    0.0300
80          30     2000     0.0120   0.0120    0.0301

Tables II, III and IV list the mean fitness values of the best solutions over the 30 trials on the Rosenbrock, Rastrigin and Griewank functions for PSO-LS, MPSO-LS and PSO-LDW, respectively. From Table II, PSO-LS performed better than PSO-LDW, and MPSO-LS outperformed both PSO-LS and PSO-LDW. Tables III and IV show that MPSO-LS and PSO-LS achieved performance similar to PSO-LDW on the Rastrigin and Griewank functions.

From the experimental results listed in Tables II, III and IV, we conclude that the performance of the basic PSO-LS can be improved by applying local search not to all particles but only to specific good ones. MPSO-LS executed local search on specific particles once every several iterations, while PSO-LS executed local search on all particles in every iteration; consequently, MPSO-LS required less computation time than PSO-LS.

V. CONCLUSIONS

In this paper, we proposed a hybrid algorithm composed of particle swarm optimization and hill-climbing local search, named PSO-LS. PSO-LS integrates the self-improvement mechanism of memetic algorithms. The experimental results showed that the algorithm has competitive potential for function optimization.

The current forms of PSO-LS and the modified PSO-LS still need improvement through the design of better hybrid methods combining PSO with LS. The impact of the local search strategy on particle swarm optimization will be studied further in future work.

REFERENCES

[1] J. Kennedy, R.C. Eberhart: Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ (1995) 1942-1948

[2] R.C. Eberhart, J. Kennedy: A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, IEEE Service Center, Piscataway, NJ (1995) 39-43

[3] Y. Shi, R.C. Eberhart: A modified particle swarm optimizer. Proceedings of the IEEE Congress on Evolutionary Computation, Piscataway, NJ (1998) 69-73

[4] R.C. Eberhart, Y. Shi: Comparison between genetic algorithms and particle swarm optimization. Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, Springer-Verlag, Berlin, San Diego, CA (1998)

[5] Xi-Huai Wang, Jun-Jun Li: Hybrid particle swarm optimization with simulated annealing. Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, Shanghai (2004) 2402-2405

[6] W.J. Xia, Z.M. Wu, W. Zhang, et al.: A new hybrid optimization algorithm for the job-shop scheduling problem. Proceedings of the 2004 American Control Conference (2004) 5552-5557

[7] Z.H. Cui, J.C. Zeng, X.J. Cai: A new stochastic particle swarm optimizer. Proceedings of the 2004 Congress on Evolutionary Computation (2004) 316-319

[8] B. Dengiz, F. Altiparmak, A.E. Smith: Local search genetic algorithm for optimal design of reliable networks. IEEE Transactions on Evolutionary Computation (1997) 179-188

[9] G. Folino, C. Pizzuti, G. Spezzano: Parallel hybrid method for SAT that couples genetic algorithms and local search. IEEE Transactions on Evolutionary Computation (2001) 323-334

[10] Chu Kwong Chak, Gang Feng: Accelerated genetic algorithms combined with local search techniques for fast and accurate global search. IEEE International Conference on Evolutionary Computation (1995)

[11] J.-M. Renders, H. Bersini: Hybridizing genetic algorithms with hill-climbing methods for global optimization: two possible ways. Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (1994) 312-317

[12] Y. Shi, R.C. Eberhart: Parameter selection in particle swarm optimization. Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, New York (1998) 591-600

[13] Y. Shi, R.C. Eberhart: Empirical study of particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1999), Piscataway, NJ (1999) 1945-1950
