

Applied Mathematics and Computation 186 (2007) 789–793

www.elsevier.com/locate/amc

A novel particle swarm optimizer with time-delay

Tao Xiang a,*, Kwok-wo Wong b, Xiaofeng Liao a

a College of Computer Science, Chongqing University, 174# Shazheng Street, Shapingba District, Chongqing 400044, China
b Department of Electronic Engineering, City University of Hong Kong, Hong Kong

Abstract

Particle swarm optimization (PSO) is a relatively new population-based heuristic optimization technique. It has been widely applied to optimization problems for its simplicity and its capability of finding fairly good solutions rapidly. However, it may be trapped in local optima and fail to converge to the global optimum. In this paper, the concept of time-delay is introduced into PSO to control the process of information diffusion and preserve particle diversity. Four time-delay schemes are then proposed. Experimental results verify their superiority in both robustness and efficiency. Conclusions are drawn in the end.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Particle swarm optimization; Diversity; Time-delay

1. Introduction

Particle swarm optimization (PSO), first introduced by Kennedy and Eberhart in 1995 [1,2], is a population-based stochastic optimization algorithm. It is inspired by the social behavior of animals such as bird flocking and fish schooling. Over the past decade, much attention has been drawn to the mechanism and the variants of PSO due to its simplicity in implementation and efficiency in tackling complex optimization problems.

The PSO approach represents each potential solution of the target function directly by a particle. Each particle has its own position and velocity, with initial values set randomly. All particles then "fly" through the solution space and update their positions until they find the optimal solution. During this iterative process, each particle's velocity is adjusted according to its own experience and to social cooperation.

Although PSO can find fairly good solutions rapidly [3], it may be trapped in local optima and fail to converge to the global optimum. To overcome this problem, various schemes have been investigated to maintain particle diversity. Related work can be classified into two categories. In the first category, the topology of PSO is studied, and several locally connected topological structures are suggested to keep the diversity [4–6]. In the second category, attention is paid to the velocity updating rule of the particles. Some improved velocity updating rules inspired by theories from other fields have been proposed. A dissipative PSO is developed

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2006.08.049

* Corresponding author.
E-mail addresses: [email protected] (T. Xiang), [email protected] (K.-w. Wong), [email protected] (X. Liao).


according to the self-organization of dissipative structure [7]. As passive congregation is an important biological force preserving swarm integrity, it is introduced into PSO [8]. The concept of mutation in the Genetic Algorithm is also incorporated into PSO [9].

In this paper, the essence of keeping diversity is investigated. Based on this analysis, time-delay, which is prevalent in the real world, is introduced into PSO (TPSO). Four kinds of time-delay schemes are then proposed. Their superiority over traditional PSO algorithms is verified by numerical simulation.

The rest of this paper is organized as follows. Section 2 introduces the standard PSO algorithm. The proposed scheme can be found in Section 3. Section 4 gives the experimental configuration and results. Conclusions are drawn in the end.

2. Standard PSO

In an n-dimensional search space, at the t-th iteration, the position vector and the velocity vector of the i-th particle can be represented as $X_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,n}^t)$ and $V_i^t = (v_{i,1}^t, v_{i,2}^t, \ldots, v_{i,n}^t)$, respectively. The velocity and position updating rules are given by (1) and (2), respectively:

$v_{i,j}^{t+1} = \chi \left( v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (gbest_j^t - x_{i,j}^t) \right)$,   (1)

$x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1}$   (2)

for i = 1, 2, 3, ..., p and j = 1, 2, 3, ..., n, where p is the number of particles and n is the dimension of the search space. $r_1$ and $r_2$ are uniformly distributed random numbers between 0 and 1, i.e. $r_1, r_2 \in U(0,1)$. $c_1$ and $c_2$ are positive constants referred to as acceleration constants. $\chi$ is the constriction factor used to control the magnitude of the velocity. In practice, $c_1 = c_2 = 2.05$ and $\chi = 0.729$ were recommended by Clerc and Kennedy [10]. $pbest_{i,j}$ is the value along the j-th dimension of the best position found so far by the i-th particle. $gbest_j$ is the value along the j-th dimension of the global best position found so far by all particles in the swarm (gbest). Meanwhile, a maximum velocity ($V_{max}$) is defined for each dimension in order to clamp the excessive roaming of particles.
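The update rules (1) and (2) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the constants follow the Clerc–Kennedy values cited above, while the sphere objective, the Vmax value, and the initialization range are assumptions chosen to mirror benchmark f1 in Table 1.

```python
import random

# Clerc-Kennedy constriction parameters cited in the text (assumed setup).
CHI, C1, C2 = 0.729, 2.05, 2.05
VMAX = 100.0  # clamp for each velocity component (illustrative choice)

def sphere(x):
    """Benchmark f1: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def spso_step(positions, velocities, pbest, gbest):
    """Apply Eqs. (1) and (2) to every particle, in place."""
    for i in range(len(positions)):
        for j in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            v = CHI * (velocities[i][j]
                       + C1 * r1 * (pbest[i][j] - positions[i][j])
                       + C2 * r2 * (gbest[j] - positions[i][j]))
            velocities[i][j] = max(-VMAX, min(VMAX, v))  # clamp by Vmax
            positions[i][j] += velocities[i][j]          # Eq. (2)

def run_spso(num_particles=30, dim=2, iterations=200, seed=1):
    """One full SPSO run on the sphere function; returns the final gbest fitness."""
    random.seed(seed)
    positions = [[random.uniform(50.0, 100.0) for _ in range(dim)]
                 for _ in range(num_particles)]
    velocities = [[0.0] * dim for _ in range(num_particles)]
    pbest = [p[:] for p in positions]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iterations):
        spso_step(positions, velocities, pbest, gbest)
        for i, p in enumerate(positions):
            if sphere(p) < sphere(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=sphere)[:]
    return sphere(gbest)
```

On a low-dimensional unimodal function such as the sphere, this constricted SPSO typically converges to a near-zero fitness within a few hundred iterations, which matches the "fairly good solutions rapidly" behavior described above.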

3. The proposed PSO with time-delay

In standard PSO, the particles tend to fly towards the gbest found so far by all particles. This social cooperation helps them to discover fairly good solutions rapidly. However, it is exactly this instant social collaboration that makes particles stagnate on local optima and fail to converge to the global optimum. Once a new gbest is found, it spreads over all particles immediately, and so all particles are attracted to this position in the subsequent iterations until another better solution is found. Therefore, the stagnation of PSO is caused by the overspeed diffusion of the newly found gbest. Most improved schemes target this problem. For example, the local version of PSO [4–6] employs a locally connected topology to prevent the instant spread of gbest to all particles; the dissipative PSO proposed in [7] and the hybrid PSO studied in [9] introduce perturbation into the diffusion of gbest; an additional term is added to (1) to alleviate the attraction of gbest in [8].

Since the essence of keeping diversity is to prevent or perturb the overspeed diffusion of gbest, a novel PSO with time-delay (TPSO) is proposed in this paper. Time-delay, ubiquitous both in nature and in society, is a simple and efficient way to control the process of information diffusion. It is introduced into PSO to reflect actual social behavior. In the real world, time-delay, no matter how large or small, is inevitable in information diffusion. Therefore, the information perceivable by individuals on distinct diffusion routes differs at any given time. Inspired by this, time-delay is applied to the velocity updating of PSO, i.e., once a new gbest is discovered, it spreads over all other particles with a certain time-delay. In this case, the gbest used to update the particle velocity may not be the latest one, and different particles may use different gbest in the same iteration. Particle diversity thus increases.

Four kinds of TPSO (TPSO-1, TPSO-2, TPSO-3, and TPSO-4) with different time-delay schemes are proposed. The velocity updating equation is rewritten as

$v_{i,j}^{t+1} = \chi \left( v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (pgbest_{i,j}^t - x_{i,j}^t) \right)$,   (3)

where $pgbest_i^t$ denotes the perceivable gbest of the i-th particle in the t-th iteration. The schemes for updating pgbest in the TPSO variants are described as follows.


(1) TPSO-1: For the i-th particle, r_i = rand(0, 1). If r_i > q, update pgbest_i to the latest gbest; otherwise, keep pgbest_i unchanged.
(2) TPSO-2: For the i-th particle, s_i = rand(0, Tmax). Delay s_i steps before updating pgbest_i to the latest gbest.
(3) TPSO-3: Store the gbest of each step in an array gbest whose index indicates the iteration number t. For the i-th particle, use a pointer ptr_i to record the currently used pgbest_i, i.e. pgbest_i = gbest[ptr_i]. Increase ptr_i with a certain probability q.
(4) TPSO-4: Store the gbest of each step in an array gbest whose index indicates the iteration number t. Each particle has a pointer ptr_i to record the currently used pgbest_i, i.e. pgbest_i = gbest[ptr_i]. In the t-th iteration, if i ≤ t, increase ptr_i.

In the aforementioned schemes, rand(0, m) generates a uniformly distributed random number between 0 and m, q is a given threshold probability, and the pointer increment is 1.
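The first two schemes can be sketched in code as follows. This is an illustrative reading of the descriptions above, not the authors' implementation: the particle bookkeeping (the `pending` queue and the `on_new_gbest`/`step` hooks) is an assumed structure; only the delay logic mirrors the paper.

```python
import random

def tpso1_update(pgbest, gbest, q=0.5):
    """TPSO-1: each particle adopts the latest gbest only with probability
    P(rand(0,1) > q); otherwise its pgbest stays stale."""
    for i in range(len(pgbest)):
        if random.random() > q:
            pgbest[i] = gbest[:]

class Tpso2Particle:
    """TPSO-2: a newly found gbest reaches this particle only after a
    random delay of rand(0, Tmax) iterations."""
    def __init__(self, t_max=5):
        self.t_max = t_max
        self.pgbest = None
        self.pending = None  # (remaining delay steps, delayed gbest)

    def on_new_gbest(self, gbest):
        # Sample the per-particle delay when a new gbest is announced.
        delay = random.randint(0, self.t_max)
        self.pending = (delay, gbest[:])

    def step(self):
        # Called once per iteration: deliver the delayed gbest when due.
        if self.pending is not None:
            remaining, g = self.pending
            if remaining <= 0:
                self.pgbest = g
                self.pending = None
            else:
                self.pending = (remaining - 1, g)
```

In a full TPSO run, `pgbest` would replace `gbest` in the velocity update of Eq. (3); because each particle's perceived best lags independently, different particles are pulled towards different attractors in the same iteration, which is exactly the diversity mechanism described above.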

4. Numerical simulation

4.1. Experimental setup

The population size is set to 30 in all PSO algorithms. Six prevalent benchmarks described in [6] are employed. Their configuration is listed in Table 1. The maximum number of PSO iterations is set to 5000 in each run. In order to eliminate stochastic discrepancy, each algorithm is repeated 50 times.

Table 1. Benchmark configuration for simulations

| Function | Name | Trait | Dimension | Domain | Initialization range | Minimum | Threshold |
| f1 | Sphere | Unimodal | 30 | [-100, 100] | [50, 100] | 0 | 0.01 |
| f2 | Rosenbrock | Unimodal | 30 | [-30, 30] | [15, 30] | 0 | 100 |
| f3 | Rastrigin | Multimodal | 30 | [-5.12, 5.12] | [2.56, 5.12] | 0 | 100 |
| f4 | Griewank | Multimodal | 30 | [-600, 600] | [300, 600] | 0 | 0.1 |
| f5 | Ackley | Multimodal | 30 | [-32, 32] | [16, 32] | 0 | 0.1 |
| f6 | Schaffer | Multimodal | 2 | [-100, 100] | [50, 100] | 0 | 0.00001 |

Table 2. Success ratio over 50 runs

| | f1 | f2 | f3 | f4 | f5 | f6 |
| SPSO | 1 | 1 | 1 | 0.86 | 0.06 | 0.68 |
| TPSO-1 | 1 | 1 | 1 | 0.92 | 0.04 | 0.64 |
| TPSO-2 | 1 | 1 | 1 | 1 | 0.1 | 0.72 |
| TPSO-3 | 0 | 0.1 | 1 | 0 | 0 | 0.84 |
| TPSO-4 | 1 | 0.92 | 1 | 0.7 | 0.1 | 0.88 |

Table 3. Mean global best fitness after 5000 iterations

| | f1 | f2 | f3 | f4 | f5 | f6 |
| SPSO | 8.4775e-075 | 9.6903 | 45.131 | 0.060107 | 2.4719 | 0.0031091 |
| TPSO-1 | 9.7889e-073 | 15.927 | 45.052 | 0.032249 | 2.0896 | 0.0034977 |
| TPSO-2 | 1.8511e-072 | 7.418 | 44.176 | 0.027192 | 2.2827 | 0.0027205 |
| TPSO-3 | 0.3551 | 183.47 | 34.466 | 0.53788 | 0.2895 | 0.0013605 |
| TPSO-4 | 6.8619e-010 | 47.706 | 40.137 | 0.091556 | 2.657 | 0.0011659 |


Table 4. Mean number of iterations before success

| | f1 | f2 | f3 | f4 | f5 | f6 |
| SPSO | 360.76 | 451.78 | 109.42 | 322.88 | 315.33 | 597.47 |
| TPSO-1 | 381 | 590.2 | 127.7 | 334.17 | 351 | 413.66 |
| TPSO-2 | 388 | 534.52 | 125.6 | 334.82 | 348.2 | 459.19 |
| TPSO-3 | Inf | 4486.4 | 357.28 | Inf | Inf | 554.33 |
| TPSO-4 | 1329.2 | 1526.3 | 193.76 | 972.11 | 1037.8 | 384.07 |

[Fig. 1: six plot panels omitted in this text version.]

Fig. 1. Best fitness trendlines of TPSO-1, TPSO-2, TPSO-3, TPSO-4 and SPSO on different benchmarks: (a) f1 (Sphere); (b) f2 (Rosenbrock); (c) f3 (Rastrigin); (d) f4 (Griewank); (e) f5 (Ackley); (f) f6 (Schaffer).


In the following experiments, the four kinds of TPSO proposed in this paper and the standard PSO (SPSO) are simulated. In TPSO-1 and TPSO-3, q is set to 0.5, while Tmax is chosen as 5 in TPSO-2.
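The robustness and efficiency statistics reported in Tables 2 and 4 could be aggregated from such repeated runs roughly as follows. This is a hedged sketch of the bookkeeping only; the paper does not give its harness, and the input format (the first iteration at which the best fitness falls below the threshold, or None on failure) is an assumption.

```python
import statistics

def summarize(results):
    """Aggregate one benchmark's 50 runs.

    results: list where each entry is the first iteration at which the run's
    best fitness fell below the success threshold, or None if it never did.
    Returns (success ratio, mean iterations among successful runs); the mean
    is Inf when no run succeeded, matching the Inf entries in Table 4.
    """
    successes = [r for r in results if r is not None]
    ratio = len(successes) / len(results)
    mean_iters = statistics.mean(successes) if successes else float("inf")
    return ratio, mean_iters
```

For example, two successful runs (at iterations 100 and 300) out of four would yield a success ratio of 0.5 and a mean of 200 iterations before success.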

4.2. Results and discussion

The experimental results are listed in Tables 2–4. Table 2 shows the success ratio, which indicates the robustness of each scheme. On the first two unimodal benchmarks (f1 and f2), the TPSO schemes gain no advantage; the results of TPSO-3 and TPSO-4 are sometimes even worse than those of SPSO. However, the superiority of TPSO becomes apparent on the multimodal functions (f3–f6). TPSO-2 outperforms SPSO on all multimodal functions.

The average gbest after 5000 iterations over 50 runs is listed in Table 3. The results are consistent with those shown in Table 2. SPSO performs better than almost all TPSO schemes on unimodal functions, but when multimodal functions are considered, the situation is reversed. TPSO-2 outperforms SPSO on f3 to f6. The detailed trendlines of gbest during the iterations are depicted in Fig. 1.

Table 4 gives the mean number of iterations needed to converge to the a priori known minimum within the threshold error. As the TPSO schemes introduce time-delay to decelerate the diffusion of gbest, they need more iterations to achieve the same search precision as SPSO does. However, the increase is not substantial for TPSO-1 and TPSO-2.

From the above analyses, TPSO-2 is found to be the best among the four TPSO schemes. TPSO-3 performs badly under all criteria due to its over-emphasized time-delay.

5. Conclusion

In this paper, the concept of time-delay is introduced into PSO to control the diffusion process of gbest and preserve particle diversity. Four time-delay schemes that involve neither complex topological structures nor time-consuming computation are proposed. They are compared with the SPSO algorithm by numerical simulation on prevalent benchmarks. The results reveal that TPSO schemes with an appropriate time-delay show great superiority in optimizing multimodal functions. In particular, TPSO-2 outperforms SPSO on most benchmarks without a substantial increase in the number of iterations. However, an over-emphasized time-delay (e.g. TPSO-3) may deteriorate the performance of PSO.

References

[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948.

[2] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 1995, pp. 39–43.

[3] P.J. Angeline, Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences, in: Proceedings of the 7th International Conference on Evolutionary Programming, London, UK, 1998, pp. 601–610.

[4] J. Kennedy, Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance, in: Proceedings of the 1999 Congress on Evolutionary Computation (CEC'99), Washington, DC, USA, 1999, pp. 1931–1938.

[5] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: Simpler, maybe better, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 204–210.

[6] S. Janson, M. Middendorf, A hierarchical particle swarm optimizer and its adaptive variant, IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 35 (6) (2005) 1272–1282.

[7] X. Xie, W. Zhang, Z. Yang, A dissipative particle swarm optimization, in: Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02), Hawaii, USA, 2002, pp. 1456–1461.

[8] S. He, Q.H. Wu, J.Y. Wen, J.R. Saunders, R.C. Paton, A particle swarm optimizer with passive congregation, Biosystems 78 (2004) 135–147.

[9] A. Stacey, M. Jancic, I. Grundy, Particle swarm optimization with mutation, in: Proceedings of the 2003 Congress on Evolutionary Computation (CEC'03), Canberra, Australia, 2003, pp. 72–79.

[10] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58–73.