
2012 - International Conference on Emerging Trends in Science, Engineering and Technology

A Fast and Inexpensive Particle Swarm Optimization for Drifting Problem-Spaces

Zubin Bhuyan
Department of Computer Science and Engineering, Tezpur University, Tezpur, India
[email protected]

Sourav Hazarika
Department of Computer Science and Engineering, Tezpur University, Tezpur, India
[email protected]

Abstract— Particle Swarm Optimization is a class of stochastic, population-based optimization techniques that are mostly suitable for static problems. However, real-world optimization problems are time-variant, i.e., the problem space changes over time. Several studies have addressed this dynamic optimization problem using particle swarms. In this paper we probe the issues of tracking and optimizing particle swarms in a dynamic system where the problem-space drifts in a particular direction. Our assumption is that the approximate amount of drift is known, but the direction of the drift is unknown. We propose a Drift Predictive PSO (DriP-PSO) model which does not incur high computation cost, and is fast and accurate. The main idea behind this technique is to use a few stagnant particles to determine the approximate direction in which the problem-space is drifting, so that the particle velocities may be adjusted accordingly in the subsequent iteration of the algorithm.

Keywords- pso, dynamic exploration, drifting problem-space

I. INTRODUCTION

Swarm intelligence may be defined as the collective behavior of simple rule-following agents in a decentralized system, where the overall behavior of the entire system appears intelligent to an external observer. In nature, this kind of behavior is seen in bird flocks, fish schools, ant colonies and animal herds. Given a large space of possibilities, a population of agents is often able to solve difficult problems by finding multivariate solutions or patterns through a simplified form of social interaction [1].

Particle swarm optimization was first put forward by Kennedy and Eberhart in 1995 [2, 3]. The PSO algorithm exhibits all common evolutionary computation characteristics, viz., initialization with a random population, searching for optima by updating generations, and updating generations based on previous ones. It has been implemented with different approaches for a wide range of generic problems, as well as for case-specific applications focused on a precise requirement [4, 5, 6, 7].

However, almost all practical problems are time-varying or dynamic, i.e., the environment and the characteristics of the global optimum change over time. More formally, a dynamic system is one whose state changes in a repeated or non-repeated manner. In such cases a standard PSO may not give optimal results. There are also several ways in which a system may change over time: the changes may occur periodically, in some predefined sequence, or in random fashion. References [8, 9] define three kinds of dynamic systems. First, the location of the optimum value in the problem space may change. Second, the location may remain constant while the optimum value varies. Third, both the location and the value of the optimum may vary.

II. BACKGROUND

A. Standard Particle Swarm Optimization

PSO is initialized with a population of random solutions called particles. Each particle moves about, or "flies", in the given problem space with a velocity that varies continuously according to its own flying experience as well as that of other particles. In a D-dimensional space the location of the i-th particle is represented as X_i = (x_i1, …, x_id, …, x_iD), and its velocity as V_i = (v_i1, …, v_id, …, v_iD). The best previous position of the i-th particle is called pbest_i. The best pbest among all the particles is called the gbest. Equations (1a) and (1b) are used to update the particles' velocity and position.

v_id = w × v_id + c1 × rand() × (p_id − x_id) + c2 × rand() × (p_gd − x_id)   (1a)

x_id = x_id + v_id   (1b)

Equation (1a) calculates the new velocity of a particle based on its previous velocity (v_id), the location where the particle achieved its best value (pbest_i, or p_id), and the location where the best of all pbest values was achieved (gbest, or p_gd); w is the inertia weight, c1 and c2 are the cognitive and social acceleration constants, and rand() is a random number generator function. The new position of each particle is then updated using equation (1b). In both equations the subscript d indicates the d-th dimension.

If the current fitness of any particle is better than its pbest, then the pbest is replaced by the current solution. Again, if that pbest is better than the existing gbest, then the pbest becomes the new gbest. This process is repeated until a satisfactory result is obtained.
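The update step described by equations (1a) and (1b) can be sketched in Python as follows (a minimal illustration, not the authors' implementation; the function name and list-based representation are our own, and the default constants follow the values used in Section IV):

```python
import random

def pso_update(x, v, pbest, gbest, w=0.729844, c1=1.49618, c2=1.49618):
    """One velocity/position update for a single particle (equations 1a, 1b).

    x, v, pbest are this particle's position, velocity, and best-known
    position; gbest is the swarm's best-known position. All are lists of
    equal length D (one entry per dimension).
    """
    new_v = [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])   # cognitive pull
        + c2 * random.random() * (gbest[d] - x[d])   # social pull
        for d in range(len(x))
    ]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

Note that when a particle sits exactly at both pbest and gbest, the two attraction terms vanish and only the inertia term w × v_id remains, which is what eventually slows the swarm down around an optimum.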

B. PSO for Dynamic Systems

Several propositions have been made for modifying the PSO algorithm to address the dynamic optimization problem, i.e., scenarios where the problem space changes over time. In such situations the particles may lose their global exploration ability due to the changing position of the global optimum, which usually leads to unsatisfactory, sub-optimal results.

Eberhart and Hu in [8] use a "fixed gbest-value method" in which the gbest value and the second-best gbest value are monitored. If these two values do not change for a certain number of iterations, a possible optimum change is declared. Both values are monitored in order to increase accuracy and prevent false alarms.

Another very successful PSO algorithm for dynamic systems is the charged PSO developed by Blackwell and Bentley [10]. The driving principle behind charged PSO is a good balance between exploration and exploitation, which results in a continuous search for better solutions while refining the current solution. Rakitianskaia and Engelbrecht in [11] further modified the charged PSO (CPSO) by incorporating the concept of the cooperative split PSO (CSPSO), an approach in which the search space is divided into smaller subspaces, with each subspace being optimised by a separate swarm [12].

Hashemi and Meybodi introduced cellular PSO [13], a hybrid of particle swarm optimization and cellular automata in which the population of particles is split into different groups across the cells of a cellular automaton by imposing a restriction on the number of particles in each cell. This was further modified for dynamic systems by introducing temporary quantum particles [14].

III. PROPOSED DRIFT PREDICTIVE PSO MODEL (DRIP-PSO)

In this paper we propose a cost-effective and accurate PSO model, DriP-PSO, designed specifically for the scenario where the problem-space drifts in an unknown direction over time and an approximate amount of drift is known. The algorithm determines the approximate direction in which the problem-space is drifting so that the particle velocities may be adjusted accordingly in the subsequent iteration. This is achieved by selecting a few stagnant particles which try to detect the direction of drift.

In each iteration of the DriP-PSO algorithm, a small number of stagnant particles are selected randomly. The stagnant particles do not change their positions for that particular round. Each stagnant particle then compares its previous fitness value to its current fitness value. If a change is detected, the stagnant particle generates four sub-particles resting on a circular orbit of radius ρ, with the stagnant particle at the centre of the orbit and the sub-particles placed at right angles to one another.

For example, consider a particle P_i which has been selected as a stagnant particle for a particular round. In order to determine the direction of drift we select two sub-particles, S_j,Pi and S_k,Pi, from among the four sub-particles of P_i, such that the previous fitness of P_i lies between the fitness values of the two selected sub-particles. The approximate direction of drift, i.e. the direction in which the adjustment ξ is required, is calculated by equation (2).

θ(ξ) = θ_Sj,Pi + (θ_Sk,Pi − θ_Sj,Pi) × (α − S_j,Pi) / (S_k,Pi − S_j,Pi)   (2)

In equation (2), θ(ξ) is the angle representing the direction in which the adjustment ξ has to be made, α is the previous fitness value of the particle P_i, S_k,Pi and S_j,Pi are the fitness values of the selected sub-particles between which α lies, and θ_Sj,Pi and θ_Sk,Pi are the angles at which the selected sub-particles are oriented.

If the previous fitness of the particle is greater than the fitness values of all the sub-particles, then the direction along the sub-particle with the highest value is chosen. If the previous fitness of the particle is smaller than the fitness values of all the sub-particles, then the direction along the sub-particle with the smallest value is chosen. A graphical representation is shown in Fig. 1.

Figure 1. Graphical representation of drift evaluation using sub-particles S_j,Pi and S_k,Pi of particle P_i. Orbit radius is ρ.
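Equation (2), together with the two boundary cases described above, can be sketched as a small Python function (the function and argument names are ours; fitness values are assumed to be plain floats and angles are in radians):

```python
def drift_direction(alpha, fj, fk, theta_j, theta_k):
    """Approximate drift angle θ(ξ) as in equation (2).

    alpha: the stagnant particle's previous fitness.
    fj, fk: fitness values of the two selected sub-particles S_j and S_k.
    theta_j, theta_k: angles at which those sub-particles sit on the
    orbit of radius ρ around the stagnant particle.
    """
    # Boundary cases: alpha lies outside [fj, fk].
    if alpha >= max(fj, fk):
        # Direction along the sub-particle with the highest fitness.
        return theta_k if fk > fj else theta_j
    if alpha <= min(fj, fk):
        # Direction along the sub-particle with the smallest fitness.
        return theta_j if fj < fk else theta_k
    # Otherwise interpolate linearly between the two sub-particle angles.
    return theta_j + (theta_k - theta_j) * (alpha - fj) / (fk - fj)
```

For instance, if the previous fitness falls exactly halfway between the two sub-particle fitnesses, the predicted drift angle falls halfway between their two orientations.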

Then, for all stagnant particles, the values of ξ are averaged with weights and added as an extra term to the velocity equation, as shown in equation (4a). The weights are evaluated using the occurrence frequency of the adjustment values. The weight w_i for adjustment ξ_i is calculated using equation (3).

w_i = f_ξi / n   (3)

Here f_ξi is the number of times the value ξ_i occurs, and n is the total number of stagnant particles. The significance of the weight is that values of ξ which occur more frequently are given more importance.
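A minimal sketch of the frequency-based weighting of equation (3) (the helper name is ours; the ξ values are assumed to be hashable, e.g. rounded floats or tuples):

```python
from collections import Counter

def adjustment_weights(xis):
    """Weight each observed adjustment ξ_i by its occurrence frequency
    (equation 3): w_i = f_ξi / n, where n is the number of stagnant
    particles. Adjustments seen more often receive a larger weight."""
    n = len(xis)
    freq = Counter(xis)                  # f_ξi for each distinct value
    return [freq[xi] / n for xi in xis]  # one weight per observation
```

For example, adjustment_weights([1, 1, 2, 3]) yields [0.5, 0.5, 0.25, 0.25]: the adjustment observed twice receives twice the weight of the ones observed once.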

The assumption here is that the drift rate is approximately ρ, i.e., the sub-particle orbit radius. The value of ρ is chosen in such a way that any change in the particle's vicinity, due to problem-space drift, is contained in close proximity to the sub-particles.

Algorithm 1 Drift Predictive PSO:

1. Initialize a population of particles scattered randomly over the problem space, with arbitrary initial velocities.

2. Randomly select n stagnant particles.

3. For each particle:

   a. Evaluate the fitness of the particle.

   b. Evaluation of drift: for each stagnant particle, evaluate the probable drift using equation (2).

   c. Calculate the weight w_i corresponding to each ξ.

   d. Change the velocity according to equation (4a):

      v_id = w × v_id + c1 × rand() × (p_id − x_id) + c2 × rand() × (p_gd − x_id) + (Σ_i ξ_id w_i) / (Σ_i w_i)   (4a)

      Change the position according to equation (4b):

      x_id = x_id + v_id   (4b)

      Here w is the inertia weight, c1 and c2 are the cognitive and social acceleration constants, and rand() is a random number generator function; i is the particle index, g is the index of the particle with the best fitness, ξ_i is the adjustment required due to the dynamic change in the problem space, and w_i is its weight, reflecting the accuracy with which the drift is predicted. The subscript d indicates the d-th dimension.

   e. If the current fitness of a particle is better than its pbest, set the pbest value to the current fitness and the pbest location to the current location.

   f. If the current fitness is better than the gbest, set the gbest value to the current fitness and the gbest location to the current location of the particle.

   g. Loop back to Step 2 until the end criterion is satisfied or the maximum number of iterations is reached.

Algorithm 1 illustrates the step-by-step working of the proposed DriP-PSO for drifting problem-spaces.
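Step 3(d), the velocity update of equation (4a) with the drift-correction term, might be sketched as follows (a sketch, not the authors' code; all names are ours, and the per-dimension weighted average of the ξ vectors is our reading of the "averaged with weights" description):

```python
import random

def drip_velocity_update(x, v, pbest, gbest, xis, wts,
                         w=0.729844, c1=1.49618, c2=1.49618):
    """Velocity update with the drift-correction term of equation (4a).

    xis is a list of drift adjustment vectors (one per stagnant particle,
    each of length D) and wts the corresponding frequency weights; the
    extra term is their weighted average in each dimension.
    """
    D = len(x)
    total_w = sum(wts)
    # Weighted average of the predicted drift, per dimension.
    drift = [sum(wts[i] * xis[i][d] for i in range(len(xis))) / total_w
             for d in range(D)]
    return [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])
        + c2 * random.random() * (gbest[d] - x[d])
        + drift[d]                                   # drift correction
        for d in range(D)
    ]
```

With the standard terms zeroed out (particle at rest on its pbest and gbest), the update reduces to the weighted-average drift itself, which is exactly the correction the stagnant particles contribute.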

IV. SIMULATION AND EXPERIMENTAL RESULTS

We designed and implemented a test tool in WPF (.NET Framework 4.0) for testing and comparing the proposed DriP-PSO model with the standard PSO. Fig. 2 shows a screenshot of the PSO test tool. The proposed PSO model has been tested on five functions, viz. the Sphere, Step, Rastrigin, Rosenbrock and an arbitrary peaks function, as shown in Table 1.

Figure 2. PSO test tool screenshot showing the arbitrary peaks function

In order to simulate a dynamic system, we designed the test tool to drift the problem space in any direction by applying an offset λ in every dimension, as given by equation (5).

f_{n+1}(x, y) = f_n(x − λ, y − λ)   (5)
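Equation (5) amounts to composing the fitness function with a coordinate shift; a one-line sketch (the helper name and the sphere example are ours):

```python
def drifted(f, lam):
    """Return a version of fitness function f whose landscape has
    drifted by an offset lam in every dimension (equation 5):
    f_{n+1}(x, y) = f_n(x - λ, y - λ)."""
    return lambda x, y: f(x - lam, y - lam)

# Example: after a drift of λ = 0.05 the sphere function's minimum,
# originally at (0, 0), sits at (0.05, 0.05).
sphere = lambda x, y: x ** 2 + y ** 2
moved = drifted(sphere, 0.05)
```

Applying drifted repeatedly between iterations reproduces the steady directional drift the test tool simulates.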

TABLE I. FUNCTIONS USED FOR TESTING

Function         Formula
Sphere           f(x, y) = x^2 + y^2
Step             f(x, y) = |x| + |y|
Rastrigin        f(x, y) = 20 + x^2 + y^2 − 10(cos(2πx) + cos(2πy))
Rosenbrock       f(x, y) = (1 − x)^2 + 100(y − x^2)^2
Arbitrary Peaks  f(x, y) = 1 − [3(1 − x)^2 e^(−x^2 − (y+1)^2) + 10(x/5 − x^3 − y^5) e^(−(x^2 + y^2)) − (1/3) e^(−(x+1)^2 − y^2)]


For all functions the offset was varied in the range [0.01, 0.09]. Based on [15], in all experiments the inertia weight w was set to 0.729844, and c1 and c2 were set to 1.49618 to encourage convergent behavior. The stagnant particles are selected randomly at run time.

Tables 2 and 3 show a comparison between the standard PSO and the Drift Predictive PSO in a dynamic environment. The error percentage is calculated on the basis of the actual minima and the minima detected by the PSO. All results are averages of 20 different runs.

TABLE II. RESULTS OF DIFFERENT PSOS IN A DYNAMIC SCENARIO USING 25 PARTICLES

Percent error in finding global minima
Function         Standard PSO   Drift Predictive PSO
Sphere           6.799%         2.571%
Step             9.847%         2.091%
Rastrigin        29.900%        9.143%
Rosenbrock       24.616%        3.592%
Arbitrary Peaks  27.629%        5.126%

TABLE III. RESULTS OF DIFFERENT PSOS IN A DYNAMIC SCENARIO USING 35 PARTICLES

Percent error in finding global minima
Function         Standard PSO   Drift Predictive PSO
Sphere           5.021%         1.871%
Step             8.268%         1.438%
Rastrigin        25.728%        7.895%
Rosenbrock       21.616%        2.332%
Arbitrary Peaks  25.744%        4.661%

V. CONCLUSION

The experimental results presented in this paper show that the proposed Drift Predictive PSO gives accurate results for drifting problem spaces, while remaining stable and incurring low computational cost.

ACKNOWLEDGEMENTS

We are grateful to Tuhin Bhuyan of Jorhat Engineering College, India, who helped us in designing the class structure of the PSO Test Tool.

REFERENCES

[1] J. Kennedy, R. C. Eberhart and Y. Shi, Swarm Intelligence, The Morgan Kaufmann Series in Evolutionary Computation. San Francisco: Morgan Kaufmann Publishers, 2001, p. 287.

[2] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", Proc. of the Sixth Int. Symp. on Micro Machine and Human Science, Nagoya, Japan, pp. 39-43, 1995.

[3] R. C. Eberhart and J. Kennedy, "Particle swarm optimization", Proc. IEEE Int. Conf. on Neural Networks, Piscataway, NJ: IEEE Press, vol. IV, pp. 1942-1948, 1995.

[4] F. van den Bergh, "An Analysis of Particle Swarm Optimizers," PhD dissertation, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.

[5] E. Papacostantis, "Coevolving probabilistic game playing agents using particle swarm optimization algorithms", IEEE Symp. on Computational Intelligence for Financial Engineering and Economics, 2011.

[6] L. Kezhong and W. Ruchuan, "Application of PSO and QPSO algorithm to estimate parameters from kinetic model of glutamic acid batch fermentation", 7th World Congr. on Intelligent Control and Automation, 2008.

[7] E. Assareh, M. A. Behrang, M. R. Assari and A. Ghanbarzadeh, "Application of PSO and GA techniques on demand estimation of oil in Iran", The 3rd Int. Conf. on Sustainable Energy and Environmental Protection (SEEP '09), 2009.

[8] X. Hu and R. C. Eberhart, "Adaptive particle swarm optimization: response to dynamic systems", Proc. of the 2002 Congr. on Evolutionary Computation, 2002.

[9] X. Hu and R. C. Eberhart, "Tracking dynamic systems with PSO: where's the cheese?", Proc. of the Workshop on Particle Swarm Optimization, Purdue School of Engineering and Technology, Indianapolis, 2001.

[10] T. M. Blackwell and P. J. Bentley, "Dynamic search with charged swarms", in Proc. of the Genetic and Evolutionary Computation Conf. (GECCO '02), Morgan Kaufmann Publishers, 2002, pp. 9-13.

[11] A. Rakitianskaia and A. P. Engelbrecht, "Cooperative charged particle swarm optimiser", IEEE Congr. on Evolutionary Computation 2008 (CEC 2008), pp. 933-939.

[12] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization", IEEE Trans. on Evol. Comput., vol. 8, no. 3, pp. 225-239, June 2004.

[13] A. B. Hashemi and M. R. Meybodi, "Cellular PSO: a PSO for dynamic environments," Proc. of the 4th Int. Symp. on Intelligence Computation and Applications (ISICA 2009), Huangshi, China, 2009.

[14] A. B. Hashemi and M. R. Meybodi, "A multi-role cellular PSO for dynamic environments", Proc. of the 14th International CSI Computer Conference (CSICC '09), 2009.

[15] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization", in Proc. of the Congr. on Evolutionary Computation, pp. 84-88, 2000.