Particle Swarm Optimization

A/Prof. Xiaodong Li
School of Computer Science and IT, RMIT University, Melbourne, Australia
Email: xiaodong.li@rmit.edu.au

Nov 2013



Outline

- Background on Swarm Intelligence
- Introduction to Particle Swarm Optimization (PSO)
- Original PSO, inertia weight, constriction coefficient
- Particle trajectories: simplified PSO with one or two particles
- Convergence aspects
- FIPS, Bare-bones, and other PSO variants
- Communication topologies
- Further information


Swarm Intelligence

Swarm intelligence (SI) is an artificial intelligence technique based on the study of collective behaviour in decentralized, self-organized systems.

SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behaviour. Examples of such systems can be found in nature, including ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling (from Wikipedia).


Swarm Intelligence: Mind is social…

Human intelligence results from social interaction: evaluating, comparing, and imitating one another, learning from experience and emulating the successful behaviours of others, people are able to adapt to complex environments through the discovery of relatively optimal patterns of attitudes, beliefs, and behaviours (Kennedy & Eberhart, 2001).

Culture and cognition are inseparable consequences of human sociality: culture emerges as individuals become more similar through mutual social learning. The sweep of culture moves individuals toward more adaptive patterns of thought and behaviour.

To model human intelligence, we should model individuals in a social context, interacting with one another.


Particle Swarm Optimization

The inventors: James Kennedy and Russell Eberhart


Particle Swarm Optimization

PSO has its roots in Artificial Life and social psychology, as well as engineering and computer science.

Particle swarms are in some ways closely related to cellular automata (CA):

a) individual cell updates are done in parallel

b) each new cell value depends only on the old values of the cell and its neighbours, and

c) all cells are updated using the same rules (Rucker, 1999).

Individuals in a particle swarm can be conceptualized as cells in a CA, whose states change in many dimensions simultaneously.

[Figures: the "Blinker" and "Glider" CA patterns.]


Particle Swarm Optimization

As described by the inventors James Kennedy and Russell Eberhart, "the particle swarm algorithm imitates human (or insect) social behaviour. Individuals interact with one another while learning from their own experience, and gradually the population members move into better regions of the problem space".

Why "particles", not "points"? Both Kennedy and Eberhart felt that velocities and accelerations are more appropriately applied to particles.


PSO applications

Problems with continuous, discrete, or mixed search spaces, with multiple local minima; problems with constraints; multiobjective and dynamic optimization.

Evolving neural networks:
• Human tumor analysis;
• Computer numerically controlled milling optimization;
• Battery pack state-of-charge estimation;
• Real-time training of neural networks (diabetes among Pima Indians);
• Servomechanism (time series prediction optimizing a neural network).

Other applications: reactive power and voltage control; ingredient mix optimization; pressure vessel design (a container of compressed air, with many constraints); compression spring design (a cylindrical compression spring with certain mechanical characteristics); Moving Peaks (a dynamic environment with multiple peaks); and more.

PSO can be tailor-designed to deal with specific real-world problems.


Original PSO

v_i ← v_i + φ1⊗(p_i − x_i) + φ2⊗(p_g − x_i)
x_i ← x_i + v_i

where φ1 = c1 r1 and φ2 = c2 r2;
r1 and r2 are two vectors of random numbers uniformly chosen from [0, 1];
c1 and c2 are acceleration coefficients.

x_i denotes the current position of the i-th particle in the swarm;
v_i denotes the velocity of the i-th particle;
p_i denotes the best position found by the i-th particle so far, i.e., the personal best;
p_g denotes the best position found in the particle's neighbourhood, i.e., the global best;
the symbol ⊗ denotes a point-wise vector multiplication.
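Under this update rule, a single step for one particle can be sketched in NumPy; all vectors and coefficient values below are illustrative, not prescribed by the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = c2 = 2.0                        # acceleration coefficients (illustrative)

x_i = np.array([1.0, -3.0])          # current position of particle i
v_i = np.array([0.5, 0.5])           # current velocity
p_i = np.array([0.0, -2.0])          # personal best
p_g = np.array([0.0, 0.0])           # neighbourhood (global) best

phi1 = c1 * rng.random(2)            # phi1 = c1*r1, r1 uniform in [0, 1]
phi2 = c2 * rng.random(2)            # phi2 = c2*r2, r2 uniform in [0, 1]

# point-wise products, as in the velocity update rule
v_i = v_i + phi1 * (p_i - x_i) + phi2 * (p_g - x_i)
x_i = x_i + v_i
```

Note that `*` on NumPy arrays is exactly the point-wise multiplication ⊗ of the update rule.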


Original PSO

Velocity v_i (which denotes the amount of change) of the i-th particle is determined by three components:

v_i ← v_i + φ1⊗(p_i − x_i) + φ2⊗(p_g − x_i)
x_i ← x_i + v_i

momentum – the previous velocity term v_i, which carries the particle in the direction it has travelled so far;

cognitive component – φ1⊗(p_i − x_i), the tendency to return to the best position visited so far;

social component – φ2⊗(p_g − x_i), the tendency to be attracted towards the best position found in its neighbourhood.

Neighbourhood topologies can be used to control information propagation between particles, e.g., ring, star, or von Neumann. lbest and gbest PSOs.


Pseudo-code of a basic PSO

Randomly generate an initial population
repeat
    for i = 1 to population_size do
        if f(x_i) < f(p_i) then p_i = x_i;
        p_g = min(p_neighbours);
        for d = 1 to dimensions do
            velocity_update();
            position_update();
        end
    end
until termination criterion is met.
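The pseudo-code above can be fleshed out into a minimal runnable sketch. This version assumes a gbest topology and an inertia weight with the commonly quoted parameters; the sphere objective, bounds, and swarm size are illustrative choices, not prescribed by the slides:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7298, c1=1.49618, c2=1.49618,
        lo=-5.0, hi=5.0, seed=1):
    rnd = random.Random(seed)
    # random initial positions, zero initial velocities
    x = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                        # personal best positions
    p_val = [f(xi) for xi in x]                    # personal best fitnesses
    g = min(range(n_particles), key=lambda i: p_val[i])   # global best index
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (p[g][d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < p_val[i]:                      # update personal best
                p[i], p_val[i] = x[i][:], fx
                if fx < p_val[g]:                  # update global best
                    g = i
    return p[g], p_val[g]

sphere = lambda xs: sum(xd * xd for xd in xs)      # illustrative objective
best, best_val = pso(sphere, dim=2)
```

On the 2-D sphere function this sketch converges to a point very close to the origin.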


Inertia weight

The φ1⊗(p_i − x_i) and φ2⊗(p_g − x_i) terms can be collapsed into a single term without losing any information:

v_i ← v_i + φ⊗(p − x_i)
x_i ← x_i + v_i

where φ = φ1 + φ2 and p = (φ1⊗p_i + φ2⊗p_g) / (φ1 + φ2).

p represents the weighted average of p_i and p_g. Note that the division operator here is a point-wise vector division.

Since the velocity term tends to keep the particle moving in the same direction as its previous flight, a coefficient, the inertia weight w, can be used to control this influence:

v_i ← w·v_i + φ1⊗(p_i − x_i) + φ2⊗(p_g − x_i)

The inertia-weighted PSO can converge under certain conditions without using Vmax.


Inertia weight

The inertia weight can be used to control exploration and exploitation:

For w ≥ 1: velocities increase over time and the swarm diverges;
For 0 < w < 1: particles decelerate; convergence depends on the values of c1 and c2;
For w < 0: velocities decrease over time, eventually reaching 0: convergence behaviour.

Empirical results suggest that a constant inertia weight w = 0.7298 with c1 = c2 = 1.49618 provides good convergence behaviour.

Eberhart and Shi also suggested using an inertia weight that decreases over time, typically from 0.9 to 0.4. This has the effect of narrowing the search, gradually changing from an exploratory to an exploitative mode.
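The decreasing schedule can be sketched as a linear interpolation. The linear form is an assumption (the suggestion only fixes the endpoints 0.9 and 0.4), and `inertia` is a hypothetical helper name:

```python
def inertia(t, t_max, w_start=0.9, w_end=0.4):
    # linearly interpolate from w_start at t = 0 down to w_end at t = t_max
    return w_start - (w_start - w_end) * t / t_max

schedule = [inertia(t, 100) for t in range(101)]   # w for each of 100 iterations
```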


Visualizing PSO

[Figure: vector diagram of a single update step, showing how the updated velocity and the updated position x_i are obtained by adding the momentum v_i, the attraction towards p_i, and the attraction towards p_g.]


Constriction PSO

Clerc and Kennedy (2000) suggested a general PSO in which a constriction coefficient χ is applied to both terms of the velocity formula. The Constriction Type 1'' PSO is equivalent to the inertia-weighted PSO:

v_i ← χ(v_i + φ1⊗(p_i − x_i) + φ2⊗(p_g − x_i))
x_i ← x_i + v_i

where χ = 2k / |2 − φ − √(φ² − 4φ)| is the constriction factor, with φ = c1 + c2 (φ > 4), φ1 = c1 r1, and φ2 = c2 r2.

Typically k is set to 1 and c1 = c2 = 2.05, giving a constriction coefficient of 0.7298 (Clerc and Kennedy, 2002).

If φ > 4 and k is in [0, 1], then the swarm is guaranteed to converge. k controls the balance between exploration and exploitation.
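As a quick check of the formula, the constriction coefficient can be computed directly; `constriction` is a hypothetical helper name:

```python
import math

def constriction(c1=2.05, c2=2.05, k=1.0):
    phi = c1 + c2                  # must satisfy phi > 4
    return 2.0 * k / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()               # with the typical c1 = c2 = 2.05, k = 1
```

With k = 1 and c1 = c2 = 2.05 this reproduces the commonly quoted value χ ≈ 0.7298.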


Particle Trajectory

Question: how important are the interactions between particles in a PSO?

To answer this question, we can study a simplified PSO and look at scenarios where the swarm is reduced to only one or two particles. This simplified PSO assumes: no stochastic component; one dimension; pre-specified initial position and velocity. The update rules become:

v ← w·v + c1(p_i − x) + c2(p_g − x)
x ← x + v

Consider the Parabola 1D function f(x) = x², defined on [−20, 20]. We have two cases:

1) The first two positions are on the same side of the minimum (initial position x = −20, v = 3.2);
2) The first two positions frame the minimum (initial position x = −2, v = 6.4).

In the following examples, we assume w = 0.7 and c1 = c2 = 0.7. Note that even with just one particle, we actually know two positions, x and p_i.

Acknowledgement: this example was taken, with some modifications, from Clerc's book "Particle Swarm Optimization".
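The deterministic single-particle system can be simulated in a few lines. This is a sketch under the stated assumptions (`trajectory` is a hypothetical helper; with one particle, p_i and p_g coincide):

```python
def trajectory(x, v, w=0.7, c=0.7, steps=50):
    f = lambda z: z * z            # the Parabola 1D function
    p = x                          # best position so far; here p_i = p_g
    xs = [x]
    for _ in range(steps):
        v = w * v + c * (p - x) + c * (p - x)   # both attractors coincide
        x = x + v
        if f(x) < f(p):
            p = x                  # update the (single) best position
        xs.append(x)
    return xs

case1 = trajectory(-20.0, 3.2)     # both early positions left of the minimum
case2 = trajectory(-2.0, 6.4)      # first two positions frame the minimum
```

In Case 1 the personal best always equals x, so the attraction terms vanish and the particle stalls short of the minimum; in Case 2 the oscillation around the minimum lets it converge.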


Particle Trajectory (one particle)

[Figures: fitness f(x) = x² versus x for the two cases.]

Case 1: The first two positions are on the same side of the minimum.

Since personal best is always equal to x, the particle is unable to reach the minimum (premature convergence).

Case 2: The first two positions frame the minimum.

The particle oscillates around the minimum; the personal best is not always equal to x, resulting in a better convergence behaviour.


Particle Trajectory (one particle)

[Figures: phase-space plots of velocity versus position x for the two cases.]

Case 1: The first two positions are on the same side of the minimum.

The phase-space graph shows v reaching 0 too early, resulting in premature convergence.

Case 2: The first two positions frame the minimum.

The phase-space graph shows v taking both positive and negative values (spiralling convergence behaviour).


Particle Trajectory (two particles)


Graph of influence. In this case, we have two explorers and two memories. Each explorer receives information from the two memories, but informs only one (Clerc, 2006).


Particle Trajectory (two particles)

[Figures: fitness versus x for the two cooperating particles, started as in Cases 1 and 2.]

Now we have two particles (two explorers and two memories). The starting positions for the two particles are the same as in Cases 1 and 2, but now the particles are working together (Clerc, 2006).

Note, however, that here memory 2 is always better than memory 1, so the course of explorer 2 is exactly the same as in Case 2 (right-hand figure). Explorer 1, on the other hand, benefits from the information provided by memory 2, i.e., it ends up converging (left-hand figure).


Particle Trajectory (two particles)

[Figures: fitness versus x for the two explorers.]

Two explorers and two memories. This is the more general case, where each explorer is from time to time influenced by the memory of the other, when that memory is better than its own. Convergence is more probable, though it may be slower.



Particle Trajectory (two particles)

Two explorers and two memories. Particle trajectories in the Phase space. The two particles help each other to enter and remain in the oscillatory process that allows convergence towards the optimum.



Potentially Dangerous Property

What happens when x_i = p_i = p_g?

Then the velocity update depends only on w·v_i.

If this condition persists for a number of iterations, w·v_i → 0 and the particle stops moving.

Solution: let the global best particle perform a local search, and use mutation to break the condition.
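A few lines make the stagnation visible: with both attraction terms zero, the update reduces to v ← w·v, a geometric decay (the starting velocity is illustrative):

```python
w = 0.7298
v = 1.0                  # some leftover velocity (illustrative value)
vs = [v]
for _ in range(50):
    v = w * v            # both attraction terms vanish when x_i = p_i = p_g
    vs.append(v)
```

After 50 iterations the velocity has shrunk by a factor of w^50, i.e., effectively to zero.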


Fully Informed Particle Swarm (FIPS)

The previous velocity equation shows that a particle tends to converge towards a point determined by p, which is a weighted average of its previous best p_i and the neighbourhood's best p_g. p can be further generalized to any number of terms:

p = (Σ_{k∈N} φ_k⊗p_k) / (Σ_{k∈N} φ_k), with φ_k ~ U[0, φ_max/|N|]

where N denotes the neighbourhood and p_k the best previous position found by the k-th particle in N. If the size of N equals 2, p_1 = p_i and p_2 = p_g, then the above is a generalization of the canonical PSO.

A significant implication of the above generalization is that it allows us to think more freely about employing terms of influence other than just p_i and p_g.
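A fully informed velocity update for one dimension might be sketched as follows, assuming the constricted form with χ = 0.7298 and φ_max = 4.1; `fips_step` is a hypothetical helper name:

```python
import random

def fips_step(x, v, neighbour_bests, chi=0.7298, phi_max=4.1, rnd=random):
    # each neighbour's best position p_k contributes, weighted by
    # phi_k drawn uniformly from [0, phi_max / |N|]
    n = len(neighbour_bests)
    acc = 0.0
    for p_k in neighbour_bests:
        phi_k = rnd.uniform(0.0, phi_max / n)
        acc += phi_k * (p_k - x)
    v_new = chi * (v + acc)
    return x + v_new, v_new
```

When every neighbour best equals the current position, all attraction terms vanish and the update reduces to v ← χ·v, mirroring the stagnation condition discussed earlier.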


Bare Bones PSO

What if we drop the velocity term? Is it necessary?

Kennedy (2003) carried out some experiments using a PSO variant which drops the velocity term from the PSO equation:

x_id ~ N((p_id + p_gd)/2, |p_id − p_gd|)

This bare-bones PSO produces normally distributed random numbers around the mean (p_id + p_gd)/2 (for each dimension d), with the standard deviation of the Gaussian distribution being |p_id − p_gd|.

If p_i and p_g were kept constant, a canonical PSO samples the search space following a bell-shaped distribution centred exactly between p_i and p_g.
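The bare-bones sampling rule for a single dimension can be sketched directly; the helper name, the sample count, and the endpoint values are illustrative:

```python
import random

def bare_bones_sample(p_id, p_gd, rnd):
    mean = (p_id + p_gd) / 2.0     # midpoint of personal and global best
    sd = abs(p_id - p_gd)          # their separation as standard deviation
    return rnd.gauss(mean, sd)

rnd = random.Random(7)
samples = [bare_bones_sample(-1.0, 3.0, rnd) for _ in range(10000)]
```

With p_id = −1 and p_gd = 3, the samples cluster around 1 with a spread of 4, i.e., a bell curve centred between the two bests.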


Binary PSO

The position update changes to:

x_ij(t+1) = 1 if U(0,1) < sig(v_ij(t+1)), and 0 otherwise

where sig(v) = 1/(1 + e^(−v)).
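The binary update can be sketched directly from the two formulas; the function names are illustrative:

```python
import math
import random

def sig(v):
    # logistic function: sig(v) = 1 / (1 + e^-v)
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(v_ij, rnd):
    # the bit becomes 1 with probability sig(v_ij), else 0
    return 1 if rnd.random() < sig(v_ij) else 0

bit = binary_position_update(0.0, random.Random(3))   # 50/50 at v = 0
```

Large positive velocities push the bit towards 1, large negative velocities towards 0.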


Some PSO variants

Tribes (Clerc, 2006) – aims to adapt the population size so that it does not have to be set by the user; Tribes has also been used for discrete or mixed (discrete/continuous) problems.

ARPSO (Riget and Vesterstorm, 2002) – uses a diversity measure to alternate between two phases;

Dissipative PSO (Xie, et al., 2002) – increasing randomness;

PSO with self-organized criticality (Lovbjerg and Krink, 2002) – aims to improve diversity;

Self-organizing Hierarchical PSO (Ratnaweera, et al., 2004);

FDR-PSO (Veeramachaneni, et al., 2003) – uses nearest-neighbour interactions;

PSO with mutation (Higashi and Iba, 2003; Stacey, et al., 2004);

Cooperative PSO (van den Bergh and Engelbrecht, 2005) – a cooperative approach;

DEPSO (Zhang and Xie, 2003) – aims to combine DE with PSO;

CLPSO (Liang, et al., 2006) – incorporates learning from more previous best particles.


Communication topologies (1)

Two most common models:

gbest: each particle is influenced by the best found from the entire swarm.

lbest: each particle is influenced only by particles in local neighbourhood.
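The lbest model with a ring topology can be sketched as follows (hypothetical helper names; minimization assumed, so "best" means lowest fitness):

```python
def ring_neighbourhood(i, n):
    # particle i plus its two immediate neighbours on the ring
    return [(i - 1) % n, i, (i + 1) % n]

def lbest_index(i, p_vals):
    # index of the best (lowest) personal-best value in i's neighbourhood
    n = len(p_vals)
    return min(ring_neighbourhood(i, n), key=lambda j: p_vals[j])

best = lbest_index(0, [5.0, 1.0, 9.0, 0.5])   # particle 0's lbest on a ring of 4
```

In a gbest swarm, by contrast, the neighbourhood of every particle is the entire swarm, so information about a good position spreads in a single iteration.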


Communication topologies (2)

[Figure: graph of influence of a swarm of 7 particles. For each arc, the origin particle influences (informs) the end particle (Clerc, 2006).]

This graph of influence can be also expanded to include previous best positions (i.e., memories).


Communication topologies (3)

[Figures: global, island model, and fine-grained communication topologies.]


Communication topologies (4)

Which one to use? It is a balance between exploration and exploitation…

The gbest model propagates information the fastest in the population, while the lbest model using a ring structure is the slowest. For complex multimodal functions, propagating information the fastest might not be desirable; however, if propagation is too slow, it might incur higher computational cost.

Mendes and Kennedy (2002) found that the von Neumann topology (north, south, east and west of each particle placed on a two-dimensional lattice) seems to be an overall winner among many different communication topologies.


Readings on PSO

Riccardo Poli, James Kennedy, and Tim Blackwell (2007), "Particle swarm optimization - An Overview", Swarm Intelligence, 1: 33–57

Kennedy, J. Eberhart, R.C., and Shi, Y. (2001), Swarm Intelligence, New York: Morgan Kaufmann Publishers.