4/27/2016
PARTICLE SWARM OPTIMIZATION
Particle Swarm Optimization
- Proposed by James Kennedy & Russell Eberhart (1995)
- Applications: Traveling Salesman Problem, Vehicle Routing, Quadratic Assignment Problem, Internet Routing, Logistic Scheduling, ...
- There are also applications of PSO to clustering and data mining problems.
- Inspired by the social behavior and dynamic, communicating movements of natural swarms: insects, birds, fish, etc.
Swarm behavior
In 1986, Craig Reynolds described flocking with three simple behaviors:
- Separation – avoid crowding local flockmates;
- Alignment – move towards the average heading of local flockmates;
- Cohesion – move toward the average position of local flockmates.
Definitions
- Particles – moving candidate solutions
- Swarm – the collection of flying particles
- Search area – the space of possible solutions
- Position and velocity – govern each particle's movement towards a promising area, and eventually the global optimum
- Each particle keeps track of:
  - the best solution it has found itself, the personal best, pbest
  - the best value found by any particle, the global best, gbest
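The bookkeeping above can be sketched as a small data structure; this is a minimal Python illustration (the class name, fields, and `record` method are illustrative choices, not from the slides), assuming a minimization problem:

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    position: list                                 # current candidate solution
    velocity: list                                 # current movement in the search space
    pbest: list = field(default_factory=list)      # best position this particle has seen
    pbest_cost: float = float("inf")               # cost at pbest (lower is better)

    def record(self, cost):
        """Track the personal best after each evaluation of the objective."""
        if cost < self.pbest_cost:
            self.pbest_cost = cost
            self.pbest = list(self.position)
```

The global best gbest is then simply the pbest with the lowest pbest_cost across the whole swarm.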
Artificial swarm
- Swarm foraging:
  - Uses a number of agents (particles) that constitute a swarm moving around the search space looking for the best solution.
- Swarm movement:
  - Each particle in the search space adjusts its "movement" according to its own moving experience as well as the moving experience of the other particles.
- Swarm management:
  - Combines self-experience with social experience.
Particle movement
- Each particle adjusts its travelling speed dynamically according to the flying experience of itself and of its colleagues.
- Each particle modifies its position according to:
  - its current position
  - its current velocity
  - the distance between its current position and pbest
  - the distance between its current position and gbest
Parameters
- P : population of agents
- xi : position of agent pi in the solution space
- vi : velocity of agent pi
- f : objective function
- V(pi) : neighborhood of agent pi (fixed)
- The neighborhood concept in PSO differs from other metaheuristics: in PSO each particle's neighborhood is fixed and never changes.
Algorithm
1. Initialize positions and velocities for the population P
2. Evaluate each particle pi in the swarm
3. If f(xi) is better than f(pbest), then pbest = xi
4. If f(xi) is better than f(gbest), then gbest = xi (best xi in P)
5. Update velocities and positions:
   vi = w vi + c1 q (pbest − xi)/Δt + c2 r (gbest − xi)/Δt
   xi = xi + vi Δt
6. Restrict the velocities
7. Return to step 2 until finished
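Steps 1–7 can be sketched in Python. This is a minimal illustration with Δt = 1 and assumed parameter values (the defaults for w, c1, c2, vmax, the bounds, and the iteration budget are illustrative choices, not prescribed by the slides):

```python
import random

def pso(f, dim, n_particles=25, iters=300, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), vmax=1.0, seed=0):
    """Minimize f over [bounds]^dim with a basic global-best PSO (Δt = 1)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Step 1: initialize positions and velocities
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[rng.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]                       # step 2: evaluate
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                q, r = rng.random(), rng.random()      # random factors in [0, 1]
                # Step 5: inertia + cognitive (pbest) + social (gbest) terms
                vs[i][d] = (w * vs[i][d]
                            + c1 * q * (pbest[i][d] - xs[i][d])
                            + c2 * r * (gbest[d] - xs[i][d]))
                # Step 6: restrict the velocity
                vs[i][d] = max(-vmax, min(vmax, vs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:                        # step 3: personal best
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:                       # step 4: global best
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

For example, `pso(lambda x: sum(t * t for t in x), dim=2)` drives the swarm toward the sphere function's minimum at the origin.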
Particle update rule
- xij : particle position
- vij : particle velocity (direction)
- w : inertia weight (controls convergence "velocity")
- c1 : cognitive learning factor (weight of local information)
- c2 : social learning factor (weight of global information)
- pbest : best position found by the particle
- gbest : best position found by the swarm
- r and q : random variables uniform in [0, 1]
Particle update rule
- Intensification: exploits previously found solutions to locate the best solution within a given region
- Diversification: searches for new solutions, to find the regions containing potentially the best solutions
- In the velocity update, the inertia term promotes diversification, while the attraction terms towards pbest and gbest promote intensification
Typical parameters
- Number of particles P is usually between 10 and 50
- c1 weighs the importance of the personal best value
- c2 weighs the importance of the neighborhood best value
- Usually c1 + c2 = 4 (value chosen empirically)
- If velocity v is too low → the algorithm is too slow
- If velocity v is too high → the algorithm is too unstable
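The velocity restriction in step 6 of the algorithm is commonly implemented by clamping each component to a range [−vmax, vmax], balancing the too-slow/too-unstable trade-off above; a minimal sketch (the function name and vmax value are illustrative, and vmax is a problem-dependent choice):

```python
def clamp_velocity(v, vmax):
    """Restrict each velocity component to [-vmax, vmax]."""
    return [max(-vmax, min(vmax, vi)) for vi in v]
```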
Characteristics
- Advantages
  - Insensitive to scaling of design variables
  - Simple implementation
  - Easily parallelized for concurrent processing
  - Derivative-free
  - Very few algorithm parameters
  - Very efficient global search algorithm
- Drawbacks
  - Fast but premature convergence to mid-optimum (local) points
  - Slow convergence in the refined search stage (weak local search ability)
Different approaches
- 2-D Otsu PSO
- Active Target PSO
- Adaptive PSO
- Adaptive Mutation PSO
- Adaptive PSO Guided by Acceleration Information
- Attractive Repulsive Particle Swarm Optimization
- Binary PSO
- Cooperative Multiple PSO
- Dynamic and Adjustable PSO
- ...

Davoud Sedighizadeh and Ellips Masehian, "Particle Swarm Optimization Methods, Taxonomy and Applications", International Journal of Computer Theory and Engineering, Vol. 1, No. 5, December 2009.
Toolbox
- MatLab toolbox (PSOt by Brian Birge):
  http://www.mathworks.com/matlabcentral/fileexchange/7506-particle-swarm-optimization-toolbox
- Schaffer function optimization:
  - Minimization problem
  - P = 25 particles
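The slides do not state which Schaffer variant is used; a common choice in PSO demonstrations is the Schaffer F6 function, whose global minimum is 0 at the origin (consistent with the Matlab results below, where both inputs converge to ≈0 with cost 0). A sketch, assuming this variant:

```python
import math

def schaffer_f6(x, y):
    """Schaffer F6: global minimum f(0, 0) = 0, surrounded by circular ripples
    that make it a hard test case for local search."""
    num = math.sin(math.sqrt(x * x + y * y)) ** 2 - 0.5
    den = (1.0 + 0.001 * (x * x + y * y)) ** 2
    return 0.5 + num / den
```

The concentric ripples are what make a population-based global search such as PSO attractive here: a single gradient-based descent would stall on one of the rings.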
Schaffer function example
(figures in the original slides: plots of the Schaffer function and of the swarm converging)

Matlab results
Best fit parameters:
---------------------------------
input1 = 1.6245e-09
input2 = 1.6957e-09
cost = 0
mean cost = 0.005901
# of epochs = 355