Topic: Particle Swarm Optimization
Presented By: Syed Ibrahim Ahmed (12024119-085), Qasim Rehman (12024119-138), Faisal Arif (12024119-087)
Particle Swarm Optimization
Goal of Optimization
Find values of the variables that minimize or maximize the objective function while satisfying the constraints.
How to Optimize
We can find the optimal solution to a problem by:
• Generating many candidate solutions to the problem.
• Computing the cost (complexity) of each one.
• Comparing every solution's cost with the others.
• Selecting the best option that satisfies the constraints.
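The steps above amount to brute-force optimization: evaluate every candidate, discard infeasible ones, and keep the best. A minimal sketch, with a hypothetical objective and constraint:

```python
# Brute-force optimization sketch: enumerate candidate solutions,
# evaluate each against the objective, keep the feasible best.
# The objective and constraint below are illustrative examples.

def objective(x):
    return (x - 3) ** 2          # minimize a simple quadratic

def feasible(x):
    return 0 <= x <= 10          # example constraint

candidates = range(11)           # many candidate solutions
best = min((x for x in candidates if feasible(x)), key=objective)
print(best)  # -> 3
```

This works only when the candidate set is small enough to enumerate, which motivates the techniques below.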
Types of Optimization Techniques
There are three major types of optimization techniques:
1. Calculus-Based Techniques: all numeric, gradient-based problems fall under this category.
2. Enumerative Techniques: evaluate every point of the finite (or discretized infinite) search space in order to arrive at the optimal solution. Dynamic programming is the best-known example.
3. Random Techniques: like enumerative techniques, but they visit the solution space at random positions. Random choice is used as a tool to guide a highly explorative search through a coding of the parameter space, as in genetic algorithms. Guided random search methods are useful in problems where the search space is huge and complex. Particle swarm optimization belongs to the random techniques.
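The simplest random technique is plain random search: sample points from the search space and keep the best seen. A minimal sketch, with a hypothetical objective and bounds:

```python
import random

# Random-search sketch: sample random points from the search space
# and keep the best found. The function and bounds are examples.

def f(x):
    return x * x + 2 * x + 1     # (x + 1)^2, minimum at x = -1

random.seed(0)
best_x = min((random.uniform(-5, 5) for _ in range(10000)), key=f)
print(best_x)                    # close to -1
```

Guided methods such as genetic algorithms and PSO improve on this by using information from previous samples to steer where the next ones land.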
Particle Swarm Optimization
Particle swarm optimization is based on swarm intelligence.
Swarm Intelligence
• SI is a branch of artificial intelligence based on the collective behavior of decentralized, self-organized systems.
• The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
• SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment.
• Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
Swarm Intelligence Applications
• The U.S. military is investigating swarm techniques for controlling unmanned vehicles, such as drones.
• NASA is investigating the use of swarm technology for planetary mapping.
Particle Swarm Optimization
• The PSO algorithm was first described in 1995 by James Kennedy and Russell C. Eberhart, inspired by the social behavior of bird flocking and fish schooling.
“PSO is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult or impossible numeric maximization and minimization problems.”
• Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles.
• It is a simple algorithm, easy to implement, with few parameters to adjust (mainly the velocity).
Particle Swarm Optimization: How It Works
• PSO is initialized with a group of random particles (solutions) and then searches for the optimum by updating generations.
• Particles move through the solution space and are evaluated according to some fitness criterion after each time step. In every iteration, each particle is updated by following two "best" values.
Particle Swarm Optimization: How It Works
PSO searches the hyperspace of the problem for the optimum:
• Define the problem to search
  – How many dimensions?
  – Solution criteria?
• Initialize the population
  – Random initial positions
  – Random initial velocities
• Determine the global best position
• Determine the personal best position
• Update the velocity and position equations
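The initialization steps above can be sketched as follows; the swarm size, dimensions, bounds, and objective (the sphere function) are illustrative choices, not values from the slides:

```python
import random

# Initialize a swarm with random positions and velocities, then record
# the initial personal and global bests.

random.seed(0)
DIMS, N = 2, 5                       # problem dimensions, swarm size

def fitness(p):                      # example objective: sphere function
    return sum(x * x for x in p)

positions = [[random.uniform(-5, 5) for _ in range(DIMS)] for _ in range(N)]
velocities = [[random.uniform(-1, 1) for _ in range(DIMS)] for _ in range(N)]
pbest = [p[:] for p in positions]    # personal best = initial position
gbest = min(pbest, key=fitness)      # global best across the swarm
print(fitness(gbest))
```

From here, each iteration applies the velocity and position updates given below.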
Best Values in PSO
• The first is the best solution (fitness) the particle has achieved so far (the fitness value is also stored). This value is called pbest.
• Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population. This second best value is a global best, called gbest.
Particle Swarm Optimization
Each particle tries to modify its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.
CurrentPosition[n+1] = CurrentPosition[n] + v[n+1]
CurrentPosition[n+1]: position of the particle at the (n+1)th iteration
CurrentPosition[n]: position of the particle at the nth iteration
v[n+1]: particle velocity at the (n+1)th iteration
v[n+1]: velocity of the particle at the (n+1)th iteration
v[n]: velocity of the particle at the nth iteration
c1: acceleration factor related to pbest
c2: acceleration factor related to gbest
rand1(): random number between 0 and 1
rand2(): random number between 0 and 1
gbest: gbest position of the swarm
pbest: pbest position of the particle
Swarm Intelligence Algorithm
v[n+1] = v[n] + c1 * rand1() * (pbest[n] - CurrentPosition[n]) + c2 * rand2() * (gbest[n] - CurrentPosition[n])
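One combined velocity-and-position update for a single one-dimensional particle can be sketched as follows; the values of c1, c2 and the particle state are illustrative:

```python
import random

# One PSO update step for a 1-D particle, following the update equations:
#   v[n+1] = v[n] + c1*rand1()*(pbest - x[n]) + c2*rand2()*(gbest - x[n])
#   x[n+1] = x[n] + v[n+1]

def update(x, v, pbest, gbest, c1=2.0, c2=2.0):
    v_new = (v
             + c1 * random.random() * (pbest - x)
             + c2 * random.random() * (gbest - x))
    return x + v_new, v_new

random.seed(1)
x, v = update(x=0.0, v=0.1, pbest=1.0, gbest=2.0)
print(x, v)   # particle is pulled toward pbest and gbest
```

Because both pbest and gbest lie to the right of the particle here, the new velocity is positive and the particle moves toward them.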
For each particle
    Initialize particle with feasible random numbers
End
Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set current value as the new pbest
    End
    Choose the particle with the best fitness value of all the particles as the gbest
    For each particle
        Calculate particle velocity according to the velocity update equation
        Update particle position according to the position update equation
    End
While maximum iterations or minimum error criterion is not attained
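The pseudocode above can be turned into a minimal runnable implementation. This sketch minimizes the sphere function; the swarm size, c1, c2, iteration budget, and the inertia weight w (a common stabilizing extension not shown on the slide) are illustrative choices:

```python
import random

# Minimal PSO following the pseudocode above, minimizing the sphere
# function f(x) = sum(x_i^2), whose optimum is the origin.

def fitness(pos):
    return sum(x * x for x in pos)

def pso(dims=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(42)
    # Initialize particles with feasible random positions and velocities.
    pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n_particles)]
    vel = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # personal best positions
    gbest = min(pbest, key=fitness)[:]      # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                # Velocity update: inertia + pull toward pbest and gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]      # position update
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]        # new personal best
        gbest = min(pbest, key=fitness)[:]  # new global best
    return gbest

best = pso()
print(fitness(best))                        # close to 0
```

After 100 iterations the swarm has converged near the origin, the minimum of the sphere function.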
Particle Swarm Algorithm
Summary
The PSO algorithm finds optimal values by mimicking the behavior of an animal society that has no leader.
Particle swarm optimization consists of a swarm of particles, where each particle represents a potential solution.
Each particle moves through a multidimensional search space to find the best position in that space (the best position may correspond to the maximum or minimum value of the objective).
Applications
• PSOt – a Matlab toolbox
• Function optimization
• Neural net training
PSOt – A Matlab Toolbox
Matlab: a scientific computing language that runs in interpreted mode on a wide variety of operating systems.
Toolbox: a suite of Matlab 'plug-in' programs developed by third parties.