
Particle Swarm Optimization in Scilab, ver. 0.1-7

S. SALMON, Research engineer and PhD. student at M3M - UTBM

Abstract

This document introduces Particle Swarm Optimization (PSO) in Scilab. PSO is a meta-heuristic optimization method created by Kennedy and Eberhart in 1995. Three PSO variants are implemented in this toolbox: the "Inertia Weight Model" by Shi & Eberhart (1998), and the "Radius" and "BSG-Starcraft" variants by the author. Source code is released under CC-BY-NC-SA.

1 Introduction

In order to treat optimization problems, two main families of optimization methods are available (hybrid methods excepted):

• gradient-based methods such as Newton, conjugate gradient, ...;

• meta-heuristic methods such as Nelder-Mead, Torczon, simulated annealing, ant colonies, genetic algorithms, ... .

Gradient-based methods are sensitive to non-linearity and may fail to converge to a good solution because they require derivative evaluations. Meta-heuristic methods are designed for such problems since they only require direct evaluations of the objective function. To treat non-linear problems with a simple optimization process, Particle Swarm Optimization therefore appears well adapted.

2 The Particle Swarm Optimization

The PSO method, published by Kennedy and Eberhart in 1995 [4], is based on a population of points first deployed stochastically over a search field. Each member of this particle swarm could be a solution of the optimization problem. The swarm flies in the search field (of N dimensions) and each of its members is attracted by its personal best solution and by the best solution of its neighbours [3, 1]. Each particle has a memory storing all data relating to its flight (location, speed and personal best solution). It can also inform its neighbours, i.e. communicate its speed and position; this ability is known as socialisation. At each iteration, the objective function is evaluated for every member of the swarm, and the leader of the whole swarm can then be determined: it is the particle with the best personal solution. The process finally leads to the best global solution. This direct search method does not require any knowledge of the objective function derivatives.

At each iteration, the location and speed of each particle are updated. The basic method proposed in [4] is (Eq. 1):

v_{t+1} = v_t + R_1 C_1 (g - x_t) + R_2 C_2 (p - x_t)
x_{t+1} = x_t + v_{t+1}    (1)

where C1 and C2 are learning parameters, R1 and R2 are random numbers, g is the location of the leader and p the personal best location.

This equation reveals the particle leader location to each particle.
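As a minimal sketch (with hypothetical variable names, not taken from the toolbox code), the update of Eq. 1 for a single particle can be written in Scilab as:

// Minimal sketch of the Eq. 1 update for one particle (hypothetical values).
C1 = 2; C2 = 2;                        // learning parameters
x = [1; 2];   v = [0.1; -0.2];         // current location and speed
p = [0; 0];   g = [0.5; 0.5];          // personal best and leader locations
R1 = rand();  R2 = rand();             // random numbers in [0, 1)
v = v + R1*C1*(g - x) + R2*C2*(p - x); // speed update (Eq. 1)
x = x + v;                             // location update (Eq. 1)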

3 The Inertia Weight Model

A variant of the PSO method was developed by Shi & Eberhart in 1998, in which a modification of the speed equation improves convergence by inserting a time-dependent variable: this is the "Inertia Weight Model" [7] (Eq. 2):

v_{t+1} = \omega_t v_t + R_1 C_1 (g - x_t) + R_2 C_2 (p - x_t)    (2)

Decreasing the variable ω slows down the particles around the leader location and provides a balance between exploration and exploitation.
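In the toolbox functions listed in the appendix, ω decreases linearly from an initial value wmax to a final value wmin over the allowed iterations:

// Linearly decreasing inertia weight, as computed in the appendix functions.
wmax = 0.9; wmin = 0.4; itmax = 200;
for i = 1:itmax
    W(i) = wmax - ((wmax - wmin)/itmax)*i;  // weight applied at iteration i
end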

Particle trajectories have been studied in [3, 6, 2] and the parameter selection of the particle swarm in [7, 2].

4 The Radius improvement

This improvement, developed by the author and based on the "Inertia Weight Model", consists in stopping the optimization process when the swarm becomes too small. When optimizing a real system such as actuators, waveforms or other physical devices, there are material limitations due to sensor errors, milling defects and the like, so it becomes useless to continue the computation once particles can no longer be distinguished with real measurement devices.

A minimum radius is defined and, while the swarm radius (measured with an infinity norm) is larger than this minimum, the optimization process continues. When the swarm radius becomes smaller than the minimum, a counter starts for 10 iterations. If one particle escapes from the minimum radius the counter is reset; otherwise the optimization process is stopped.
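A sketch of this test, adapted from the radius functions in the appendix (x(:,:,j) holds the N particle locations at iteration j and leader is the current leader location):

// Swarm radius test adapted from PSO_inertial_radius in the appendix.
for t = 1:N
    dist(t) = max(abs(x(t,:,j) - leader));  // infinity-norm distance to the leader
end
swarm_radius = max(dist);                   // largest distance = swarm radius
if swarm_radius >= radius & counter < 10
    mesurability = 1;                       // swarm still measurable: keep optimizing
    counter = 0;                            // a particle escaped, reset the counter
else
    counter = counter + 1;                  // swarm below the minimum radius
end
if counter == 10
    mesurability = 0;                       // 10 iterations below the radius: stop
end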

5 The BSG-Starcraft improvement

Based on the "Inertia Weight Model" and developed by the author, the Battlestar Galactica (BSG) - Starcraft improvement relies on two ideas inspired by the science-fiction series Battlestar Galactica and the video game Starcraft:

• the particle leader (the carrier) has the ability to randomly send out some new particles (raptors) to quickly explore the space;

• if one raptor finds a better position than the global best, the swarm jumps to this new location (FTL jump), conserving the swarm geometry. The carrier location then becomes the raptor location.

This improvement is at the evaluation stage and could be useful when the swarm is initially far away from the objective function minimum.
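A sketch of the carrier/raptor step, adapted from PSO_bsg_starcraft in the appendix (Fbest is taken here as the current global best value and script as the objective function):

// Carrier/raptor exploration and FTL jump, adapted from the appendix.
number_raptor = 20;              // raptors sent by the carrier
speed_multiplicator = 2;         // raptors fly twice as fast as the carrier
v_raptor = speed_multiplicator*norm(v_carrier);
for i = 1:number_raptor
    x_raptor(i,:) = x_carrier + (-1 + 2*rand(1,D))*v_raptor;  // long-range probe
    F_raptor(i) = script(x_raptor(i,:));                      // evaluate the probe
end
[C,I] = min(F_raptor);
if F_raptor(I) < Fbest               // a raptor found a better location than the leader
    jump_vector = x_raptor(I,:) - x_carrier;
    for i = 1:N
        x(i,:) = x(i,:) + jump_vector;   // FTL jump: translate the whole swarm
    end
end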


6 How to use and comparison on test cases

6.1 How to use

The toolbox is available via ATOMS in Scilab or from the Scilab forge. These PSO methods are designed to be mono-objective, so to take multi-objective systems into account the user has to reduce the objective function output to a single value, for example with an L2 norm.
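For example (a hypothetical two-output system, not part of the toolbox), the outputs can be collapsed into a single value with an L2 norm inside the objective function:

// Hypothetical example: reducing two objectives to one value with an L2 norm.
function f=script(x)
    f1 = sum(x.^2);        // first objective (example)
    f2 = sum(abs(x));      // second objective (example)
    f = norm([f1; f2]);    // single value returned to the PSO
endfunction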

First step :

An objective function has to be created, for example in a file script.sce:

function f=script(x)

f=60+sum((x.^2)-10*cos(2*%pi.*x)); // Rastrigin’s function R6

endfunction;

Second step :

Create a command file to set up the chosen PSO and execute the file:

clear

lines(0)

objective='script' // the objective function file

wmax=0.9; // initial inertia

wmin=0.4; // final inertia

itmax=200; // maximum iteration allowed

c1=2; // personal best knowledge factor

c2=2; // global best knowledge factor

N=20; // number of particle

D=6; // problem dimension

borne_sup=20*[10 10 10 10 10 10]'; // upper location bound

borne_inf=-1*borne_sup; // lower location bound

vitesse_max=[10 10 10 10 10 10]'; // upper speed bound

vitesse_min=-1*vitesse_max; // lower speed bound

radius=1e-4; // minimal radius

//executing PSO

// for inertial PSO

PSO_inertial(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

// for inertial radius PSO

PSO_inertial_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

// for BSG Starcraft PSO

PSO_bsg_starcraft(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

// for BSG Starcraft radius PSO

PSO_bsg_starcraft_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

6.2 Test cases

These test cases are chosen from the review by Molga and Smutnicki [5] and are designed to benchmark optimization algorithms on multi-modal and/or multi-dimensional functions with many local extrema. Each test case is repeated 100 times for each optimization program.

6.2.1 Rastrigin’s function:

Rastrigin's function is defined by (Eq. 3):

f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) \right]    (3)

The solution is located at the origin of ℜ^n, where the function value is zero. The test case is a 20-dimensional problem; the PSO parameters are given in Table 1 and the results in Table 2:

Parameter          limits   Value
location inf.      ℜ^20     -5.12
location sup.      ℜ^20     5.12
speed inf.         ℜ^20     -0.512
speed sup.         ℜ^20     0.512
radius                      1e-3
max. iteration              800
particle number             20

Table 1: Common PSO parameters

                    Inertial   Radius   BSG-Starcraft   BSG-Starcraft radius
Mean Final value    88.31      95.4     55.71           63.81
Mean Iteration      800        503.4    800             508.8

Table 2: Results for Rastrigin's function

6.2.2 De Jong 1’s function:

De Jong 1's function is defined by (Eq. 4):

f(x) = \sum_{i=1}^{n} x_i^2    (4)
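As for the other test cases, Eq. 4 can be written as a Scilab objective function (a sketch, not part of the toolbox):

// De Jong 1's function (Eq. 4) written as a Scilab objective function.
function f=script(x)
    f = sum(x.^2);
endfunction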

The solution is located at the origin of ℜ^n, where the function value is zero. The test case is a 20-dimensional problem; the PSO parameters are given in Table 3 and the results in Table 4:

Parameter          limits   Value
location inf.      ℜ^20     -5.12
location sup.      ℜ^20     5.12
speed inf.         ℜ^20     -0.512
speed sup.         ℜ^20     0.512
radius                      1e-3
max. iteration              800
particle number             20

Table 3: Common PSO parameters

                    Inertial   Radius   BSG-Starcraft   BSG-Starcraft radius
Mean Final value    5.97       9.14     1.56            1.39
Mean Iteration      800        493.6    800             527.3

Table 4: Results for De Jong 1's function

6.2.3 Ackley’s function:

Ackley's function is defined by (Eq. 5), with a = 30, b = 0.2 and c = 2π:

f(x) = -a \exp\left(-b \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(c x_i)\right) + a + \exp(1)    (5)
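A Scilab implementation of Eq. 5, usable as the objective file of section 6.1 (a sketch, not part of the toolbox), could be:

// Ackley's function (Eq. 5) written as a Scilab objective function.
function f=script(x)
    a = 30; b = 0.2; c = 2*%pi;
    n = length(x);
    f = -a*exp(-b*sqrt(sum(x.^2)/n)) - exp(sum(cos(c*x))/n) + a + exp(1);
endfunction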

The solution is located at the origin of ℜ^n, where the function value is zero. The test case is a 20-dimensional problem; the PSO parameters are given in Table 5 and the results in Table 6:

Parameter          limits   Value
location inf.      ℜ^20     -32.768
location sup.      ℜ^20     32.768
speed inf.         ℜ^20     -3.2768
speed sup.         ℜ^20     3.2768
radius                      1e-3
max. iteration              800
particle number             20

Table 5: Common PSO parameters

                    Inertial   Radius   BSG-Starcraft   BSG-Starcraft radius
Mean Final value    18.28      18.7     12.79           11.92
Mean Iteration      800        502      800             581.5

Table 6: Results for Ackley's function

6.2.4 Drop wave’s function:

The Drop wave function is defined by (Eq. 6):

f(x_1, x_2) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}    (6)
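Eq. 6 can also be written as a Scilab objective function (a sketch, not part of the toolbox):

// Drop wave function (Eq. 6) written as a Scilab objective function.
function f=script(x)
    r2 = x(1)^2 + x(2)^2;
    f = -(1 + cos(12*sqrt(r2)))/(0.5*r2 + 2);
endfunction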

The solution is located at the origin of ℜ^2, where the function value is -1. The test case is a 2-dimensional problem; the PSO parameters are given in Table 7 and the results in Table 8:


Parameter          limits   Value
location inf.      ℜ^2      -5.12
location sup.      ℜ^2      5.12
speed inf.         ℜ^2      -0.512
speed sup.         ℜ^2      0.512
radius                      1e-3
max. iteration              800
particle number             20

Table 7: Common PSO parameters

                    Inertial   Radius   BSG-Starcraft   BSG-Starcraft radius
Mean Final value    -0.99      -0.98    -0.99           -1
Mean Iteration      800        510.92   800             485.5

Table 8: Results for Drop wave's function

6.2.5 Inertia Weight Model vs BSG-Starcraft

Here we compare the ability of the PSO to converge to the global minimum in two cases on a 6-dimensional Rastrigin function: in the first case, the global minimum lies inside the initial swarm and the swarm is not too large (Table 9); in the second case the swarm is large and far away from the minimum (Table 10). The maximum iteration number is set to 200.

Parameter (ℜ^6)    Value
location inf.      -10
location sup.      10
speed inf.         -1
speed sup.         1

Table 9: Case 1 parameters

Parameter (ℜ^6)    Value
location inf.      -100
location sup.      100
speed inf.         -10
speed sup.         10

Table 10: Case 2 parameters


Mean Final value   Inertia Weight Model   BSG-Starcraft
Case 1             12.92                  13.39
Case 2             4462.29                1458.38

Table 11: Results for Inertia Weight Model vs BSG-Starcraft

We can notice that the BSG-Starcraft model is at least as effective as the inertial PSO for small-range swarms and much more effective for large-range swarms (Table 11).

7 Conclusion

Particle Swarm Optimization has been used in many optimization cases, both for linear and non-linear problems. This optimization process appears to be effective and simple to use. Both proposed improvements are also effective and may be combined to create a new PSO model.

Appendix

Inertial PSO

// Created by Sebastien Salmon

// M3M - UTBM

// sebastien[.]salmon[@]utbm[.]fr

// 2010

// released under CC-BY-NC-SA

function PSO_inertial(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

lines(0)

//---------------------------------------------------

// Setting up graphics axis

//---------------------------------------------------

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

//---------------------------------------------------

// Declaring objective function

//---------------------------------------------------

var=objective+'.sce';

exec(var)

//---------------------------------------------------

// PSO parameters definition

//---------------------------------------------------

// using inertial weigth parameter improvement

//wmax=0.9; // initial weigth parameter

//wmin=0.4; // final weigth parameter


// max iteration allowed

//itmax=800; //Maximum iteration number

// knowledge factors

//c1=2; // for personnal best

//c2=2; // for global best

// problem dimensions

//N=20; // number of particles

//D=6; // problem dimension

//---------------------------------------------------

// Allocation of memory ans first computations

//---------------------------------------------------

// computation of the weigth vector

for i=1:itmax

W(i)=wmax-((wmax-wmin)/itmax)*i;

end

// computation of location and speed of particles

for i=1:D

//borne_sup(i)= 1;

//borne_inf(i)= 0;

x(1:N,i) =borne_inf(i) +rand(N,1) * ( borne_sup(i) - borne_inf(i) ); // location

//vitesse_min(i)=-0.3;

//vitesse_max(i)=0.3;

v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1); // speed

end

// actual iteration number

j=1;

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y); // mono-objective result

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,1,j)));

//---------------------------------------------------

// The first minimun is the global minimum ’cause first

// iteration

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j);

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// The first minimun is the best result ’cause first


// iteration

//---------------------------------------------------

Fbest(1,1,j)=F(I,1,j); // global best

Fb(1,1,j)=F(I,1,j); // iteration best, used for comparison with global best

//---------------------------------------------------

// Each particle is her personnal best ’cause first

// first iteration

//---------------------------------------------------

for i=1:N

pbest(i,:,j)=x(i,:,j);

end

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Entering to the optimization loop

//---------------------------------------------------

while (j<itmax-1)

j=j+1

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y);

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

//---------------------------------------------------

// Searching for global minimum

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j); // hypothesis : this iteration is better than last one

Fb(1,1,j)=F(I,1,j); // looking for the iteration best result

Fbest(1,:,j)=Fb(1,:,j); // Fbest is the iteration best result

if Fbest(1,1,j)<Fbest(1,1,j-1) // check if actual Fbest is better than the previous one

else

gbest(1,:,j)=gbest(1,:,j-1); // if not then replacing with the good gbest

Fbest(1,:,j)=Fbest(1,:,j-1); // A new Fbest has not be found this time

end

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// Computation of the new personnal best

//---------------------------------------------------

for i=1:N

[C,I]=min(F(i,1,:));


if F(i,1,j)<C

pbest(i,:,j)=x(i,:,j);

else

pbest(i,:,j)=x(i,:,I(3));

end

end

//---------------------------------------------------

// Re-computation of Fbest for reliability test

//---------------------------------------------------

y=gbest(1,:,j);

Fbest(:,:,j)=script(y);

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Plotting the Fbest curve to monitor optimization

//---------------------------------------------------

for count=1:j

Fbest_draw(count)=Fbest(1,1,count);

end

if modulo(j,25)==0 // resetting graphics

clf(1)

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

else

scf(1)

end

plot(Fbest_draw)

drawnow()

//---------------------------------------------------

// Temporary save in case of crash, very usefull for long time optimization

//---------------------------------------------------

save ('PSO_temp')

//---------------------------------------------------

// Out of the optimization loop

//---------------------------------------------------

end

disp('Fbest :' + string(Fbest(:,:,j)))

disp('Gbest :' + string(gbest(:,:,j)))


save('results_PSO')

endfunction

Radius PSO

// Created by Sebastien Salmon

// M3M - UTBM

// sebastien[.]salmon[@]utbm[.]fr

// 2010

// released under CC-BY-NC-SA

function PSO_inertial_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)

lines(0)

//---------------------------------------------------

// Setting up graphics axis

//---------------------------------------------------

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

scf(2)

gcf()

xtitle("Swarm radius vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Swarm radius (log10)"

//---------------------------------------------------

// Declaring objective function

//---------------------------------------------------

var=objective+'.sce';

exec(var)

//---------------------------------------------------

// PSO parameters definition

//---------------------------------------------------

// swarm limit radius

// radius=1e-4; // limit radius of the swarm, if inf then stopping process

// using inertial weigth parameter improvement

//wmax=0.9; // initial weigth parameter

//wmin=0.4; // final weigth parameter

// max iteration allowed

//itmax=800; //Maximum iteration number

// knowledge factors


//c1=2; // for personnal best

//c2=2; // for global best

// problem dimensions

//N=20; // number of particles

//D=6; // problem dimension

//---------------------------------------------------

// Allocation of memory ans first computations

//---------------------------------------------------

// computation of the weigth vector

for i=1:itmax

W(i)=wmax-((wmax-wmin)/itmax)*i;

end

// computation of location and speed of particles

for i=1:D

//borne_sup(i)= 1;

//borne_inf(i)= 0;

x(1:N,i) =borne_inf(i) +rand(N,1) * ( borne_sup(i) - borne_inf(i) ); // location

//vitesse_min(i)=-0.3;

//vitesse_max(i)=0.3;

v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1); // speed

end

// actual iteration number

j=1;

// actual mesurability status

mesurability=1;

counter=0;

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y); // mono-objective result

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,1,j)));

//---------------------------------------------------

// The first minimun is the global minimum ’cause first

// iteration

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j);

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// The first minimun is the best result ’cause first


// iteration

//---------------------------------------------------

Fbest(1,1,j)=F(I,1,j); // global best

Fb(1,1,j)=F(I,1,j); // iteration best, used for comparison with global best

//---------------------------------------------------

// Each particle is her personnal best ’cause first

// first iteration

//---------------------------------------------------

for i=1:N

pbest(i,:,j)=x(i,:,j);

end

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Entering to the optimization loop

//---------------------------------------------------

while (j<itmax-1 & mesurability==1)

j=j+1

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y);

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

//---------------------------------------------------

// Searching for global minimum

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j); // hypothesis : this iteration is better than last one

Fb(1,1,j)=F(I,1,j); // looking for the iteration best result

Fbest(1,:,j)=Fb(1,:,j); // hypothesis : Fbest is the iteration best result

if Fbest(1,1,j)<Fbest(1,1,j-1) // check if actual Fbest is better than the previous one

else

gbest(1,:,j)=gbest(1,:,j-1); // if not then replacing with the good gbest

Fbest(1,:,j)=Fbest(1,:,j-1); // A new Fbest has not be found this time

end

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// Computation of the new personnal best

//---------------------------------------------------

for i=1:N

[C,I]=min(F(i,1,:));


if F(i,1,j)<C

pbest(i,:,j)=x(i,:,j);

else

pbest(i,:,j)=x(i,:,I(3));

end

end

//---------------------------------------------------

// Re-computation of Fbest for reliability test

//---------------------------------------------------

y=gbest(1,:,j);

Fbest(:,:,j)=script(y);

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Computing measurability - generating capacity

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

leader_temp(1,:,j)=x(I,:,j); // getting swarm leader

for t=1:N

dist_temp(:,t,j) = abs((x(t,:,j)-leader_temp(:,:,j))); // componentwise distance to the leader (used for the infinity norm)

end

for t=1:N

dist(t,j)=max(dist_temp(:,t,j));

end

max_dist(j)=max(dist(:,j));

if max_dist(j)>=radius & counter<10

mesurability=1;

counter=0;

else

counter=counter+1;

end

if counter==10

mesurability=0;

end

rad_plot(j)=radius;

if modulo(j,25)==0 // resetting graphics

clf(2)

scf(2)

gcf()

xtitle("Swarm radius vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Swarm radius (log10)"

else


scf(2)

end

plot(log10(max_dist(2:j)) )

plot(log10(rad_plot(2:j)),’r’)

drawnow()

//---------------------------------------------------

// Plotting the Fbest curve to monitor optimization

//---------------------------------------------------

for count=1:j

Fbest_draw(count)=Fbest(1,1,count);

end

if modulo(j,25)==0 // resetting graphics

clf(1)

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

else

scf(1)

end

plot(Fbest_draw)

drawnow()

//---------------------------------------------------

// Temporary save in case of crash, very usefull for long time optimization

//---------------------------------------------------

save ('PSO_temp')

//---------------------------------------------------

// Out of the optimization loop

//---------------------------------------------------

end

disp('Fbest :' + string(Fbest(:,:,j)))

disp('Gbest :' + string(gbest(:,:,j)))

save('results_PSO_radius')

endfunction


BSG-Starcraft PSO


// Created by Sebastien Salmon

// M3M - UTBM

// sebastien[.]salmon[@]utbm[.]fr

// 2010

// released under CC-BY-NC-SA

function PSO_bsg_starcraft(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max)

lines(0)

//---------------------------------------------------

// Setting up graphics axis

//---------------------------------------------------

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

//---------------------------------------------------

// Declaring objective function

//---------------------------------------------------

var=objective+'.sce';

disp(var)

exec(var)

//---------------------------------------------------

// PSO parameters definition

//---------------------------------------------------

// using inertial weigth parameter improvement

//wmax=0.9; // initial weigth parameter

//wmin=0.4; // final weigth parameter

// max iteration allowed

//itmax=800; //Maximum iteration number

// knowledge factors

//c1=2; // for personnal best

//c2=2; // for global best

// problem dimensions

//N=20; // number of particles

//D=6; // problem dimension

//---------------------------------------------------

// Allocation of memory ans first computations

//---------------------------------------------------

// computation of the weigth vector

for i=1:itmax

W(i)=wmax-((wmax-wmin)/itmax)*i;

end

// computation of location and speed of particles

for i=1:D


//borne_sup(i)= 1;

//borne_inf(i)= 0;

x(1:N,i) =borne_inf(i) +rand(N,1) * ( borne_sup(i) - borne_inf(i) ); // location

//vitesse_min(i)=-0.3;

//vitesse_max(i)=0.3;

v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1); // speed

end

// actual iteration number

j=1;

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y); // mono-objective result

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,1,j)));

//---------------------------------------------------

// The first minimun is the global minimum ’cause first

// iteration

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j);

gbestc(1,:,j)=x(I,:,j);

v_gbestc(1,:,j)=v(I,:,j);

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// The first minimun is the best result ’cause first

// iteration

//---------------------------------------------------

Fbest(1,1,j)=F(I,1,j); // global best

Fbestc(1,1,j)=F(I,1,j);

Fb(1,1,j)=F(I,1,j); // iteration best, used for comparison with global best

//---------------------------------------------------

// Each particle is her personnal best ’cause first

// first iteration

//---------------------------------------------------

for i=1:N

pbest(i,:,j)=x(i,:,j);

end

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Entering to the optimization loop


//---------------------------------------------------

while (j<itmax-1)

j=j+1

//---------------------------------------------------

// BSG Starcraft ability of the swarm - 1/10 chance

// to enable capacity

//---------------------------------------------------

aleat=rand()

if aleat>=0.9 | j==2

disp('Enabling Starcraft Protoss carrier at iteration '+string(j))

// getting the leader of the previous iteration

// this is the carrier

x_carrier=gbestc(:,:,j-1);

F_carrier=Fbestc(:,:,j-1);

v_carrier=v_gbestc(:,:,j-1);

// creating some raptors to explore the space

// for a quite long range from the carrier

speed_multiplicator=2; // 2x faster than carrier

number_raptor=20; // sending 20 raptors

for i=1:number_raptor

v_raptor=speed_multiplicator*norm(v_carrier);

x_raptor(i,:)=x_carrier+(-1+2*rand(1,D))*v_raptor;

end

// evaluating positions of the raptor

disp('Sending Raptors')

for i=1:number_raptor

F_raptor(i)=script(x_raptor(i,:));

end

// Comparing performance of raptors to carrier

[C,I]=min(F_raptor);

if F_raptor(I)<Fbest(:,:,j-1)

gain=-1*(F_carrier-F_raptor(I));

disp('Gain =' + string(gain))

disp('Enabling FTL jump')

// jumping the swarm conserving geometry

jump_vector=x_raptor(I,:)-x_carrier;

for i=1:N

x(i,:,j)=x(i,:,j-1)+jump_vector;

end

else

disp('FTL jump not required')

end

// evaluation of the jumped swarm

for i=1:N


y=x(i,:,j);

F(i,1,j)=script(y);

end

else

//---------------------------------------------------

// Evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y);

end

end // end for bsg starcraft ability

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

//---------------------------------------------------

// Searching for global minimum

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j); // hypothesis : this iteration is better than last one

gbestc(1,:,j)=x(I,:,j);

v_gbestc(1,:,j)=v(I,:,j);

Fb(1,1,j)=F(I,1,j); // looking for the iteration best result

Fbestc(1,1,j)=F(I,1,j);

Fbest(1,:,j)=Fb(1,:,j); // Fbest is the iteration best result

if Fbest(1,1,j)<Fbest(1,1,j-1) // check if actual Fbest is better than the previous one

else

gbest(1,:,j)=gbest(1,:,j-1); // if not then replacing with the good gbest

Fbest(1,:,j)=Fbest(1,:,j-1); // A new Fbest has not be found this time

end

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// Computation of the new personnal best

//---------------------------------------------------

for i=1:N

[C,I]=min(F(i,1,:));

if F(i,1,j)<C

pbest(i,:,j)=x(i,:,j);

else

pbest(i,:,j)=x(i,:,I(3));

end

end

//---------------------------------------------------

// Re-computation of Fbest for reliability test

//---------------------------------------------------

y=gbest(1,:,j);

Fbest(:,:,j)=script(y);

//---------------------------------------------------

// Speed and location computation for next iteration


//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Plotting the Fbest curve to monitor optimization

//---------------------------------------------------

for count=1:j

Fbest_draw(count)=Fbest(1,1,count);

end

if modulo(j,25)==0 // resetting graphics

clf(1)

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

else

scf(1)

end

plot(Fbest_draw)

drawnow()

//---------------------------------------------------

// Temporary save in case of crash, very usefull for long time optimization

//---------------------------------------------------

save ('PSO_temp')

//---------------------------------------------------

// Out of the optimization loop

//---------------------------------------------------

end

disp('Fbest :' + string(Fbest(:,:,j)))

disp('Gbest :' + string(gbest(:,:,j)))

save('results_PSO_BSG-Starcraft')

endfunction

BSG-Starcraft radius PSO

// Created by Sebastien Salmon

// M3M - UTBM

// sebastien[.]salmon[@]utbm[.]fr

// 2010

// released under CC-BY-NC-SA

function PSO_bsg_starcraft_radius(objective,wmax,wmin,itmax,c1,c2,N,D,borne_sup,borne_inf,vitesse_min,vitesse_max,radius)


lines(0)

//---------------------------------------------------

// Setting up graphics axis

//---------------------------------------------------

scf(1)

gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

scf(2)

gcf()

xtitle("Swarm radius vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Swarm radius (log10)"

//---------------------------------------------------

// Declaring objective function

//---------------------------------------------------

var=objective+'.sce';

exec(var)

//---------------------------------------------------

// PSO parameters definition

//---------------------------------------------------

// swarm limit radius

// radius=1e-4; // limit radius of the swarm, if inf then stopping process

// using inertial weigth parameter improvement

//wmax=0.9; // initial weigth parameter

//wmin=0.4; // final weigth parameter

// max iteration allowed

//itmax=800; //Maximum iteration number

// knowledge factors

//c1=2; // for personnal best

//c2=2; // for global best

// problem dimensions

//N=20; // number of particles

//D=6; // problem dimension

//---------------------------------------------------

// Allocation of memory ans first computations

//---------------------------------------------------

// computation of the weigth vector

for i=1:itmax

W(i)=wmax-((wmax-wmin)/itmax)*i;

end

// computation of location and speed of particles

for i=1:D

//borne_sup(i)= 1;

//borne_inf(i)= 0;


x(1:N,i) =borne_inf(i) +rand(N,1) * ( borne_sup(i) - borne_inf(i) ); // location

//vitesse_min(i)=-0.3;

//vitesse_max(i)=0.3;

v(1:N,i)=vitesse_min(i)+(vitesse_max(i)-vitesse_min(i))*rand(N,1); // speed

end

// actual iteration number

j=1;

// actual mesurability status

mesurability=1;

counter=0;

//---------------------------------------------------

// First evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y); // mono-objective result

end

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,1,j)));

//---------------------------------------------------

// The first minimun is the global minimum ’cause first

// iteration

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j);

gbestc(1,:,j)=x(I,:,j);

v_gbestc(1,:,j)=v(I,:,j);

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// The first minimun is the best result ’cause first

// iteration

//---------------------------------------------------

Fbest(1,1,j)=F(I,1,j); // global best

Fbestc(1,1,j)=F(I,1,j);

Fb(1,1,j)=F(I,1,j); // iteration best, used for comparison with global best

//---------------------------------------------------

// Each particle is her personnal best ’cause first

// first iteration

//---------------------------------------------------

for i=1:N

pbest(i,:,j)=x(i,:,j);

end

//---------------------------------------------------

// Speed and location computation for next iteration

//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location


//---------------------------------------------------

// Entering to the optimization loop

//---------------------------------------------------

while (j<itmax-1 & mesurability==1)

j=j+1

//---------------------------------------------------

// BSG Starcraft ability of the swarm - 1/10 chance

// to enable capacity

//---------------------------------------------------

aleat=rand()

if aleat>=0.9

disp('Enabling Starcraft Protoss carrier at iteration '+string(j))

// getting the leader of the previous iteration

// this is the carrier

x_carrier=gbestc(:,:,j-1);

F_carrier=Fbestc(:,:,j-1);

v_carrier=v_gbestc(:,:,j-1);

// creating some raptors to explore the space

// for a quite long range from the carrier

speed_multiplicator=2; // 2x faster than carrier

number_raptor=20; // sending 20 raptors

for i=1:number_raptor

v_raptor=speed_multiplicator*norm(v_carrier);

x_raptor(i,:)=x_carrier+(-1+2*rand(1,D))*v_raptor;

end

// evaluating positions of the raptor

disp('Sending Raptors')

for i=1:number_raptor

F_raptor(i)=script(x_raptor(i,:));

end

// Comparing performance of raptors to carrier

[C,I]=min(F_raptor);

if F_raptor(I)<Fbest(:,:,j-1)

gain=-1*(F_carrier-F_raptor(I));

disp('Gain =' + string(gain))

disp('Enabling FTL jump')

// jumping the swarm conserving geometry

jump_vector=x_raptor(I,:)-x_carrier;

for i=1:N

x(i,:,j)=x(i,:,j-1)+jump_vector;

end

else

disp('FTL jump not required')

end

// evaluation of the jumped swarm

for i=1:N


y=x(i,:,j);

F(i,1,j)=script(y);

end

else

//---------------------------------------------------

// Evaluation of the objective function

//---------------------------------------------------

for i=1:N

y=x(i,:,j);

F(i,1,j)=script(y);

end

end // end for bsg starcraft ability

//---------------------------------------------------

// Search for the minimum of the swarm

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

//---------------------------------------------------

// Searching for global minimum

//---------------------------------------------------

gbest(1,:,j)=x(I,:,j); // hypothesis : this iteration is better than last one

gbestc(1,:,j)=x(I,:,j);

v_gbestc(1,:,j)=v(I,:,j);

Fb(1,1,j)=F(I,1,j); // looking for the iteration best result

Fbestc(1,1,j)=F(I,1,j);

Fbest(1,:,j)=Fb(1,:,j); // Fbest is the iteration best result

if Fbest(1,1,j)<Fbest(1,1,j-1) // check if actual Fbest is better than the previous one

else

gbest(1,:,j)=gbest(1,:,j-1); // if not then replacing with the good gbest

Fbest(1,:,j)=Fbest(1,:,j-1); // A new Fbest has not be found this time

end

for p=1:N

G(p,:,j)=gbest(1,:,j); // creating a matrix of gbest, used for speed computation

end

//---------------------------------------------------

// Computation of the new personnal best

//---------------------------------------------------

for i=1:N

[C,I]=min(F(i,1,:));

if F(i,1,j)<C

pbest(i,:,j)=x(i,:,j);

else

pbest(i,:,j)=x(i,:,I(3));

end

end

//---------------------------------------------------

// Re-computation of Fbest for reliability test

//---------------------------------------------------

y=gbest(1,:,j);

Fbest(:,:,j)=script(y);

//---------------------------------------------------

// Speed and location computation for next iteration


//---------------------------------------------------

v(:,:,j+1)=W(j)*v(:,:,j)+c1*rand()*(pbest(:,:,j)-x(:,:,j))+c2*rand()*(G(:,:,j)-x(:,:,j)); // speed

x(:,:,j+1)=x(:,:,j)+v(:,:,j+1); // location

//---------------------------------------------------

// Computing measurability - generating capacity

//---------------------------------------------------

[C,I]=min((F(:,:,j)));

leader_temp(1,:,j)=x(I,:,j); // getting swarm leader

for t=1:N

dist_temp(:,t,j) = abs((x(t,:,j)-leader_temp(:,:,j))); // componentwise distance to the leader (used for the infinity norm)

end

for t=1:N

dist(t,j)=max(dist_temp(:,t,j));

end

max_dist(j)=max(dist(:,j));

if max_dist(j)>=radius & counter<10

mesurability=1;

counter=0;

else

counter=counter+1;

end

if counter==10

mesurability=0;

end

rad_plot(j)=radius;

if modulo(j,25)==0 // resetting graphics

clf(2)

scf(2)

gcf()

xtitle("Swarm radius vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Swarm radius (log10)"

else

scf(2)

end

plot(log10(max_dist(2:j)) )

plot(log10(rad_plot(2:j)),’r’)

drawnow()

//---------------------------------------------------

// Plotting the Fbest curve to monitor optimization

//---------------------------------------------------

for count=1:j

Fbest_draw(count)=Fbest(1,1,count);

end

if modulo(j,25)==0 // resetting graphics

clf(1)

scf(1)


gcf()

xtitle("Objective function value vs Iteration")

axe_prop=gca()

axe_prop.x_label.text="Iteration number"

axe_prop.y_label.text="Objective function value"

else

scf(1)

end

plot(Fbest_draw)

drawnow()

//---------------------------------------------------

// Temporary save in case of crash, very usefull for long time optimization

//---------------------------------------------------

save ('PSO_temp')

//---------------------------------------------------

// Out of the optimization loop

//---------------------------------------------------

end

disp('Fbest :' + string(Fbest(:,:,j)))

disp('Gbest :' + string(gbest(:,:,j)))

save('results_PSO_BSG_Starcraft_radius')

endfunction

References

[1] F. Van den Bergh and A.P. Engelbrecht. A study of particle swarm optimization particle trajectories. Information Sciences, 2006.

[2] I.C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, vol. 85, 2003.

[3] S. Janson and M. Middendorf. On trajectories of particles in PSO. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, 2007.

[4] J. Kennedy and R.C. Eberhart. Particle swarm optimisation. In Proceedings of the IEEE International Conference on Neural Networks, 1995.

[5] M. Molga and C. Smutnicki. Test functions for optimization needs. http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf, 2005.

[6] F. Van den Bergh and A.P. Engelbrecht. A study of particle swarm optimization particle trajectories. Information Sciences, 176:937-971, 2006.

[7] Y. Shi and R.C. Eberhart. Parameter selection in particle swarm optimization. In Annual Conference on Evolutionary Programming, 1998.
