
[IEEE 2009 IEEE Congress on Evolutionary Computation (CEC) - Trondheim, Norway (2009.05.18-2009.05.21)] 2009 IEEE Congress on Evolutionary Computation - Hybridizing PSO and DE for


Hybridizing PSO and DE for improved vector evaluated multi-objective optimization

Jacomine Grobler, Student Member, IEEE, and Andries P. Engelbrecht, Senior Member, IEEE

Abstract— This paper introduces a new vector evaluated multi-objective optimization algorithm. The vector evaluated differential evolution particle swarm optimization (VEDEPSO) algorithm is a hybridization of the classical vector evaluated particle swarm optimization (VEPSO) and vector evaluated differential evolution (VEDE) algorithms of Parsopoulos et al. [9], [10]. Comparisons of VEDEPSO with respect to VEPSO and VEDE on a well-known multi-objective benchmark problem set indicated that significant performance improvements can be attributed to the VEDEPSO algorithm.

I. INTRODUCTION

The existence of more than one decision maker in real-world decision making has received increasing attention in recent years. A large number of optimization problems can be solved more effectively by considering more than one objective function. These multiple objectives are often conflicting, and it is thus more desirable to provide a set of feasible solutions representing trade-offs between the various objectives instead of one single solution.

To achieve this objective, various approaches to multi-objective optimization have already been developed [8], [12], [15], [17]. One such approach is the vector evaluated multi-objective optimization approach introduced by Schaffer [13], and further developed by Parsopoulos et al. [9], [10]. This approach has already been shown to be inherently computationally simple and easy for decision makers to understand. However, various analyses of the performance of vector evaluated particle swarm optimization (VEPSO) and vector evaluated differential evolution (VEDE) have shown that these two algorithms tend to be relatively problem specific, with distinct variations in the algorithms' ability to exploit good solutions while simultaneously exploring the rest of the search space [3].

The VEPSO and VEDE algorithms each have their own strengths and weaknesses. Previous comparisons indicated that VEDE seemed to focus on further exploitation of good solutions at the cost of Pareto front diversity, whereas PSO favours greater exploration of the search space, resulting in more diverse Pareto fronts [3]. The focus of this paper is to capitalize on the desirable characteristics of both algorithms by developing a new hybrid algorithm, named the vector evaluated differential evolution particle swarm optimization (VEDEPSO) algorithm. Four variations on the VEDEPSO algorithm were implemented

Jacomine Grobler is with the Department of Industrial and Systems Engineering and Andries P. Engelbrecht is with the Department of Computer Science at the University of Pretoria, South Africa (corresponding author to provide e-mail: [email protected]).

and comparisons with respect to the existing VEPSO and VEDE algorithms on a well-known multi-objective benchmark problem set [18] showed promise.

The rest of the paper is organized as follows: Section II introduces the existing multi-objective optimization literature on which this work is based. Sections III and IV, respectively, introduce the particle swarm optimization (PSO) and differential evolution (DE) algorithms. The VEDEPSO-based algorithms are introduced in Section V, and Section VI discusses the experimental conditions and provides the results of the empirical evaluation. Finally, the paper is concluded in Section VII.

II. MULTI-OBJECTIVE OPTIMIZATION

Over the years, a large amount of work has been done in the field of multi-objective optimization. This section provides an introduction to the basic concepts underlying the optimization of multiple objectives.

A multi-objective optimization (MOO) problem can be formally defined as follows:

minimize f(x)
subject to ςp(x) ≤ 0, p = 1, . . . , nς
ζq(x) = 0, q = nς + 1, . . . , nς + nζ
x ∈ [xmin, xmax]^nx (1)

where f(x) denotes the vector of objective functions to be minimized, ςp and ζq are respectively the inequality and equality constraints, and x ∈ [xmin, xmax]^nx represents the boundary constraints [11]. A solution to a MOO problem can thus be defined as a vector x that satisfies the constraints and optimizes the vector function f(x) [18].

The purpose of MOO is to find a set of trade-off solutions referred to as the Pareto-optimal set, P, where

P = {x∗ ∈ F | ∄ x ∈ F : x ≺ x∗} (2)

F denotes the feasible region, and the dominance relation, ≺, indicates that solution x∗ dominates solution x, i.e. x∗ is not worse than x in all objectives and x∗ is strictly better than x in at least one objective.
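As a concrete illustration of the dominance relation, a minimal Python sketch (the helper name is hypothetical; minimization of all objectives is assumed, as in the problem definition above):

```python
def dominates(f_a, f_b):
    """Return True if objective vector f_a dominates f_b under minimization:
    f_a is no worse than f_b in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better
```

A Pareto archive then keeps exactly those solutions for which no other archived solution returns True against them.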

III. PARTICLE SWARM OPTIMIZATION

Since its humble beginnings in 1995 [5], PSO has established itself as a simple and computationally efficient optimization method in both the fields of artificial intelligence and mathematical optimization. The rest of this section introduces the basic concepts of PSO before the actual algorithm implementation and the vector evaluated PSO algorithm are discussed in more detail.

1255 978-1-4244-2959-2/09/$25.00 © 2009 IEEE


A. The basic algorithm

The PSO algorithm represents each potential problem solution by the position of a particle in multi-dimensional hyperspace. Throughout the optimization process, velocity and displacement updates are applied to each particle to move it to a different position, and thus a different solution, in the search space. The gbest model [5] calculates the velocity of particle i in dimension j at time t + 1 using

vij(t + 1) = wvij(t) + c1r1j(t)[x̂ij(t) − xij(t)] + c2r2j(t)[x∗j(t) − xij(t)] (3)

where vij(t) represents the velocity of particle i in dimension j at time t, c1 and c2 are the cognitive and social acceleration constants, x̂ij(t) and xij(t) respectively denote the personal best position (pbest) and the position of particle i in dimension j at time t, x∗j(t) denotes the global best position (gbest) in dimension j, w refers to the inertia weight, and r1j(t) and r2j(t) are sampled from a uniform random distribution, U(0, 1).

The displacement of particle i at time t is simply derived from the calculation of vij(t + 1) in equation (3) and is given as

xij(t + 1) = xij(t) + vij(t + 1) (4)

This simultaneous movement of particles towards their own previous best solutions (pbest) and the best solution found by the entire swarm (gbest) results in the particles converging to one or more good solutions in the search space [6].
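Equations (3) and (4) can be sketched for a single particle as follows (a minimal Python illustration; the function name is hypothetical, and the default acceleration constants c1 = c2 = 0.5 follow the control parameters listed later in Table I):

```python
import random

def pso_update(x, v, pbest, gbest, w=1.0, c1=0.5, c2=0.5):
    """One gbest-model PSO step (equations (3) and (4)) for a single particle.
    x, v, pbest and gbest are lists of floats of equal length."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()   # r1j(t), r2j(t) ~ U(0, 1)
        vj = (w * v[j]
              + c1 * r1 * (pbest[j] - x[j])         # cognitive component
              + c2 * r2 * (gbest[j] - x[j]))        # social component
        new_v.append(vj)
        new_x.append(x[j] + vj)                     # displacement update (4)
    return new_x, new_v
```

Note that a particle sitting on both its pbest and the gbest with zero velocity does not move, which is exactly the stagnation risk addressed by GCPSO below.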

B. The guaranteed convergence PSO algorithm

Unfortunately, the basic PSO algorithm has a potentially dangerous property. The algorithm is “driven” by the fact that as a particle moves through the decision space, it is always attracted towards its pbest value and the flock's gbest value. If any of the particles reach a position in the search space where

x̂(t) = x(t) = x∗ (5)

only the momentum term (wvij(t) in equation (3)) remains to act as a driving force for the specific particle to continue exploring the rest of the search space. However, if the condition described in equation (5) persists, it can result in the swarm stagnating on a solution which is not necessarily a local optimum [16]. The guaranteed convergence particle swarm optimization (GCPSO) algorithm [16] has been shown to address this problem effectively and has thus been used for all simulations in this paper. This algorithm requires that different velocity and displacement updates, defined as

vτj(t + 1) = −xτj(t) + x∗j(t) + wvτj(t) + ρ(t)(1 − 2rj(t)) (6)

and

xτj(t + 1) = x∗j(t) + wvτj(t) + ρ(t)(1 − 2rj(t)), (7)

are applied to the global best particle, where ρ(t) is a time-dependent scaling factor, rj(t) is sampled from a uniform random distribution, U(0, 1), and all other particles are updated by means of equations (3) and (4). This algorithm forces the gbest particle into a random search around the global best position. The size of the search space is then adjusted on the basis of the number of consecutive successes or failures of the particle, where success is defined as an improvement in the objective function value.
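A sketch of the GCPSO updates in equations (6) and (7) (hypothetical function names; the success and failure thresholds s_c and f_c are illustrative defaults from the general GCPSO literature, not values given in this paper):

```python
import random

def gcpso_gbest_update(x_tau, v_tau, gbest, rho, w=1.0):
    """Apply equations (6) and (7) to the global best particle only:
    a random search of radius rho(t) around the global best position."""
    new_x, new_v = [], []
    for j in range(len(x_tau)):
        r = random.random()                                    # rj(t) ~ U(0, 1)
        vj = -x_tau[j] + gbest[j] + w * v_tau[j] + rho * (1.0 - 2.0 * r)
        new_v.append(vj)
        # x + v reduces to gbest + w*v + rho*(1 - 2r), i.e. equation (7)
        new_x.append(x_tau[j] + vj)
    return new_x, new_v

def adapt_rho(rho, successes, failures, s_c=15, f_c=5):
    """Grow the search radius after consecutive successes and shrink it after
    consecutive failures [16]; success means an improved objective value."""
    if successes > s_c:
        return 2.0 * rho
    if failures > f_c:
        return 0.5 * rho
    return rho
```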

C. The vector evaluated PSO algorithm

The vector evaluated PSO algorithm was first introduced by Parsopoulos et al. [9]. Their work focused on comparing the vector evaluated PSO (VEPSO) algorithm to various aggregation-based approaches, including bang-bang and dynamic weighted aggregation. This study used two-objective problems from the well-known MOO benchmark set of Zitzler et al. [18].

VEPSO can be considered a multi-population optimization technique where each subswarm is assigned one of the objective functions to optimize. An alternative velocity update equation is then used where (considering two swarms) the global best position of the first swarm is used in the velocity equation of the second swarm and vice versa. Thus,

vijs(t + 1) = wvijs(t) + c1r1j(t)[x̂ijs(t) − xijs(t)] + c2r2j(t)[x∗jms(t) − xijs(t)] (8)

where vijs(t), xijs(t) and x̂ijs(t) respectively denote the velocity, position and personal best position of the jth dimension of the ith particle of swarm s at time t, and x∗jms(t) is the jth dimension of the global best position of swarm ms at time t. Since the velocity update incorporates information of good solutions with respect to all objective functions, convergence to the Pareto front should be imminent.
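The cross-swarm velocity update of equation (8) can be sketched as follows (a hypothetical helper; in the two-swarm case, swarm 0 uses the gbest of swarm 1 and vice versa):

```python
import random

def vepso_velocity(x, v, pbest, gbest_other, w=1.0, c1=0.5, c2=0.5):
    """VEPSO velocity update (equation (8)) for one particle of swarm s:
    the social component pulls towards the global best of swarm ms, not swarm s."""
    new_v = []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        new_v.append(w * v[j]
                     + c1 * r1 * (pbest[j] - x[j])           # own pbest
                     + c2 * r2 * (gbest_other[j] - x[j]))    # other swarm's gbest
    return new_v
```

For two swarms the partner index is simply ms = 1 − s, so each swarm is steered by solutions that are good with respect to the other objective.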

IV. DIFFERENTIAL EVOLUTION

Differential evolution (DE) finds its roots in the genetic annealing algorithm of Storn and Price [14]. Since its development in 1995, the number of DE research papers has increased significantly every year, and DE is now well known in the evolutionary computation community as an alternative to traditional EAs. The algorithm is easy to understand, simple to implement, reliable, and fast [14]. Similar to the previous section, the rest of this section first introduces the basic concepts of DE before the basic algorithm, actual algorithm implementation, and the vector evaluated DE algorithm are discussed in more detail.

A. The basic algorithm

The success of DE can mainly be attributed to the use of a difference vector which determines the step size of the algorithm as the population, consisting of vectors of floating point numbers, moves through the search space. Information regarding the difference between two existing solutions is, in other words, used to guide the algorithm towards better solutions [14].



More specifically, for each individual, i, in the population, a base vector, xi1(t), as well as two other vectors, xi2(t) and xi3(t), are randomly selected from the current population, where xij(t) denotes the jth dimension of individual i of generation t and i ≠ i1 ≠ i2 ≠ i3. The target vector, Ti, is obtained through the application of a differential mutation operator defined as

Tij(t) = xi1j(t) + F (xi2j(t) − xi3j(t)) (9)

Then, for all dimensions, j, if r ∼ U(0, 1) ≤ pr or j = ν ∼ U(1, . . . , nx),

cij(t) = Tij(t) (10)

otherwise cij(t) = xij(t), where pr is the probability of reproduction, nx is the number of dimensions, F is the scaling factor, and ci is known as the trial vector.

An individual may only be replaced by an individual with a better fitness function value. In other words, if the fitness of ci(t) is better than the fitness of the ith individual of the original population, this individual is replaced by ci(t) [14].

Similar to the PSO algorithm, a number of variations of differential evolution have been developed in recent years [14]. The rate of convergence and subsequent decrease in population diversity can be controlled further by means of different base vector selection mechanisms. For example, DE/rand/bin selects the base vector, xi1(t), randomly from the previous population and thereby maintains diversity for longer, whereas DE/best/bin selects xi1(t) as the population member with the lowest fitness function value and subsequently obtains faster convergence. As recommended by Parsopoulos et al. [10], DE/best/bin was used for all experiments conducted in this paper.
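A minimal Python sketch of the DE/best/bin trial-vector construction of equations (9) and (10), with the best individual as base vector (the function name and the exact index-exclusion convention are assumptions of this sketch):

```python
import random

def de_best_bin_trial(pop, fitness, i, F=0.7, pr=0.9):
    """Build a trial vector for individual i under DE/best/bin (minimization).
    pop is a list of real-valued vectors; fitness is the parallel list of values."""
    nx = len(pop[i])
    best = min(range(len(pop)), key=lambda k: fitness[k])    # base vector index
    # two distinct difference-vector individuals, excluding i and the base
    i2, i3 = random.sample([k for k in range(len(pop)) if k not in (i, best)], 2)
    nu = random.randrange(nx)          # dimension that always takes the mutant value
    trial = []
    for j in range(nx):
        if random.random() <= pr or j == nu:                              # equation (10)
            trial.append(pop[best][j] + F * (pop[i2][j] - pop[i3][j]))    # equation (9)
        else:
            trial.append(pop[i][j])
    return trial
```

Under the selection rule described above, the trial vector replaces pop[i] only if its fitness is better (or, in VEDE, only if it dominates pop[i]).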

B. The vector evaluated DE algorithm

The VEDE algorithm (refer to Algorithm 1) differs from the VEPSO algorithm only with respect to the information exchange strategy used. The population is divided into a number of sub-populations equal to the number of objectives to be optimized, and each population is allocated an objective function to optimize. The difference between the two algorithms lies in the fact that VEDE does not directly update its operators to contain information obtained from any of the other swarms. Instead, the best individual of each population is migrated between the different populations, and an additional dominance-based selection mechanism is used, i.e. an individual can only be replaced by a dominating offspring.

V. THE VECTOR EVALUATED DEPSO ALGORITHM

As the name implies, the VEDEPSO algorithm is simply a hybridization of the VEPSO and VEDE algorithms. Similar to VEPSO and VEDE, VEDEPSO makes use of multiple populations, where each population is assigned an objective function to optimize. For the bi-objective problems considered in this paper, one population makes use of the velocity updates associated with the classic VEPSO algorithm as defined in Equation (8), whereas the other population is updated in a similar fashion to the VEDE algorithm. However, in

Algorithm 1: The vector evaluated differential evolution (VEDE) algorithm

 1  Initialize two populations, X1 and X2
 2  Initialize the external archive as an empty set
 3  t = 1
 4  while t < Imax do
 5      for all populations s do
 6          // Evaluate the fitness function of each population w.r.t. its allocated objective
 7          Update the external archive to include all non-dominated solutions in Xs
 8          f(x∗s(t)) = mini{fs(xs(t))}
 9          Update the external archive to include all non-dominated solutions in Xs
10      end
11      for all populations s do
12          // Move the best individual of population ms to population s
13          x∗s(t) = x∗ms(t)
14      end
15      for all populations s do
16          for all individuals i do
17              Randomly select 3 individuals from population s, xi1s(t), xi2s(t) and xi3s(t), such that i1 ≠ i2 ≠ i3 ≠ i
18              Randomly select one of the dimensions, ν
19              for all dimensions j do
20                  if r ∼ U(0, 1) ≤ pc or j = ν then
21                      cijs = xi3js(t) + F (xi1js(t) − xi3js(t))
22                  else
23                      cijs = xijs(t)
24                  end
25              end
26              // Only replace xis(t) by a dominating individual
27              if cis ≺ xis(t) then
28                  xis(t) = cis
29              end
30          end
31      end
32      t = t + 1
33  end

order to enable effective information exchange between the different populations, the differential mutation operator of Equation (9) was updated to

Tij(t) = x∗msj(t) + F (xi2j(t) − xi3j(t)). (11)

This change made the migration of individuals between populations redundant. However, the dominance-based selection mechanism was still retained for all DE-based populations.

Another improvement with respect to information exchange between the various populations was also incorporated into the VEDEPSO algorithm, as discussed next. It is



important to note that both VEPSO and VEDE make use of a ring structure for information exchange purposes. A specific swarm or population, Xs, always uses information regarding the best solution from one specific neighbouring swarm or population, X(s+1) mod np, to update its solutions. Since each population is associated with the optimization of one specific criterion, this results in each population only considering two criteria at a time. Fortunately, the connectedness of the ring structure ensures that information with respect to the other criteria is eventually propagated to all populations. However, the fact that information exchange always occurs between the same pairs of populations could have a definite negative effect on the pace of this propagation.

In the VEDEPSO algorithm, each population randomly selects the second population, which may be itself, from which information is used to update the solutions. Equation (8) is thus updated such that ms ∼ U(1, . . . , np), where np is the total number of populations. Even though only two criteria are still optimized at a time, the criteria change during each iteration, leading to faster information propagation. There is now a nonzero probability that each population will be directly influenced by all objective functions.
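The difference between the classical ring-based exchange and the VEDEPSO random partner selection reduces to a one-line change, sketched here with hypothetical helper names:

```python
import random

def ring_partner(s, num_pops):
    """Classical VEPSO/VEDE: swarm s always draws information from the
    same neighbour on a ring structure."""
    return (s + 1) % num_pops

def vedepso_partner(s, num_pops):
    """VEDEPSO: ms ~ U(1, ..., np); the partner is redrawn every iteration
    and may be any population, including s itself."""
    return random.randrange(num_pops)
```

The ring variant fixes the pairs of communicating populations for the whole run, whereas the random variant changes them every iteration, which is what accelerates the propagation of information between criteria.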

A graphical comparison of the classical VEPSO and VEDE information exchange mechanism and the VEDEPSO information exchange mechanism is provided in Figure 1. Here the solid lines represent the possible sources from which information of good solutions can be obtained by population Xs in a single iteration. The dotted lines indicate possible paths of information flow between the other populations.

Three additional variations on the VEDEPSO algorithm were also implemented in this paper. VEDEPSO(2) selects a random individual k from the neighbouring swarm, instead of the gbest individual, for information exchange purposes. The idea is to reduce the rate of information propagation through the different populations, resulting in slower convergence to the optimal Pareto front.

The velocity update of the PSO-based population changesto

vijs(t + 1) = wvijs(t) + c1r1j(x̂ijs(t) − xijs(t)) (12)
+ c2r2j(x̂kjms(t) − xijs(t)), (13)

and the differential mutation operator becomes

cijs = x̂kjms(t) + F (xi1js(t) − xi3js(t)). (14)

VEDEPSO(3) uses Equations (8) and (9) for updating, respectively, the PSO and DE-based populations. In other words, the standard VEPSO velocity updates and standard uni-objective DE updates are used. However, the crossover operator is applied using the gbest individual of the neighbouring swarm. Similarly, VEDEPSO(4) investigates the impact of using a randomly selected pbest position from the neighbouring swarm for the purposes of applying the crossover operator.

VI. EMPIRICAL EVALUATION

To investigate the contribution of the VEDEPSO algorithm, an empirical comparison between the VEDEPSO and

Fig. 1. A graphical comparison of the information exchange strategies of the VEPSO and VEDE algorithms with the VEDEPSO information exchange strategy. (Figure omitted. Legend: solid lines indicate the information flow associated with swarm Xs; dotted lines indicate other information flows.)

the classical VEDE and VEPSO algorithms was conducted. This section describes the benchmark problem set, algorithm control parameters and other experimental conditions before the results of the algorithm comparison are presented and discussed.

For performance evaluation purposes, five multi-objective benchmark problems were taken from the work of Zitzler et al. [18] to correspond with the benchmark problem set which Parsopoulos et al. [9] used during the development of the VEPSO algorithm.

For the simulations in this section, the algorithm control parameters were selected based on values used by Parsopoulos et al. [9]. The number of individuals in a population, or particles in a swarm, is denoted by ns in Table I, and m −→ n indicates that the associated parameter is decreased linearly from m to n over 95% of the total number of iterations, Imax. Furthermore, all individuals were initialized in the interval [0, 1] and an archive of unlimited size [4] was used to store all non-dominated solutions as the algorithms progressed



Algorithm 2: The vector evaluated DEPSO (VEDEPSO) algorithm (VEDEPSO(1))

 1  Initialize two swarms, X1 and X2
 2  Initialize the external archive as an empty set
 3  t = 1
 4  while t < Imax do
 5      for all swarms s do
 6          // Evaluate the fitness function of each swarm w.r.t. its allocated objective
 7          f(x̂s(t)) = mini{fs(xs(t))}
 8          for all individuals i in swarm Xs do
 9              if fs(xis(t)) < fs(x̂is(t)) then
10                  // Set the personal best position
11                  x̂is(t) = xis(t)
12              end
13          end
14          if mini{fs(x̂s(t))} < fs(x∗s(t)) then
15              // Set the global best position of each swarm w.r.t. its allocated objective
16              f(x∗s(t)) = mini{fs(x̂s(t))}
17          end
18          Update the external archive to include all non-dominated solutions in Xs
19      end
20      Update all particles of swarm X1 using the standard VEPSO updates defined in Equation (8)
21      Update all particles of swarm X2 using the standard VEDE updates defined in lines 16 to 30 of Algorithm 1, but replace x∗js with x∗jms
22      t = t + 1
23  end

towards the Pareto front.

TABLE I
VEPSO AND VEDE CONTROL PARAMETERS

Parameter                  Value used
General parameters
  ns                       20
  Imax                     200
PSO-specific parameters
  c1                       0.5
  c2                       0.5
  w                        1 −→ 0.4
DE-specific parameters
  pc                       0.9
  F                        0.7

Three performance measures were used in this paper to compare the different vector evaluated strategies. The S-metric [7], [19] measures the size of the region dominated by the Pareto front based on a reference vector consisting of the maximum value in each objective ({1, 15} for the purposes of this paper). The S-metric of the set, PF, with respect to fref can then be described as the Lebesgue integral of the set R(PF, fref), where

R(PF, fref) = ∪f∈PF R(f, fref) (15)

and

R(f, fref) = {f′ | f′ ≺ fref and f ≺ f′, f′ ∈ R^nk} (16)
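For the bi-objective case considered in this paper, the S-metric reduces to a sum of rectangle areas between the front and the reference vector. A minimal sketch (hypothetical function name; assumes minimization and a mutually non-dominated front):

```python
def s_metric_2d(front, f_ref):
    """Area of the region dominated by a two-objective Pareto front with respect
    to reference vector f_ref: sort by the first objective (the second objective
    then descends along a non-dominated front) and accumulate rectangular slices."""
    area, prev_f2 = 0.0, f_ref[1]
    for f1, f2 in sorted(front):
        area += (f_ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area
```

For example, with the reference vector {1, 15} used here, a front consisting of the single point (0.5, 0.5) yields an area of (1 − 0.5)(15 − 0.5) = 7.25.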

The other two measures include the size of the approximated Pareto fronts, measured in terms of the number of non-dominated solutions obtained (N(PF)), as well as the extent of the Pareto fronts (χ(PF)) [2], where

χ(PF) = √( ∑k=1..N(PF) max{|fk(x) − f′k(x)| : f, f′ ∈ PF} ) (17)

The actual results of the algorithm comparison are recorded in Tables II and III, where the VEPSO, VEDE, and VEDEPSO-based algorithm results were each recorded over 30 independent simulation runs. Two example Pareto fronts are also provided in Figures 2 and 3. Throughout the rest of this section, μ and σ respectively denote the mean and standard deviation associated with the corresponding performance measure, and CI0.05 denotes the 95% confidence interval on μ.

Fig. 2. A sample Pareto front obtained by VEDEPSO(1) on F3 (n = 30), where f1 and f2 denote the first and second objective functions to be minimized.

The results indicated that one or more of the VEDEPSO-based algorithms outperformed the VEPSO and VEDE algorithms with respect to all performance metrics for four out of the five problems tested. This finding indicates that a performance improvement can be attributed to the VEDEPSO-based algorithms. The improvement was most significant for function F1, which is characterized by a convex, uniform Pareto front. Here VEDEPSO(2) was the best performing VEDEPSO variation. Superior algorithm performance with respect to the S-metric and the extent of the Pareto fronts obtained indicated that VEDEPSO(2) was capable of generating high quality Pareto fronts of sufficient diversity.

Although the VEDEPSO-based algorithms in most cases outperformed the VEPSO and VEDE algorithms, no clear conclusion could be drawn with respect to the superiority of the different versions of the VEDEPSO algorithm.



Fig. 3. A sample Pareto front obtained by VEDEPSO(1) on F5 (n = 30), where f1 and f2 denote the first and second objective functions to be minimized.

VEDEPSO(3) was the best performing algorithm for the non-uniform convex function, F2, VEDEPSO(1) performed well on the uniform concave function, F3, and VEDEPSO(4) outperformed all other algorithms on function F4. Comparable performance between VEPSO and the VEDEPSO-based algorithms was obtained on problem F5. However, no significant performance improvement could be attributed to the VEDEPSO algorithm on any of the data sets or with respect to any of the performance measurements. This indicates the existence of a future research opportunity in the development of an improved vector evaluated optimization algorithm for solving problems with separated Pareto fronts.

Another outcome of the algorithm comparison was that the VEDEPSO-based algorithms were always able to significantly improve upon the worst performing vector evaluated algorithm. This is significant since it is not always possible to predict which algorithm will be the best performing algorithm when VEPSO and VEDE are considered. Since VEDEPSO attempts to combine the best characteristics of each algorithm, both problems which are solved more easily by VEDE and problems solved more easily by VEPSO can, on average, be solved more effectively.

VII. CONCLUSION

This paper introduced a new vector evaluated multi-objective optimization algorithm, namely the vector evaluated DEPSO algorithm. This algorithm combines the positive characteristics of the VEPSO and VEDE algorithms by hybridizing these two classical vector evaluated solution strategies. When the VEDEPSO algorithm was compared to Parsopoulos et al.'s VEPSO and VEDE algorithms [9], [10], the VEDEPSO algorithm, on average, outperformed the VEDE and VEPSO algorithms.

Future research opportunities exist in comparing the VEDEPSO algorithm to more sophisticated multi-objective solution strategies and investigating VEDEPSO performance on multi-objective problems with more than two objective functions.

REFERENCES

[1] H. A. Abbass, R. Sarker, and C. Newton, “PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems,” Proceedings of the IEEE Congress on Evolutionary Computation, pp. 971–978, 2001.

[2] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence. Wiley, 2005.

[3] J. Grobler, Particle swarm optimization and differential evolutionfor multi-objective multiple machine scheduling. Masters thesis,University of Pretoria, 2008.

[4] Y. Jin, M. Olhofer, and B. Sendhoff, “Dynamic weighted aggregationfor evolutionary multi-objective optimization: why does it work andhow?,” Proceedings of the 2001 conference on genetic and evolution-ary computation, pp. 1042–1049, 2001.

[5] J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceed-ings of the IEEE International Confererence on Neural Networks,vol. 4, pp. 1942–1948, 1995.

[6] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. MorganKaufmann Publishers, 2001.

[7] J. D. Knowles, Local-search and hybrid evolutionary algorithms forpareto optimization. PhD thesis, University of Reading, 2002.

[8] N. K. Madavan, “Multi-objective optimization using a pareto dif-ferential evolution approach,” Proceedings of the IEEE Congress onEvolutionary Computation, pp. 1145–1150, 2002.

[9] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimizationmethod in multi-objective problems,” Proceedings of the 2002 ACMsymposium on applied computing, pp. 603–607, 2002.

[10] K. E. Parsopoulos, D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos,and M. N. Vrahatis, “Vector evaluated differential evolution formulti-objective optimization,” Proceedings of the IEEE congress onevolutionary computation, pp. 204–211, 2004.

[11] R. L. Rardin, Optimization in Operations Research. Springer, 2004.[12] M. Reyes-Sierra and C. A. Coello Coello, “Multi-objective particle

swarm optimizers: a survey of the state-of-the-art,” InternationalJournal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006.

[13] Schaffer, Multiple objective optimization with vector evaluated GeneticAlgorithms. PhD thesis, Vanderbilt University, 1984.

[14] R. Storn and K. Price, “Differential evolution — a simple and efficientheuristic for global optimization over continuous spaces,” Journal ofGlobal Optimization, vol. 11, pp. 341–359, 1997.

[15] V. T’Kindt and J. C. Billaut, Multicriteria Scheduling (Theory, Modelsand Algorithms). Springer, 2002.

[16] F. Van den Bergh and A. P. Engelbrecht, “A new locally convergentparticle swarm optimiser,” Proceedings of the IEEE InternationalConference on Systems, Man and Cybernetics, vol. 3, pp. 6–12, 2002.

[17] F. Xue, A. C. Sanderson, and R. J. Graves, “Pareto-based multi-objective differential evolution,” Proceedings of the IEEE Congresson Evolutionary Computation, pp. 862–869, 2003.

[18] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multi-objectiveevolutionary algorithms: empirical results,” Evolutionary computation,vol. 8, no. 2, pp. 173–195, 1999.

[19] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G.da Fonseca, “Performance assessment of multi-objective optimizers: ananalysis and review,” IEEE Transactions on evolutionary computation,vol. 7, no. 2, pp. 117–132, 2003.

1260 2009 IEEE Congress on Evolutionary Computation (CEC 2009)

2009 IEEE Congress on Evolutionary Computation (CEC 2009), Trondheim, Norway, 18–21 May 2009.

TABLE II

VEDEPSO PERFORMANCE RESULTS

Problem        Metric  Stat       VEPSO     VEDE  VEDEPSO(1)  VEDEPSO(2)  VEDEPSO(3)  VEDEPSO(4)

F1 (n = 30)    S(PF)   μ          10.67    10.32       11.47       12.00       11.47       11.51
                       σ           0.23     0.26        0.27        0.20        0.24        0.27
                       CI_0.05     0.09     0.10        0.10        0.08        0.09        0.10
               N(PF)   μ         109.67    19.50      137.73      125.57      139.13      132.73
                       σ          54.01     3.28       26.31       12.82       16.48       19.70
                       CI_0.05    20.17     1.22        9.82        4.79        6.15        7.35
               χ(PF)   μ           1.01     0.88        1.18        1.24        1.18        1.19
                       σ           0.06     0.09        0.07        0.07        0.06        0.06
                       CI_0.05     0.02     0.03        0.03        0.02        0.02        0.02

F1 (n = 50)    S(PF)   μ          10.33    10.04       10.93       11.45       10.94       10.99
                       σ           0.20     0.22        0.20        0.20        0.19        0.23
                       CI_0.05     0.08     0.08        0.08        0.07        0.07        0.09
               N(PF)   μ         116.50    18.20      138.70      113.73      126.87      127.23
                       σ          62.11     3.31       24.07       11.02       22.88       20.84
                       CI_0.05    23.19     1.23        8.99        4.12        8.54        7.78
               χ(PF)   μ           0.88     0.78        1.02        1.05        1.02        1.02
                       σ           0.08     0.10        0.07        0.07        0.05        0.06
                       CI_0.05     0.03     0.04        0.03        0.03        0.02        0.02

F1 (n = 100)   S(PF)   μ           9.88     9.76       10.31       10.78       10.35       10.30
                       σ           0.17     0.10        0.15        0.14        0.19        0.19
                       CI_0.05     0.06     0.04        0.06        0.05        0.07        0.07
               N(PF)   μ          83.00    17.20      115.57       91.63      119.57      106.23
                       σ          45.29     2.85       22.78        9.75       20.23       19.83
                       CI_0.05    16.91     1.06        8.51        3.64        7.55        7.40
               χ(PF)   μ           0.70     0.63        0.80        0.84        0.82        0.82
                       σ           0.06     0.07        0.05        0.05        0.05        0.06
                       CI_0.05     0.02     0.03        0.02        0.02        0.02        0.02

F2 (n = 30)    S(PF)   μ          10.85    11.17       11.29       10.60       11.26       10.70
                       σ           0.27     0.37        0.40        0.18        0.36        0.23
                       CI_0.05     0.10     0.14        0.15        0.07        0.14        0.09
               N(PF)   μ          16.43     4.90       14.93       10.87       17.43       18.30
                       σ           7.35     2.50        6.34        3.92        5.10        5.34
                       CI_0.05     2.74     0.93        2.37        1.46        1.90        2.00
               χ(PF)   μ           1.06     0.50        1.19        1.30        1.26        1.19
                       σ           0.42     0.30        0.22        0.18        0.19        0.29
                       CI_0.05     0.16     0.11        0.08        0.07        0.07        0.11

F2 (n = 50)    S(PF)   μ          11.57    11.29       11.59       11.51       11.60       11.59
                       σ           0.13     0.20        0.12        0.08        0.16        0.15
                       CI_0.05     0.05     0.07        0.05        0.03        0.06        0.06
               N(PF)   μ          58.07    15.37       65.07       32.90       67.93       65.07
                       σ          30.07     2.57       14.89        5.04       12.12       14.12
                       CI_0.05    11.23     0.96        5.56        1.88        4.53        5.27
               χ(PF)   μ           1.85     1.75        1.88        1.90        1.90        1.83
                       σ           0.10     0.12        0.10        0.07        0.09        0.11
                       CI_0.05     0.04     0.05        0.04        0.03        0.04        0.04

F2 (n = 100)   S(PF)   μ          11.30    11.05       11.36       11.32       11.40       11.36
                       σ           0.22     0.21        0.10        0.06        0.09        0.13
                       CI_0.05     0.08     0.08        0.04        0.02        0.03        0.05
               N(PF)   μ          66.57    15.90       77.77       43.53       80.30       84.10
                       σ          19.65     3.35       22.16        6.96       21.40       16.61
                       CI_0.05     7.34     1.25        8.27        2.60        7.99        6.20
               χ(PF)   μ           1.84     1.72        1.86        1.86        1.84        1.84
                       σ           0.09     0.09        0.06        0.06        0.06        0.06
                       CI_0.05     0.03     0.03        0.02        0.02        0.02        0.02

F3 (n = 30)    S(PF)   μ          10.85    11.17       11.29       10.60       11.26       10.70
                       σ           0.27     0.37        0.40        0.18        0.36        0.23
                       CI_0.05     0.10     0.14        0.15        0.07        0.14        0.09
               N(PF)   μ          16.43     4.90       14.93       10.87       17.43       18.30
                       σ           7.35     2.50        6.34        3.92        5.10        5.34
                       CI_0.05     2.74     0.93        2.37        1.46        1.90        2.00
               χ(PF)   μ           1.06     0.50        1.19        1.30        1.26        1.19
                       σ           0.42     0.30        0.22        0.18        0.19        0.29
                       CI_0.05     0.16     0.11        0.08        0.07        0.07        0.11

F3 (n = 50)    S(PF)   μ          10.44    10.59       10.69       10.27       10.68       10.38
                       σ           0.24     0.32        0.30        0.13        0.24        0.12
                       CI_0.05     0.09     0.12        0.11        0.05        0.09        0.05
               N(PF)   μ          16.43     5.07       18.20       11.50       18.17       18.47
                       σ           5.40     1.80        4.99        3.29        5.62        4.66
                       CI_0.05     2.02     0.67        1.86        1.23        2.10        1.74
               χ(PF)   μ           1.02     0.59        1.17        1.21        1.03        1.14
                       σ           0.32     0.33        0.23        0.17        0.24        0.26
                       CI_0.05     0.12     0.12        0.09        0.06        0.09        0.10


TABLE III

VEDEPSO PERFORMANCE RESULTS

Problem        Metric  Stat       VEPSO     VEDE  VEDEPSO(1)  VEDEPSO(2)  VEDEPSO(3)  VEDEPSO(4)

F3 (n = 100)   S(PF)   μ          10.11    10.12       10.22       10.08       10.18       10.11
                       σ           0.16     0.27        0.14        0.11        0.15        0.10
                       CI_0.05     0.06     0.10        0.05        0.04        0.06        0.04
               N(PF)   μ          18.73     6.30       19.27       10.97       20.97       20.03
                       σ           7.89     2.51        6.16        3.36        6.21        5.31
                       CI_0.05     2.94     0.94        2.30        1.25        2.32        1.98
               χ(PF)   μ           1.07     0.80        1.00        1.06        1.04        1.00
                       σ           0.24     0.37        0.28        0.17        0.23        0.19
                       CI_0.05     0.09     0.14        0.10        0.06        0.09        0.07

F4 (n = 30)    S(PF)   μ          12.90    12.56       12.90       12.78       12.93       12.82
                       σ           0.12     0.32        0.16        0.06        0.15        0.12
                       CI_0.05     0.04     0.12        0.06        0.02        0.06        0.05
               N(PF)   μ          61.27    16.03       65.50       39.07       73.47       73.40
                       σ          21.45     3.18        8.58        6.98       14.36       15.52
                       CI_0.05     8.01     1.19        3.20        2.61        5.36        5.79
               χ(PF)   μ           2.03     1.77        2.11        2.10        2.11        2.13
                       σ           0.10     0.14        0.07        0.07        0.07        0.09
                       CI_0.05     0.04     0.05        0.03        0.03        0.03        0.04

F4 (n = 50)    S(PF)   μ          12.70    12.43       12.70       12.65       12.72       12.75
                       σ           0.10     0.16        0.09        0.05        0.09        0.10
                       CI_0.05     0.04     0.06        0.03        0.02        0.03        0.04
               N(PF)   μ          77.07    17.47       83.57       52.20       89.73       97.13
                       σ          30.41     3.00       18.16        6.84       20.73       21.40
                       CI_0.05    11.35     1.12        6.78        2.55        7.74        7.99
               χ(PF)   μ           1.99     1.80        2.12        2.11        2.10        2.05
                       σ           0.11     0.12        0.08        0.06        0.08        0.07
                       CI_0.05     0.04     0.04        0.03        0.02        0.03        0.03

F4 (n = 100)   S(PF)   μ          12.51    12.32       12.58       12.55       12.59       12.57
                       σ           0.24     0.21        0.07        0.04        0.08        0.07
                       CI_0.05     0.09     0.08        0.03        0.02        0.03        0.03
               N(PF)   μ         104.03    18.37      116.20       71.00      112.90      121.57
                       σ          32.18     3.06       25.22        9.84       25.51       29.23
                       CI_0.05    12.01     1.14        9.42        3.68        9.52       10.91
               χ(PF)   μ           1.95     1.81        2.10        2.08        2.08        2.08
                       σ           0.15     0.13        0.06        0.05        0.07        0.06
                       CI_0.05     0.06     0.05        0.02        0.02        0.03        0.02

F5 (n = 30)    S(PF)   μ          12.07    11.99       12.03       11.92       12.03       11.95
                       σ           0.18     0.39        0.27        0.09        0.21        0.22
                       CI_0.05     0.07     0.14        0.10        0.03        0.08        0.08
               N(PF)   μ          56.10    15.00       58.97       28.57       57.17       56.33
                       σ          24.14     2.48       14.67        6.15       13.31       14.80
                       CI_0.05     9.01     0.93        5.48        2.29        4.97        5.53
               χ(PF)   μ           2.04     1.85        2.04        2.04        2.03        2.05
                       σ           0.12     0.12        0.10        0.10        0.08        0.09
                       CI_0.05     0.05     0.04        0.04        0.04        0.03        0.03

F5 (n = 50)    S(PF)   μ          11.85    11.57       11.79       11.77       11.85       11.75
                       σ           0.16     0.15        0.14        0.09        0.16        0.13
                       CI_0.05     0.06     0.06        0.05        0.04        0.06        0.05
               N(PF)   μ          62.67    16.86       67.33       35.00       71.00       67.60
                       σ          30.29     2.73       13.12        6.64       15.59       13.67
                       CI_0.05    11.31     1.02        4.90        2.48        5.82        5.10
               χ(PF)   μ           2.02     1.81        2.00        2.02        2.03        2.01
                       σ           0.10     0.13        0.07        0.07        0.09        0.08
                       CI_0.05     0.04     0.05        0.03        0.02        0.04        0.03

F5 (n = 100)   S(PF)   μ          11.64    11.33       11.64       11.62       11.66       11.60
                       σ           0.15     0.19        0.08        0.06        0.12        0.09
                       CI_0.05     0.06     0.07        0.03        0.02        0.04        0.03
               N(PF)   μ          93.70    14.97       78.63       47.67       83.00       85.70
                       σ          42.00     1.88       16.43        7.95       14.29       14.91
                       CI_0.05    15.68     0.70        6.14        2.97        5.34        5.57
               χ(PF)   μ           2.00     1.85        2.00        1.99        1.99        2.02
                       σ           0.08     0.08        0.07        0.04        0.08        0.06
                       CI_0.05     0.03     0.03        0.03        0.02        0.03        0.02
