
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 2, APRIL 2009

Self-Adaptive Multimethod Search for Global Optimization in Real-Parameter Spaces

    Jasper A. Vrugt, Bruce A. Robinson, and James M. Hyman

Abstract—Many different algorithms have been developed in the last few decades for solving complex real-world search and optimization problems. The main focus in this research has been on the development of a single universal genetic operator for population evolution that is always efficient for a diverse set of optimization problems. In this paper, we argue that significant advances to the field of evolutionary computation can be made if we embrace a concept of self-adaptive multimethod optimization in which multiple different search algorithms are run concurrently, and learn from each other through information exchange using a common population of points. We present an evolutionary algorithm, entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), that implements this concept of self-adaptive multimethod search. This method simultaneously merges the strengths of the covariance matrix adaptation (CMA) evolution strategy, genetic algorithm (GA), and particle swarm optimizer (PSO) for population evolution and implements a self-adaptive learning strategy to automatically tune the number of offspring these three individual algorithms are allowed to contribute during each generation. Benchmark results in 10, 30, and 50 dimensions using synthetic functions from the special session on real-parameter optimization of CEC 2005 show that AMALGAM-SO obtains similar efficiencies as existing algorithms on relatively simple unimodal problems, but is superior for more complex higher dimensional multimodal optimization problems. The new search method scales well with increasing number of dimensions, converges in the close proximity of the global minimum for functions with noise-induced multimodality, and is designed to take full advantage of the power of distributed computer networks.

Index Terms—Adaptive estimation, elitism, genetic algorithms, nonlinear estimation, optimization.

    I. INTRODUCTION

MANY real-world search and optimization problems require the estimation of a set of model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal tradeoff values in the case of two or more conflicting objectives. Locating global optimal solutions is often painstakingly tedious, especially in the presence of high dimensionality, nonlinear parameter interaction, insensitivity, and multimodality of the objective function. These conditions make it very difficult for any search algorithm to find high-quality solutions quickly without getting

Manuscript received September 26, 2007; revised December 08, 2007, February 01, 2008, and February 27, 2008. First published August 08, 2008; current version published April 01, 2009. The work of J. A. Vrugt was supported by a J. Robert Oppenheimer Fellowship from the Los Alamos National Laboratory Postdoctoral Program.

The authors are with the Center for Nonlinear Studies (CNLS), Los Alamos National Laboratory (LANL), Los Alamos, NM 87545 USA (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/TEVC.2008.924428

stuck in local basins of attraction when traversing the search space en route to the global optimum. Unfortunately, these difficulties are frequently encountered in real-world search and optimization problems. In this paper, we consider single-objective optimization problems with $n$ decision variables (parameters), and in which the parameter search space $\mathcal{X}$, although perhaps quite large, is bounded. We denote $\mathbf{x} = \{x_1, \ldots, x_n\}$ as the decision vector, and $f(\mathbf{x})$ as the associated objective function for a given function or model. Throughout this paper, we focus on minimization problems

$$\min_{\mathbf{x} \in \mathcal{X} \subseteq \mathbb{R}^n} f(\mathbf{x}) \qquad (1)$$

In the last few decades, many different algorithms have been developed to find the minimum of the objective function in (1). Of these, evolutionary algorithms have emerged as a revolutionary approach for solving complex search and optimization problems. These methods are heuristic search algorithms and implement analogies to physics and biology to evolve a population of potential solutions through the parameter space to the global minimum. Beyond their ability to search enormously large spaces, these algorithms possess the ability to maintain a diverse set of solutions and exploit similarities of solutions by recombination. In this context, four different approaches have found widespread use: i) self-adaptive evolution strategies [3], [19], [34]; ii) real-parameter genetic algorithms [10], [11], [24]; iii) differential evolution methods [37]; and iv) particle swarm optimization (PSO) algorithms [13], [26]. These algorithms share a number of common elements, and show similarities in search principles on certain fitness landscapes [6], [27].

Despite this progress, the current generation of optimization algorithms usually implements a single genetic operator for population evolution. For example, the majority of papers published in the proceedings of the recent special session on real-parameter optimization at CEC-2005, Edinburgh, U.K., describe methods of optimization that utilize only a single operator for population evolution. Exceptions include contributions that use simple self-adaptive [43] or memetic algorithms combining global and local search in an iterative fashion [29], [31]. Reliance on a single biological model of natural selection and adaptation presumes that a single method exists that can efficiently evolve a population of potential solutions through the parameter space and work well for a diverse set of problems. However, existing theory and numerical benchmark experiments have demonstrated that it is impossible to develop a single, universal algorithm for population evolution that is always efficient for a diverse set of optimization problems [42]. This is because the nature of the fitness landscape (objective function mapped out as function of $\mathbf{x}$) can vary considerably between different optimization problems, and perhaps more importantly, can change shape en route to the global optimal solution.

In a recent paper [41], we have introduced a new concept of self-adaptive multimethod evolutionary search. This approach, entitled A Multialgorithm Genetically Adaptive Multiobjective (AMALGAM) method, employs a diverse set of optimization algorithms simultaneously for population evolution, adaptively favoring individual algorithms that exhibit the highest reproductive success during the search. By adaptively changing preference to individual search algorithms during the course of the optimization, AMALGAM has the ability to quickly adapt to the specific difficulties and peculiarities of the optimization problem at hand. Synthetic multiobjective benchmark studies covering a diverse set of problem features have demonstrated that AMALGAM significantly improves the efficiency of evolutionary search, approaching a factor of 10 improvement over other available methods.

In this paper, we extend the principles and ideas underlying AMALGAM to single objective real-parameter optimization. We present an evolutionary search algorithm, called A Multialgorithm Genetically Adaptive Method for Single Objective Optimization (AMALGAM-SO), which simultaneously utilizes the strengths of various commonly used algorithms for population evolution, and implements a self-adaptive learning strategy to favor individual algorithms that demonstrate the highest reproductive success during the search. In this paper, we consider the Covariance Matrix Adaptation (CMA) evolution strategy, Genetic Algorithm (GA), Particle Swarm Optimizer (PSO), Differential Evolution (DE), and Parental-Centric Recombination Operator (PCX) for population evolution. To test the numerical performance of AMALGAM-SO, a wide range of experiments is conducted using a selected set of standard test functions from the special session on real-parameter optimization of the IEEE Congress on Evolutionary Computation, CEC 2005. A comparison analysis against state-of-the-art single search operator, memetic, and multimethod algorithms is also included.

The remainder of this paper is organized as follows. Section II provides the rationale for simultaneous multimethod search with genetically or self-adaptive updating, and presents a detailed algorithmic description of the AMALGAM-SO method. Section III presents a new species selection mechanism to maintain useful population diversity during the search. After discussing the stopping criteria of the AMALGAM-SO algorithm in Section IV, the experimental procedure used to test the performance of this new method is presented in Section V. In Section VI, simulation results of AMALGAM-SO are presented in dimensions 10, 30, and 50 using the test suite of benchmark functions from CEC 2005. Here we focus specifically on which combination of algorithms to include in AMALGAM-SO. A performance comparison with a large number of other evolutionary algorithms is also included in this section, including an analysis of the scaling behavior of AMALGAM-SO, and a case study illustrating the performance of the method on a test function with noise-induced multimodality. Finally, a summary with conclusions is presented in Section VII.

II. TOWARDS A SELF-ADAPTIVE MULTIMETHOD SEARCH CONCEPT

Much research in the optimization literature has focused on the development of a universal operator for population diversity that is always efficient for a large range of problems. Unfortunately, the No Free Lunch Theorem (NFL) of [42] and the outcome of many different performance studies ([22] amongst many others) have unambiguously demonstrated that it is impossible to develop a single search operator that is always most efficient on a large range of problems. The reason for this is that different fitness landscapes require different search approaches. In this paper, we embrace a concept of self-adaptive multimethod optimization, in which the goal is to develop a combination of search algorithms that have complementary properties and learn from each other through a common population of points to efficiently handle a wide variety of response surfaces. We will show that NFL also holds for this approach, but that self-adaptation has clear advantages over other search approaches when confronted with complex multimodal and noisy optimization problems.

The use of multiple methods for population evolution has been studied before. For instance, memetic algorithms (also called hybrid genetic algorithms) have been proposed to increase the search efficiency of population based optimization algorithms [23]. These methods are inspired by models of adaptation in natural systems, and typically use a genetic algorithm for global exploration of the search space (although recently PSO algorithms have also been used [29]), combined with a local search heuristic for exploitation. Memetic algorithms do implement multimethod search but do not genetically update preference to individual algorithms during the search, which is a potential source of inefficiency of the method. Closer to the framework we propose in this paper is the self-adaptive DE (SaDE) algorithm developed by [43]. This method implements two different DE learning strategies and updates their weight in the search based on their previous success rate. Unfortunately, initial results of SaDE are not convincing, achieving similar performance on the CEC-2005 test suite of functions as many other evolutionary approaches that do not implement a self-adaptive learning strategy.

Other approaches that use ideas of multimethod search are the agent based memetic algorithm proposed by [5] and the ISSOP commercial software package developed by Dualis GmbH IT Solution. The work presented in [5] considers an agent based memetic algorithm. In a lattice-like environment, each of the agents represents a candidate solution of the problem. The agents cooperate and compete through neighborhood orthogonal crossover and different types of life span learning to solve a constrained optimization problem with a suitable constraint handling technique. This approach is quite different from what we will present in this paper. The ISSOP software package uses several optimization strategies in parallel within a neural learn and adaption system. This work has been patented and, unfortunately, it is difficult to find detailed information about how ISSOP exactly works, and whether it performs self-adaptation at all. Moreover, we do not have access to the source code, which severely complicates comparison against our method.

In this paper, we propose a hybrid optimization framework that implements the ideas of simultaneous multimethod search with genetically adaptive (or self-adaptive) offspring creation to conduct an efficient search of the space of potential solutions. Our framework considers multiple different evolutionary search strategies simultaneously, implements a restart strategy with increasing population size, and uses self-adaptation to favor individual search operators that exhibit the highest reproductive success. Moreover, we consider both stochastic and deterministic algorithms to reduce the risk of becoming stuck in a single region of attraction.

Fig. 1. Flowchart of the AMALGAM-SO optimization method. The various symbols are explained in the text.

If we adopt this concept of genetically adaptive multimethod search, then we need to address a number of questions that naturally develop when using different strategies simultaneously for population evolution. First, which algorithms should be used in the search? Preferably, the chosen set of algorithms should be mutually consistent and complementary so that an optimal synergy will develop between the different search operators, with an efficient optimization as result. Second, how do we enable communication between individual algorithms, so that they can efficiently learn from each other? Communication facilitates information exchange and is the key to improving efficiency. Finally, how do we determine the weight of each individual algorithm in the search? Hence, if efficiency and robustness are the main goals, then better performing search methods should be able to contribute more offspring during population evolution. The next few sections will address each of these issues.

Fig. 2. Pseudo-code of the initial sampling strategy. This method creates an initial population $P_0$ in which the individuals exhibit a maximum Euclidean distance from each other. Calculations reported in this paper use a sample size of 10 000 individuals. A good choice for the population size $N$ depends on the dimensionality of the problem. Recommendations are made in Section V.
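Because the initial sampling strategy of Fig. 2 is available only as an image, the following Python sketch reproduces its idea under stated assumptions: draw a large Latin hypercube sample (10 000 points in the paper) and greedily retain the $N$ points that maximize their mutual Euclidean distance. The function names are illustrative, not the authors' MATLAB code.

```python
import numpy as np

def lhs(num_samples, lower, upper, rng):
    """Latin hypercube sample of num_samples points in [lower, upper]^n."""
    n = lower.size
    # One point per stratum per dimension, with independently shuffled strata
    u = (rng.permuted(np.tile(np.arange(num_samples), (n, 1)), axis=1).T
         + rng.random((num_samples, n))) / num_samples
    return lower + u * (upper - lower)

def max_distance_subset(sample, pop_size, rng):
    """Greedily pick pop_size points with large mutual Euclidean distance."""
    chosen = [int(rng.integers(sample.shape[0]))]      # random seed point
    for _ in range(pop_size - 1):
        d = np.min(np.linalg.norm(
            sample[:, None, :] - sample[chosen], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))               # farthest from chosen set
    return sample[chosen]

rng = np.random.default_rng(1)
lower, upper = np.full(10, -100.0), np.full(10, 100.0)
P0 = max_distance_subset(lhs(10_000, lower, upper, rng), pop_size=10, rng=rng)
```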

    A. Multimethod Search: Basic Algorithm Description

The multimethod algorithm developed in the present study employs a population-based elitism search strategy to find the global optimum of a predefined objective function. The method is entitled A Multialgorithm Genetically Adaptive Method for Single Objective Optimization, or AMALGAM-SO, to evoke the image of a procedure that blends the best attributes of individual search algorithms. The basic AMALGAM-SO algorithm is presented in Fig. 1, and is described below. The source code of AMALGAM-SO is written in MATLAB and can be obtained from the first author upon request.

The algorithm is initiated using a random initial population of size $N$ generated by implementing a maximum Euclidean distance Latin hypercube sampling (LHS) method. This sampling method is described in Fig. 2, and strategically places a sample of points within a predefined search space. Then, the fitness value (objective function) of each point is computed, and a population of offspring of size $N$ is generated by using the multimethod search concept that lies at the heart of the AMALGAM-SO method. Instead of using a single operator for reproduction, we simultaneously utilize $q$ individual algorithms to generate the offspring $Q^g$. These algorithms each create a prespecified number of offspring points, $\{N_1^g, \ldots, N_q^g\}$, from $P^g$ using different adaptive procedures, where $g$ denotes the generation number. In practice, this means that each individual search operator has access to exactly the same genetic material contained in the individuals of the population, but likely produces different offspring because of different approaches for evolution. After creation of the offspring, the fitness value is evaluated, and a combined population $R^g = P^g \cup Q^g$ of size $2N$ is created. Then, $R^g$ is sorted in order of decreasing fitness value. Finally, members for the next generation $P^{g+1}$ are chosen using the new species selection mechanism described in Section III. Elitism is ensured because the best solution found so far will always be included in $P^{g+1}$. This approach continues until one of the stopping criteria is satisfied, at which time the search with AMALGAM-SO is terminated and the method is restarted with an increased population size. Empirical investigations using a selected set of standard test functions from CEC 2005 demonstrated that a population doubling exhibits the best performance in terms of efficiency and robustness. Conditions for restarting the algorithm will be discussed in a later section. Note that AMALGAM-SO is set up in such a way that it can easily be parallelized on a distributed computer network, so that the method can be used to calibrate complex models that require significant time to run.

Fig. 3. Schematic overview of the adaptive learning strategy of $\{N_{\text{CMA}}, N_{\text{GA}}, N_{\text{PSO}}\}$ in AMALGAM-SO for an illustrative example with three different search operators: the CMA evolution strategy, GA, and PSO algorithms. The various panels depict (a) $f_{\min}$: the evolution of the minimum objective function value, (b) $\Delta f_{\min}$: function value improvement, i.e., the difference between two subsequent values of $f_{\min}$, and (c) which search operator in AMALGAM-SO has caused what respective fitness improvement. The information contained in panels (b) and (c) is post-processed after termination of each optimization run with AMALGAM-SO and used to update $\{N_{\text{CMA}}, N_{\text{GA}}, N_{\text{PSO}}\}$ for the next restart run.
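The generation loop just described can be condensed into the following sketch. The placeholder operator and the plain truncation selection stand in for the actual CMA/GA/PSO offspring rules and for the species selection of Section III; everything here is illustrative, not the authors' implementation.

```python
import numpy as np

def random_walk_operator(P, fP, k, rng):
    """Placeholder search operator: perturbs random parents. Stands in for
    the CMA/GA/PSO offspring rules described in the following sections."""
    parents = P[rng.integers(len(P), size=k)]
    return parents + 0.1 * rng.standard_normal(parents.shape)

def one_run(f, P, operators, counts, n_gen, rng):
    """One AMALGAM-SO run: operator i contributes counts[i] children per
    generation; parents and children compete for the N survivor slots."""
    N = len(P)
    fP = np.apply_along_axis(f, 1, P)
    for _ in range(n_gen):
        Q = np.vstack([op(P, fP, k, rng) for op, k in zip(operators, counts)])
        fQ = np.apply_along_axis(f, 1, Q)
        R, fR = np.vstack([P, Q]), np.concatenate([fP, fQ])  # pool of size 2N
        keep = np.argsort(fR)[:N]      # elitist truncation; the species
        P, fP = R[keep], fR[keep]      # selection of Section III refines this
        # ... per-operator improvements would be logged here for the update (2)
    return P, fP

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x**2))
P = rng.uniform(-5, 5, size=(10, 10))
P, fP = one_run(sphere, P, [random_walk_operator] * 3, [4, 3, 3], 50, rng)
print(fP.min())
```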

Before collecting any information about the response surface, AMALGAM-SO starts out in the first run with user-defined values of $\{N_1, \ldots, N_q\}$. These values are subsequently updated to favor individual algorithms that exhibit the highest reproductive success. In principle, various approaches to updating $\{N_1, \ldots, N_q\}$ could be employed based on different measures of success of individual algorithms. For example, one could think of criteria that directly measure fitness value improvement (exploitation), and diversity (exploration), or combinations thereof. Moreover, we could update $\{N_1, \ldots, N_q\}$ after each generation within an optimization run, or only once before each restart run, post-processing the information collected previously. The approach developed here is to update $\{N_1, \ldots, N_q\}$ using information from the previous run. The way this is done is summarized in Fig. 3, for an illustrative example with three different operators for population evolution in our multimethod optimization concept.

The first panel depicts the evolution of the best fitness value, $f_{\min}$, as function of the number of generations with AMALGAM-SO. The curve shows a rapid initial decline, and stabilizes around 300 generations, demonstrating approximate convergence to a region with highest probability. The middle panel depicts the absolute function value improvement (difference between two subsequent objective function values presented in the top panel), while the bottom panel illustrates which of the three algorithms in our multimethod search caused the various fitness improvements. From this example, it is obvious that the PSO and CMA algorithms have contributed most to minimization of the objective function, and hence should be allowed to contribute more offspring during population evolution in the next restart run. The information contained in Fig. 3 is stored in memory, and postprocessed after termination of each run with AMALGAM-SO. A two-step procedure is used to update $\{N_{\text{CMA}}, N_{\text{GA}}, N_{\text{PSO}}\}$. In the first step, we compute how much each individual algorithm contributed to the overall improvement in objective function. We denote this as $\{\Delta f_{\text{CMA}}, \Delta f_{\text{GA}}, \Delta f_{\text{PSO}}\}$ within this illustrative example. Then, we update $\{N_{\text{CMA}}, N_{\text{GA}}, N_{\text{PSO}}\}$ according to

$$N_i = N \cdot \frac{\Delta f_i / N_i}{\sum_{j} \Delta f_j / N_j}, \qquad i, j \in \{\text{CMA}, \text{GA}, \text{PSO}\} \qquad (2)$$


The learning mechanism in (2) ensures that the most productive search algorithms in AMALGAM-SO are rewarded in the next restart run by allowing them to generate more offspring. This method should result in the fastest possible decline of the objective function, as algorithms that contribute most to the reduction of the objective function will receive more emphasis during population evolution. There are, however, other potential metrics that could be used to determine the success and thus weighting of the individual search methods within AMALGAM-SO. Our simulation results using the CEC-2005 testbed of problems demonstrate, however, that the suggested approach works well for a range of problems.

To avoid the possibility of inactivating algorithms that may contribute to convergence in future generations, minimum values for $\{N_1, \ldots, N_q\}$ are assigned. These minimum values depend on the particular algorithm and population size used, and will be discussed later. To further increase computational efficiency, AMALGAM-SO explicitly includes the best member found in the previous run in the population of the next restart run, replacing one random member of the initial LHS sample. However, to avoid possible stagnation of the search, this approach is taken only if this member is better (in relative terms, by at least a prescribed margin) than the minimum function value obtained from the previous run.
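A minimal Python sketch of this learning step, assuming the per-offspring reward of (2) together with the minimum-contribution floors described above; the renormalization details are our own and only illustrative.

```python
import numpy as np

def update_offspring_counts(delta_f, counts, N, n_min):
    """Redistribute the N offspring slots according to each algorithm's
    objective-function improvement per offspring in the previous run
    (cf. (2)), while enforcing the per-algorithm minima n_min."""
    delta_f = np.asarray(delta_f, dtype=float)
    counts = np.asarray(counts, dtype=float)
    score = delta_f / counts                 # improvement per offspring
    if score.sum() == 0.0:                   # no progress: keep current split
        new = counts
    else:
        new = N * score / score.sum()
    new = np.maximum(new, n_min)             # keep every algorithm alive
    # round and renormalize; may be off N by one, a production version
    # would repair the total exactly
    return np.round(new * N / new.sum()).astype(int)

# e.g., CMA improved most per child, so it earns more slots next restart:
print(update_offspring_counts([8.0, 1.0, 3.0], [4, 3, 3],
                              N=10, n_min=[2, 1, 1]))   # -> [6 1 3]
```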

    B. Multimethod Search: Which Search Algorithms to Include?

In principle, the AMALGAM-SO method is very flexible and can accommodate any biological or physical model for population evolution. This implies that the number of search algorithms that potentially could be included in AMALGAM-SO is very large. For illustrative purposes, we here consider only the most popular and commonly used, general-purpose, evolutionary optimization algorithms. These methods are: 1) the covariance matrix adaptation (CMA) evolution strategy; 2) genetic algorithm (GA); 3) parental-centric recombination operator (PCX); 4) particle swarm optimization (PSO); and 5) differential evolution (DE). However, note that the AMALGAM-SO method is not limited to these search strategies. Search approaches that seek iterative improvement from a single starting point in the search domain, such as quasi-Newton and Simplex search, can also be used. These methods are frequently utilized in memetic algorithms, but have the disadvantage of requiring sequential evaluations of the objective function, complicating implementation and efficiency of AMALGAM-SO on distributed computer networks. Moreover, these search techniques are more likely to get stuck in a local solution if the objective function contains a large number of local optima. In the following five sections, we briefly describe each of the individual evolutionary search strategies used as potential candidates in AMALGAM-SO. Detailed descriptions of these algorithms, including a discussion of their algorithmic parameters, can be found in the cited references. Note that we made no attempt to optimize the algorithmic parameters in each of these individual search methods: the values described in the cited references are used in AMALGAM-SO without adjustment.

1) CMA Evolution Strategy (ES): The CMA is an evolution strategy that adapts a full covariance matrix of a normal search (mutation) distribution [18]–[20]. The CMA evolution strategy begins with a user-specified initial population of $\lambda$ individuals. After evaluating the objective function, the best $\mu$ individuals are selected as parental vectors, and their center of mass is computed using a prespecified weighting scheme, $\langle \mathbf{x} \rangle_w = \sum_{i=1}^{\mu} w_i \mathbf{x}_{i:\lambda}$, where the weights $w_i$ are positive, and sum to one. In the literature, several different weighting schemes have been proposed for recombination. For instance, one approach is to assign equal weighting to each of the $\mu$ selected individuals, irrespective of their fitness. This is termed intermediate recombination. Other schemes implement a weighted recombination method to favor individuals with a higher fitness. After selection and recombination, an updated population is created according to

$$\mathbf{x}_k^{(g+1)} = \langle \mathbf{x} \rangle_w^{(g)} + \sigma^{(g)} \mathbf{B}^{(g)} \mathbf{D}^{(g)} \mathbf{z}_k^{(g+1)}, \qquad k = 1, \ldots, \lambda \qquad (3)$$

where the $\mathbf{z}_k$ are independent realizations of an $n$-dimensional standard normal distribution with zero mean and a covariance matrix equal to the identity matrix $\mathbf{I}$. These base points are rotated and scaled by the eigenvectors $\mathbf{B}$ and the square root of the eigenvalues $\mathbf{D}$ of the covariance matrix $\mathbf{C}$. The covariance matrix $\mathbf{C}$ and the global step-size $\sigma$ are continuously updated after each generation. This approach results in a strategy that is invariant against any linear transformation of the search space. Equations for initializing and updating the strategy parameters are given in [21]. On convex-quadratic functions, the adaptation mechanisms for $\sigma$ and $\mathbf{C}$ allow log-linear convergence to be achieved after an adaptation time which can scale between $0$ and $n^2$.

The default strategy parameters are given in [2] and [21]. Only the initial center of mass $\langle \mathbf{x} \rangle^{(0)}$ and the initial step-size $\sigma^{(0)}$ have to be derived independently of the problem. In all calculations presented herein, a weighted recombination scheme is used, $w_i \propto \ln(\mu + 1) - \ln(i)$, where $i$ is the index of order of the $\mu$ best individuals per generation, sorted in ascending order with respect to their fitness. In addition, $\langle \mathbf{x} \rangle^{(0)}$ is derived from the initial Latin hypercube sample, and the initial step-size $\sigma^{(0)}$ is set to a fixed fraction of $(\mathbf{x}^{U} - \mathbf{x}^{L})$, where $\mathbf{x}^{U}$ and $\mathbf{x}^{L}$ represent the upper and lower boundaries of the feasible search space.
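One CMA sampling-and-recombination generation per (3) can be sketched as follows. The step-size and covariance path updates of [21] are deliberately omitted, so this is an illustration of the sampling equation only, not a complete CMA-ES.

```python
import numpy as np

def cma_sample_step(mean, sigma, C, lam, mu, f, rng):
    """One generation of (3): sample lam points from N(mean, sigma^2 C),
    then recombine the best mu with log-decreasing weights."""
    n = mean.size
    eigvals, B = np.linalg.eigh(C)            # C = B diag(eigvals) B^T
    D = np.sqrt(np.maximum(eigvals, 0.0))
    Z = rng.standard_normal((lam, n))         # z_k ~ N(0, I)
    X = mean + sigma * Z @ np.diag(D) @ B.T   # rotate and scale base points
    order = np.argsort([f(x) for x in X])[:mu]
    w = np.log(mu + 1.0) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                              # positive weights, sum to one
    new_mean = w @ X[order]                   # weighted center of mass
    return X, new_mean

rng = np.random.default_rng(0)
X, m = cma_sample_step(np.zeros(5), 0.5, np.eye(5), lam=10, mu=5,
                       f=lambda x: float(np.sum(x**2)), rng=rng)
```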

2) Genetic Algorithm (GA): Genetic algorithms (GAs) were formally introduced in the 1970s by John Holland at the University of Michigan. Since its introduction, the method has found widespread implementation and use, and has successfully solved optimization and search problems in many fields of study. The GA is an adaptive heuristic search algorithm that mimics the principles laid down by Charles Darwin in his evolution theory, and successively applies genetic operators such as selection, crossover (recombination), and mutation to iteratively improve the fitness of a population of points and find the global optimum in the search space. While the crossover operation leads to a mixing of genetic material in the offspring, no new allelic material is introduced. This can lead to lack of population diversity and eventually "stagnation," in which the population converges to the same, non-optimal solution. The GA mutation operator helps to increase population diversity by introducing new genetic material.

In all GA experiments reported in this paper, a tournament selection scheme is used, with uniform crossover and polynomial mutation. Tournament selection involves picking a number of strings at random from the population to form a "tournament" pool. The two strings of highest fitness are then selected as parents from this tournament pool. Uniform crossover creates a random binary vector and selects the genes where the vector is a 1 from the first parent, and the genes where the vector is a 0 from the second parent, to generate offspring. To increase population diversity, and to enable the search to escape from local optimal solutions, the method of polynomial mutation, described in detail in [9], is implemented. The crossover probability $p_c$ is set to 0.90; the mutation probability $p_m$ and the mutation distribution index $\eta_m$ are set to the default values recommended in [9].
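The three GA operators named above admit a compact sketch. The parameter values are left to the caller as placeholders for the defaults of [9], and the mutation shown is a simplified form of polynomial mutation in the spirit of that reference.

```python
import numpy as np

def tournament(P, fP, size, rng):
    """Pick `size` random strings; return the two fittest as parents."""
    pool = rng.choice(len(P), size=size, replace=False)
    best = pool[np.argsort(fP[pool])[:2]]
    return P[best[0]], P[best[1]]

def uniform_crossover(p1, p2, pc, rng):
    """With probability pc, mix genes of the parents via a random binary mask."""
    if rng.random() > pc:
        return p1.copy()
    mask = rng.random(p1.size) < 0.5
    return np.where(mask, p1, p2)

def polynomial_mutation(x, lower, upper, pm, eta, rng):
    """Simplified polynomial mutation (cf. [9]): small, bounded perturbations
    whose spread is controlled by the distribution index eta."""
    y = x.copy()
    for j in range(x.size):
        if rng.random() < pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[j] = np.clip(y[j] + delta * (upper[j] - lower[j]),
                           lower[j], upper[j])
    return y
```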

3) Parental-Centric Recombination Operator (PCX): The PCX operator has recently been proposed by [11] as an efficient recombination operator for creating offspring from an existing population of points within the context of GAs. Systematic studies have shown that this parent centric recombination is a good alternative to other commonly used crossover operators, and provides a meaningful and efficient way of solving real-parameter optimization problems.

The method starts by randomly initializing a population of points using Latin hypercube sampling. After this, the fitness of each individual is calculated and $\mu$ parents are randomly selected from the existing population of points. The mean vector $\mathbf{g}$ of these $\mu$ parents is computed, and one parent $\mathbf{x}^{(p)}$ from this set is chosen with equal probability. The direction vector $\mathbf{d}^{(p)} = \mathbf{x}^{(p)} - \mathbf{g}$ is then computed to measure the distance and angle between the selected parent and $\mathbf{g}$. From the remaining $\mu - 1$ parents, perpendicular distances $D_i$ to the line $\mathbf{d}^{(p)}$ are computed, and their average $\bar{D}$ is calculated. The offspring $\mathbf{y}$ is created as follows:

$$\mathbf{y} = \mathbf{x}^{(p)} + w_\zeta\, \mathbf{d}^{(p)} + \sum_{i=1,\, i \neq p}^{\mu} w_\eta\, \bar{D}\, \mathbf{e}^{(i)} \qquad (4)$$

where the $\mathbf{e}^{(i)}$ denote the orthonormal bases that span the subspace perpendicular to $\mathbf{d}^{(p)}$. The parameters $w_\zeta$ and $w_\eta$ are zero-mean, normally distributed variables with variance $\sigma_\zeta^2$ and $\sigma_\eta^2$, respectively.

In contrast to other crossover operators, the PCX operator assigns a higher probability for an offspring to remain closer to the parents than away from the parents. This should result in an efficient and reliable search strategy, especially in the case of continuous optimization problems. In all our PCX runs, we have used the default values of $\sigma_\zeta$ and $\sigma_\eta$ recommended in [11], and $\mathbf{x}^{(p)}$ is set to be the best parent in the population.
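A sketch of a single PCX offspring per (4). For simplicity it builds the orthonormal bases $\mathbf{e}^{(i)}$ from a QR decomposition and uses $\mu - 1$ of them, which is a mild simplification of the construction in [11]; illustrative only.

```python
import numpy as np

def pcx_offspring(parents, p, sigma_zeta, sigma_eta, rng):
    """PCX recombination (4): the child stays near parent x_p, perturbed
    along d = x_p - g and in directions orthogonal to d."""
    mu, n = parents.shape
    g = parents.mean(axis=0)                    # mean vector of the parents
    d = parents[p] - g                          # direction vector d^(p)
    norm_d2 = max(float(d @ d), 1e-300)         # guard a zero direction
    # average perpendicular distance of the other parents to the line g + t*d
    others = np.delete(parents, p, axis=0) - g
    perp = others - np.outer(others @ d, d) / norm_d2
    D_bar = np.linalg.norm(perp, axis=1).mean()
    # orthonormal basis perpendicular to d via QR; keep mu-1 directions
    Q, _ = np.linalg.qr(np.column_stack([d] + list(np.eye(n))))
    E = Q[:, 1:min(mu, n)].T                    # rows are e^(i), all ⟂ d
    w_zeta = rng.normal(0.0, sigma_zeta)
    w_eta = rng.normal(0.0, sigma_eta, size=E.shape[0])
    return parents[p] + w_zeta * d + D_bar * (w_eta @ E)

rng = np.random.default_rng(3)
child = pcx_offspring(rng.normal(size=(3, 5)), p=0,
                      sigma_zeta=0.1, sigma_eta=0.1, rng=rng)
```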

4) Particle Swarm Optimization (PSO): PSO is a population-based stochastic optimization method whose development was inspired by the flocking and swarm behavior of birds and insects. After its introduction in 1995 [13], [26], the method has gained rapid popularity in many fields of study. The method works with a group of potential solutions, called particles, and searches for optimal solutions by continuously modifying this population in subsequent generations. To start, the particles are assigned a random location and velocity in the $n$-dimensional search space. After initialization, each particle iteratively adjusts its position according to its own flying experience, and according to the flying experience of all other particles, making use of the best position encountered by itself, $\mathbf{p}$, and by the entire population, $\mathbf{p}_g$. In contrast to the other search algorithms included in AMALGAM-SO, the PSO algorithm combines principles from local and global search to evolve a population of points toward the global optimal solution. The reproductive operator for creating offspring from an existing population is [13], [26]

$$\mathbf{v} = \omega \mathbf{v} + c_1 r_1 (\mathbf{p} - \mathbf{x}) + c_2 r_2 (\mathbf{p}_g - \mathbf{x}), \qquad \mathbf{x} = \mathbf{x} + \mathbf{v} \qquad (5)$$

where $\mathbf{v}$ and $\mathbf{x}$ represent the current velocity and location of a particle, $\omega$ is the inertia factor, $c_1$ and $c_2$ are weights reflecting the cognitive and social factors of the particle, respectively, and $r_1$ and $r_2$ are uniform random numbers between 0 and 1. Based on recommendations in various previous studies, the values for $c_1$ and $c_2$ were set to 2, and the inertia weight $\omega$ was linearly decreased between 0.9 at the first generation and 0.4 at the end of the simulation. To limit the jump size, the maximum allowable velocity was set to be 15% of the longest axis-parallel interval in the search space [40]. In addition, we implement a circle neighborhood topology approach and replace $\mathbf{p}_g$ in (5) with the best individual found within the closest 10% of each respective particle as measured in the Euclidean space.
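One PSO generation per (5), including the circle-neighborhood replacement of $\mathbf{p}_g$ described above, can be sketched as follows; the inertia weight, cognitive/social factors, and neighbor fraction are passed in by the caller, and the schedule for $\omega$ is left outside the function. Illustrative only.

```python
import numpy as np

def pso_step(X, V, pbest, f_pbest, w, c1, c2, vmax, lower, upper, rng,
             frac=0.1):
    """One PSO generation per (5). The social attractor of each particle is
    the best personal-best among its ~frac nearest neighbors (Euclidean),
    rather than the single global best."""
    N, n = X.shape
    k = max(1, int(frac * N))
    G = np.empty_like(X)
    for i in range(N):
        nearest = np.argsort(np.linalg.norm(X - X[i], axis=1))[:k + 1]
        G[i] = pbest[nearest[np.argmin(f_pbest[nearest])]]
    r1, r2 = rng.random((N, n)), rng.random((N, n))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (G - X)
    V = np.clip(V, -vmax, vmax)              # limit the jump size
    X = np.clip(X + V, lower, upper)
    return X, V
```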

5) Differential Evolution (DE): While traditional evolutionary algorithms are well suited to solve many difficult optimization problems, interactions among decision variables (parameters) introduce another level of difficulty in the evolution. Previous work has demonstrated the poor performance of a number of evolutionary optimization algorithms in finding high quality solutions for rotated problems exhibiting strong interdependencies between parameters [37]. Rotated problems typically require correlated, self-adapting mutation step sizes in order to make timely progress in the optimization.

DE has been demonstrated to be effective in dealing with strong correlation among decision variables, and, like the CMA-ES, exhibits rotationally invariant behavior [37]. DE is a population-based search algorithm that uses fixed multiples of differences between solution vectors of the existing population to create children. In this study, the version known as DE/rand/1/bin or "classic DE" [37], [38] is used to generate offspring

$$\mathbf{v}_i^{g+1} = \mathbf{x}_{r_1}^{g} + F\,(\mathbf{x}_{r_2}^{g} - \mathbf{x}_{r_3}^{g}) \qquad (6)$$

where $F$ is a mutation scaling factor that controls the level of combination between individual solutions, and $r_1$, $r_2$, and $r_3$ are randomly selected indices from the existing population, $r_1 \neq r_2 \neq r_3 \neq i$. Next, one or more parameter values of this mutant vector $\mathbf{v}_i^{g+1}$ are uniformly crossed with those belonging to $\mathbf{x}_i^{g}$ to generate the offspring $\mathbf{u}_i^{g+1}$

$$u_{j,i}^{g+1} = \begin{cases} v_{j,i}^{g+1} & \text{if } U_j(0,1) \leq CR \text{ or } j = j_{\text{rand}} \\ x_{j,i}^{g} & \text{otherwise} \end{cases} \qquad (7)$$

where $j_{\text{rand}}$ is a uniform random integer between 1 and $n$, and $CR$ denotes the crossover constant that controls the fraction of parameters that the mutant vector contributes to the offspring. In all DE runs in this study, the default values of $F$ and $CR$ recommended in [33] are used. Note that the value of $CR$ is similar to the crossover probability $p_c$ used in the GA algorithm.
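DE/rand/1/bin per (6) and (7) fits in a few lines of Python. This is a sketch: vectorization and the handling of bounds (Section IV) are omitted.

```python
import numpy as np

def de_rand_1_bin(P, F, CR, rng):
    """Generate one child per parent: mutate with a scaled difference of two
    random members, eq. (6), then binomial crossover with the parent, eq. (7)."""
    N, n = P.shape
    U = np.empty_like(P)
    for i in range(N):
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i],
                                size=3, replace=False)
        v = P[r1] + F * (P[r2] - P[r3])        # mutant vector, eq. (6)
        j_rand = rng.integers(n)               # guarantees >= 1 mutant gene
        cross = rng.random(n) <= CR
        cross[j_rand] = True
        U[i] = np.where(cross, v, P[i])        # binomial crossover, eq. (7)
    return U
```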

III. MULTIMETHOD SEARCH: PRESERVING POPULATION DIVERSITY

Preserving population diversity is crucial for successful and efficient search of complex multimodal response surfaces. A lack of diversity often results in premature convergence to a suboptimal solution, whereas too much diversity often results in an inability to further refine solutions and more closely approximate the location of the global optimum. In multiobjective optimization, population diversity is easily maintained because the tradeoff between the various objectives naturally induces diversity in the population, enabling the algorithm to locate multiple Pareto optimal solutions. In contrast, maintaining diversity in single-objective optimization is much more difficult because, in the pursuit of a single optimal solution, an algorithm may relinquish occupation of the regions of the search space with lower fitness. This genetic drift, in which the population is inclined to converge toward a single solution, is typical of many evolutionary search algorithms.

To prevent the collapse of evolutionary algorithms into a relatively small basin of highest attraction, numerous approaches have been devised to help preserve population diversity. Significant contributions in this direction are fitness sharing [16], crowding [25], local mating [8], and niching [30]. Many of these methods employ a distance measure on the solution space to ensure that members of the population do not become too similar to one another. In this study, a modified version of the speciation method presented in [28] is developed to improve diversity and search capabilities. The algorithm is presented in Fig. 4 and described below.

At each generation, the algorithm takes as input the combined population $R^g$ of size $2N$, sorted in order of decreasing fitness. Initially, the best member of $R^g$ (first element) is included as the first member of $P^{g+1}$. Then, in an iterative procedure, the next individual in $R^g$ is compared to the species currently present in $P^{g+1}$. If the Euclidean distance of this individual to all points present in $P^{g+1}$ is larger than a user-defined distance, $d$, then this member of $R^g$ will be added to $P^{g+1}$. This process is repeated until all the slots in $P^{g+1}$ are filled, and the resulting population is of size $N$. In this approach, the parameter $d$ controls the level of diversity maintained in the solution.

It is impossible to identify a single generic value of $d$ that works well for a range of different optimization problems. Certain optimization problems require only a small diversity in the population to efficiently explore the space (and thus small values for $d$), whereas other problems require significant variation in the population to prevent premature convergence to local optimal solutions in multimodal fitness landscapes. To improve the species selection mechanism, the value of $d$ is therefore sequentially increased from a minimum value $d_{\min}$ to a maximum value $d_{\max}$ in multiplicative steps of 10; these bounds scale with the upper and lower boundaries of the search space, which are fixed at initialization. For each different value of $d$, an equal number of points is added to $P^{g+1}$. This procedure preserves useful diversity in the population at both small and large distances, and improves the robustness and efficiency of AMALGAM-SO.

Fig. 4. Illustration of the species selection approach used to select $P^{g+1}$ from $R^g$. This procedure prevents the collapse of AMALGAM-SO into a relatively small basin of highest attraction. The selection of $d$ is explained in the text.
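A sketch of the species selection of Fig. 4, assuming the level-wise schedule described above: an equal share of the $N$ slots per distance level, topping up from the sorted pool if a level cannot be filled. The helper name and the top-up rule are our own.

```python
import numpy as np

def species_select(R, fR, N, d_levels):
    """Fill the N survivor slots from the fitness-sorted pool R (size 2N).
    For each distance level d (smallest first), admit members whose
    Euclidean distance to everyone already selected exceeds d."""
    order = np.argsort(fR)                     # best (lowest f) first
    R, fR = R[order], fR[order]
    selected = [0]                             # elitism: best member survives
    per_level = int(np.ceil((N - 1) / len(d_levels)))
    for d in d_levels:
        added, i = 0, 1
        while added < per_level and len(selected) < N and i < len(R):
            if i not in selected:
                if np.linalg.norm(R[selected] - R[i], axis=1).min() > d:
                    selected.append(i)
                    added += 1
            i += 1
    for i in range(1, len(R)):                 # top up if levels left gaps
        if len(selected) == N:
            break
        if i not in selected:
            selected.append(i)
    return R[selected], fR[selected]

rng = np.random.default_rng(2)
R = rng.normal(size=(20, 3))
fR = (R**2).sum(axis=1)
P, fP = species_select(R, fR, N=10, d_levels=[1e-3, 1e-2, 1e-1, 1.0])
```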

IV. MULTIMETHOD SEARCH: STOPPING CRITERIA AND BOUNDARY HANDLING

To implement the restart feature of the AMALGAM-SO method, the stopping criteria listed below are used. The first five stopping criteria apply only to the CMA-ES, and are quite similar to those described and used in [2]. The second list of stopping criteria applies to the GA/PCX/PSO/DE algorithms, and differs from those used for the CMA algorithm, consistent with the difference in search approach.

    CMA-ES:

1) Stop if the range of the best objective function values over the most recent generations is zero, or if the ratio of the range of the current function values to the maximum current function value is below a prescribed tolerance TolFun. In this study, a small fixed value of TolFun is used.

2) Stop if the standard deviation of the normal distribution is smaller than TolX in all coordinates and the evolution path from (2) in [21] is smaller than TolX in all components. In this study, a small fixed value of TolX is used.

3) Stop if adding a 0.1-standard deviation vector in a principal axis direction of $\mathbf{C}$ does not change $\langle \mathbf{x} \rangle_w$.

4) Stop if adding a 0.2-standard deviation in each coordinate does not change $\langle \mathbf{x} \rangle_w$.

5) Stop if the condition number of the covariance matrix $\mathbf{C}$ exceeds $10^{14}$.


TABLE I
MATHEMATICAL TEST FUNCTIONS USED IN THIS STUDY, INCLUDING A SHORT DESCRIPTION OF THEIR PECULIARITIES. A DETAILED DESCRIPTION OF THE INDIVIDUAL FUNCTIONS AND THEIR MATHEMATICAL EXPRESSIONS CAN BE FOUND IN [39]

    GA/PCX/PSO/DE-algorithms:

1) Stop if the standard deviation of the population $P$ is smaller than a prescribed tolerance in all coordinates.

2) Stop if the ratio of the range of the best function values found so far (associated with $P$) to the maximum function value of $P$ is below a prescribed tolerance.

3) Stop if the acceptance rate of new points in $P$ is lower than AcceptRate. In this study, a small fixed value of AcceptRate is used.

4) Stop if the range of the best objective function values over the most recent generations is zero.

The criteria above determine when to restart the algorithm. The overall stopping criteria of AMALGAM-SO are: stop after a maximum number of function evaluations, or stop if the best objective function value found so far is smaller than a predefined tolerance value.

    V. NUMERICAL EXPERIMENTS AND PROCEDURE

The performance of the AMALGAM-SO algorithm is tested using a selected set of standard test functions from the special session on real-parameter optimization of the IEEE Congress on Evolutionary Computation, CEC 2005. These functions span a diverse set of problem features, including multimodality, ruggedness, noise in fitness, ill-conditioning, nonseparability, interdependence (rotation), and high-dimensionality, and are based on classical benchmark functions such as Rosenbrock's, Rastrigin's, Schwefel's, Griewank's, and Ackley's functions. Table I presents a short description of these functions with their specific peculiarities. A detailed description of these functions appears in [39], and so will not be repeated here. In summary, functions 1–5 are unimodal, functions 6–12 are multimodal, and functions 13–25 are hybrid composition functions, implementing a combination of several well-known benchmark functions.

Each test function is solved in 10, 30, and 50 dimensions, using 25 independent trials to obtain statistically meaningful results. To prevent exploitation of the symmetry of the search space, the global optimum is shifted to a value different from zero [15]. In all our experiments, the initial population size of AMALGAM-SO was set to 10, 15, and 20 in dimensions 10, 30, and 50, respectively. The minimum number of children an algorithm needs to contribute, $N_i^{\min}$, was set to 5% of the population size for the GA/PCX/PSO/DE methods, and to 25% of the population size for the CMA-ES. These numbers were found most productive for a diverse set of problems. However, in the first trial with lowest population size, the value of $N_{\text{CMA}}$ is set to 80% of the population size, to improve efficiency of AMALGAM-SO on simple problems that do not require a restart run to find the global minimum within the desired tolerance limit.

For each function, a feasible search space $\mathcal{X}$ is prescribed at initialization. To make sure that the offspring remain within these bounds, two different procedures are implemented. For the GA/PCX/PSO/DE algorithms, points generated outside the feasible region are reflected back from the bound by the amount of the violation [33]. For the CMA algorithm, the boundary handling procedure implemented in the MATLAB code of the CMA-ES algorithm (Version 2.35: http://www.bionik.tu-berlin.de/user/niko/formersoftwareversions.html) is used to penalize infeasible individuals.
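The reflection rule for the GA/PCX/PSO/DE children is one line per bound. A sketch assuming the violation never exceeds the width of the search interval:

```python
import numpy as np

def reflect_into_bounds(x, lower, upper):
    """Fold an out-of-bounds point back by the amount of the violation,
    as done for the GA/PCX/PSO/DE children (cf. [33])."""
    x = np.where(x < lower, lower + (lower - x), x)
    x = np.where(x > upper, upper - (x - upper), x)
    return x

print(reflect_into_bounds(np.array([-105.0, 42.0, 130.0]),
                          np.full(3, -100.0), np.full(3, 100.0)))
# -> [-95.  42.  70.]
```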

To evaluate the numerical performance of AMALGAM-SO, we use two different measures. Both measures are presented in [1]. The first criterion, hereafter referred to as $p_s$, measures the probability of success of AMALGAM-SO. This measure is simply calculated as the ratio between the number of successful runs and the total number of runs (25 in this particular case). The second criterion, entitled SP1, measures the average number of function evaluations needed to solve a particular optimization problem. This is calculated as

$$\text{SP1} = \frac{\hat{E}(\text{FE}_{\text{succ}})}{p_s} \qquad (8)$$

where $\hat{E}(\text{FE}_{\text{succ}})$ signifies the expected number of function evaluations of AMALGAM-SO during a successful run, which is identical to the ratio between the number of function evaluations spent in successful runs and the total number of successful runs. A successful optimization run is defined as one in which the AMALGAM-SO algorithm achieves the fixed accuracy level within the maximum number of function evaluations. Consistent with previous work, the accuracy level is defined per function (column Tol of the tables below).
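For concreteness, SP1 per (8) computed from a set of trials; this is an illustrative helper of our own, not code from [1].

```python
import numpy as np

def sp1(fe_counts, successes):
    """SP1 of (8): mean function evaluations of the successful runs divided
    by the empirical success probability p_s."""
    fe_counts = np.asarray(fe_counts)
    successes = np.asarray(successes, dtype=bool)
    p_s = successes.mean()
    if p_s == 0.0:
        return np.inf                      # no run solved the problem
    return fe_counts[successes].mean() / p_s

# 25 trials: 20 succeed using ~1e4 evaluations each, 5 exhaust the budget
fes = np.r_[np.full(20, 10_000), np.full(5, 100_000)]
ok = np.r_[np.ones(20, bool), np.zeros(5, bool)]
print(sp1(fes, ok))   # 10000 / 0.8 = 12500.0
```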

    VI. RESULTS AND DISCUSSION

In this section, we present the results of our numerical experiments with AMALGAM-SO. In the first two sections, we present an analysis to determine which search algorithms to include in AMALGAM-SO, followed by simulation results in dimensions $n = 10$, 30, and 50 using the CEC-2005 test suite of functions. In this part of the paper, we are especially concerned with a comparison analysis of AMALGAM-SO against current state-of-the-art evolutionary algorithms. The final two sections focus on the empirical efficiency of AMALGAM-SO by providing plots that illustrate scaling behavior for various functions, and highlight the robustness of the method for a test function with noise-induced multimodality.

    A. Which Search Algorithms to Include in AMALGAM-SO?

Table II presents simulation results for benchmark functions 1–15 of CEC-2005 for dimension $n = 10$, using various combinations of the CMA, GA, PCX, PSO, and DE evolutionary algorithms within AMALGAM-SO. The table summarizes the success rate, $p_s$, and the average number of function evaluations needed to reach convergence, SP1. These numbers represent averages over 25 different optimization runs.

The statistics presented in Table II highlight several important observations regarding algorithm selection. In the first place, notice that there is significant variation in performance of the various multimethod search algorithms. Some combinations of search operators within AMALGAM-SO have excellent search capabilities and consistently solve many of the considered test functions, whereas other combinations of algorithms exhibit a relatively poor performance. For instance, the DE-PSO algorithm has an overall success rate of 0, consistently failing to locate good quality solutions in all problems. On the contrary, the CMA-GA-DE algorithm demonstrates robust convergence properties across the entire range of test problems, with the exception of function 8. This benchmark problem is simply too difficult to be solved with current evolutionary search techniques.

Second, among the algorithms included in this study, the CMA evolution strategy seems to be a necessary ingredient in our self-adaptive multimethod optimization algorithm. The performance of AMALGAM-SO with CMA is significantly better in terms of $p_s$ and SP1 than any other combination of algorithms not utilizing this method for population evolution. This finding is consistent with previous work demonstrating the superiority of a restart CMA-ES over other search approaches for the unimodal and multimodal functions considered here [22]. This highlights that the success of AMALGAM-SO essentially relies on the quality of the individual algorithms, thus maintaining scope for continued effort to improve individual search operators.

Third, increasing the number of adaptive strategies for population evolution in AMALGAM-SO does not necessarily lead to better performance. For instance, the performance of the CMA-GA-PCX-DE-PSO variant of AMALGAM-SO is significantly worse on the unimodal (in terms of number of function evaluations) and multimodal test functions (success rate) than other, simpler strategies. It is evident, however, that the CMA-GA-DE and CMA-GA-PSO search strategies generally exhibit the best performance. Both these methods implement three individual algorithms for population evolution, and show nearly similar performance on the unimodal (1–6) and multimodal test functions (7–12). Their results have been highlighted in gray in Table II.

Fourth, it is interesting to observe that search operators similar to those selected here for single objective optimization were selected for population evolution in the multiobjective version of AMALGAM [41]. In that study, a combination of the NSGA-II [12], DE, PSO, and adaptive covariance sampling [17] methods was shown to have all the desirable properties to efficiently evolve a population of individuals through the search space and find a well distributed set of Pareto solutions. The similarity between search operators selected in both studies suggests that it is conceivable to develop a single universal multimethod optimization algorithm that efficiently evolves a population of individuals through single and multiobjective response surfaces.

There are two main reasons that explain the relatively poor performance of AMALGAM-SO in the absence of CMA for population evolution. First, the initial population size of $N = 10$ as used in AMALGAM-SO is too small for the PSO, PCX, DE, and GA search strategies to be productive and appropriately search the space. If $N < n$, then convergence of these methods essentially relies on the choice and efficiency of the mutation operator used to increase diversity of the population, because the points lie in a reduced-dimensional space. Even though the restart strategy implemented in AMALGAM-SO successively increases the population size, it will take quite a number of function evaluations to reach a population size that is considered reasonable for the PSO, PCX, DE, and GA algorithms. For instance, a much larger minimum population size is commonly used with population based global search methods. Starting out with a larger initial population might resolve at least some convergence problems, but at the expense of requiring many more function evaluations to find solutions within the tolerance limit of the global minimum, when compared to the CMA-ES algorithm. The covariance adaptation strategy used in this method is designed in such a way that it still performs well when $N < n$ and the population lies in a reduced space.

TABLE II
PERFORMANCE OF VARIOUS MULTIMETHOD SEARCH ALGORITHMS IN DIMENSION $n = 10$ FOR TEST FUNCTIONS 1–15. THE TWO NUMBERS IN EACH COLUMN REPORT THE SUCCESS RATE ($p_s$) AND AVERAGE NUMBER OF FUNCTION EVALUATIONS NEEDED TO REACH CONVERGENCE (SP1), EXCEPT FOR FUNCTIONS 13 AND 14 WHERE THE SECOND NUMBER (IN PARENTHESES) DENOTES THE MEAN FINAL FUNCTION VALUE. THE LISTED NUMBERS REPRESENT AVERAGES OVER 25 DIFFERENT OPTIMIZATION RUNS. A RUN IS DECLARED SUCCESSFUL IF IT REACHES THE PREDEFINED TOLERANCE LIMIT (IN COLUMN TOL) BEFORE THE MAXIMUM NUMBER OF FUNCTION EVALUATIONS

    Second, many of the test functions considered here are diffi-cult to solve because they contain numerous local optimal solu-tions. The population evolution strategy used in the PSO, PCX,DE, and GA evolutionary algorithms will exhibit difficulty inthe face of many local basins of attraction, because local optimalsolutions tend to persist in the population from one generation tothe next. This not only deteriorates the efficiency of the search,but also frequently results in premature convergence. Variouscontributions to the optimization literature have therefore pro-posed extensions such as fitness sharing, speciation, chaotic mu-tation, and use of simulated annealing for population evolutionto increase the robustness of the PSO, PCX, DE, and GA evolu-tionary algorithms on multimodal problems. The CMA-ES al-gorithm, on the contrary implements a different procedure forpopulation evolution that is less prone to stagnation in a localsolution. This method approximates the orientation and scale ofthe response surface indirectly by continuously morphing andadjusting the covariance matrix of the parameters to be esti-mated. The population in CMA-ES is always sampled from thisinformation, and it is therefore unlikely that exactly similar so-lutions will propagate from one generation to the next. This typeof search strategy is more robust and efficient when confronted

    TABLE IIIDEFAULT VALUES OF ALGORITHMIC PARAMETERS

    USED IN CMA-PSO-GA MULTIMETHOD SEARCH

    with difficult response surfaces that contain many local solu-tions. The CMA method is therefore indispensable for popula-tion evolution in AMALGAM-SO.

    To simplify further analysis in this paper, we carry theCMA-GA-PSO forward, as this version of AMALGAM-SOexhibits the highest success rate on composite function 15.The other multimethod search strategies discussed in Table IIare therefore eliminated from further consideration. Table IIIsummarizes the algorithmic parameters in the CMA-GA-PSOversion of AMALGAM-SO, including their default values as

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • VRUGT et al.: SELF-ADAPTIVE MULTIMETHOD SEARCH FOR GLOBAL OPTIMIZATION IN REAL-PARAMETER SPACES 253

    TABLE IVFUNCTIONS 1–17 IN DIMENSION � � ��: NUMBER OF FUNCTION EVALUATIONS (MIN, 7TH, MEDIAN, 19TH, MAXIMUM, MEAN AND STANDARD DEVIATION)NEEDED TO FIND A FUNCTION VALUE THAT IS WITHIN TOL FROM THE GLOBAL OPTIMUM. SUCCESS RATE �� � AND PERFORMANCE MEASURE SP1 HAVEBEEN DEFINED IN THE TEXT. BECAUSE NONE OF THE OPTIMIZATION RUNS HAVE CONVERGED FOR FUNCTIONS 8, 13, 14, 16, AND 17 (INDICATED WITH ANASTERISK), WE REPORT THE FINAL OBJECTIVE FUNCTION VALUES (IN PARENTHESES) FOR THESE FUNCTIONS. TO FACILITATE COMPARISON AGAINST THE

    STATE-OF-THE-ART, THE PERFORMANCE OF THE RESTART CMA-ES (FROM [2]) IS ALSO INCLUDED

    used in this study. As highlighted earlier, these default valuesare taken from standard implementations of the individualCMA, GA, and PSO algorithms in the literature. No attemptwas made to tune them to further improve the performance ofAMALGAM-SO on the test functions considered here.

    B. Simulation Results in 10, 30, and 50 Dimensions

    This section presents results of AMALGAM-SO for the con-sidered benchmark problems in dimensions 10, 30, and 50. Re-sults are tabulated in Tables IV–VI, and empirical distributionsof normalized success performance are compiled in Figs. 5–7.Statistics are provided only for functions 1–17; the other func-tions (18–25) remain unsolved, and similar performance of theCMA-GA-PSO algorithm is observed as with existing search al-gorithms.Tables IV–VI (dimensions , 30 and 50, respec-tively) present detailed information on the number of functionevaluations needed to reach the tolerance limit (defined in thecolumn labeled Tol), the success rate, and performance criteria

    defined in (8). For functions indicated with an asterisk, thefinal objective function values are reported because none of theoptimizations runs reached convergence within the maximumnumber of function evaluations. To benchmark the performanceof AMALGAM-SO, the last four columns in each table sum-marize the performance of the G-CMA-ES [2]. This method isincluded here, because it receives the best performance on thesevarious test functions, as demonstrated in previous comparisonstudies using CEC-2005 [22]. The following conclusions can bedrawn.

    1) For unimodal problems 1–6 and multimodal problems6 and 7, the performance of AMALGAM-SO and theG-CMA-ES are quite similar, although the G-CMA-ESis a bit more efficient in terms of the number of functionevaluations to reach convergence. Note however, that theG-CMA-ES fails to locate the global minimum for prob-lems 4 and 5 in case of , whereas AMALGAM-SOsuccessfully solves these problems as well.

    2) Function 8 remains unsolved in all dimensions: neither al-gorithm is able to locate the needle in the haystack. Perhaps

    the only strategy that will be successful for this problemis exhaustive uniform sampling of the search space. How-ever, this approach is computationally very demanding andinefficient.

    3) For all the other problems, the AMALGAM-SO method isgenerally superior to G-CMA-ES. This conclusion is truefor all dimensions. For test functions in which both al-gorithms are unable to find solutions within the tolerancelimit, the fitness values obtained with the AMALGAM-SOmethod are significantly smaller than those derived withthe G-CMA-ES. This is most notable for test functions 15and 17 in dimension 30 and 50, respectively.

    The results presented in Tables IV–VI demonstrate that for rel-atively simple problems, single search operators will generallysuffice and be efficient. In those cases, multimethod search is anoverkill, and therefore about 10%–15% less efficient in termsof the number of function evaluations than the G-CMA-ES. Formore complex optimization problems, however, self-adaptivemultimethod search seems to have desirable advantages as it canswitch between different operators to maintain efficiency androbustness of the search when confronted with multimodality,numerous local optima, significant nonlinearity, ruggedness,interdependence, etc. For these problems, a single operator issimply not powerful enough to efficiently and reliably handleall these difficulties.

    So far, we have compared the performance ofAMALGAM-SO against the G-CMA-ES only. Tocompare the performance of the AMALGAM-SO methodagainst a variety of other search techniques, consider Figs. 5and 6, which present empirical distribution functions of thesuccess performance of individual algorithms divided by

    of the best performing algorithm for the unimodal andmultimodal test functions in dimension (A) , and(B) . The different color lines distinguish betweenthe results of the BLX-GL50 [14], BLX-MA [31], CoEVO[32], DE [33], DMS-L-PSO [29], EDA [44], G-CMA-ES [2],K-PCX [36], L-CMA-ES [1], L-SaDE [43], and SPC-PNX [4]algorithms on the same functions. These various algorithms

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • 254 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 2, APRIL 2009

    TABLE VFUNCTIONS 1–17 IN DIMENSION � � ��: NUMBER OF FUNCTION EVALUATIONS (MIN, 7TH, MEDIAN, 19TH, MAXIMUM, MEAN AND STANDARD DEVIATION)

    NEEDED TO FIND A FUNCTION VALUE THAT IS WITHIN TOL FROM THE GLOBAL OPTIMUM. BECAUSE NONE OF THE OPTIMIZATION RUNS HAVE CONVERGED FORFUNCTIONS 8, 13, 14, 16, AND 17 (INDICATED WITH AN ASTERISK), WE REPORT THE FINAL OBJECTIVE FUNCTION VALUES (IN PARENTHESES) FOR THESE

    FUNCTIONS. PERFORMANCE OF THE RESTART CMA-ES (FROM [2]) IS ALSO INCLUDED

    TABLE VIFUNCTIONS 1–17 IN DIMENSION � � ��: NUMBER OF FUNCTION EVALUATIONS (MIN, 7TH, MEDIAN, 19TH, MAXIMUM, MEAN AND STANDARD DEVIATION)

    NEEDED TO FIND A FUNCTION VALUE THAT IS WITHIN TOL FROM THE GLOBAL OPTIMUM. BECAUSE NONE OF THE OPTIMIZATION RUNS HAVE CONVERGED FORFUNCTIONS 8, 13, 14, 16, AND 17 (INDICATED WITH AN ASTERISK), WE REPORT THE FINAL OBJECTIVE FUNCTION VALUES (IN PARENTHESES) FOR THESE

    FUNCTIONS. THE PERFORMANCE OF THE RESTART CMA-ES (FROM [2]) IS ALSO INCLUDED

    represent the current state-of-the-art, and are based oncommonly used search methods, such as the CMA, PSO,DE, and GA algorithms. Moreover, they represent a broadarray of optimization techniques, including single searchoperator (CoEVO, DE, EDA, G-CMA-ES, L-CMA-ES,K-PCX, SPC-PNX), and multimethod memetic (BLX-GL50,BLX-MA, DMS-L-PSO), and self-adaptive (L-SaDE)evolutionary algorithms. The performance of these algorithmson the CEC-2005 test bed of functions considered here hasbeen extracted from tables presented in [22]. The steeperthe cumulative distribution function (CDF), the better theperformance of an algorithm across the range of test functionsexamined. In other words, small values for the ratio ofto and large associated cumulative frequency valuesare preferred. Similar plots have been used in [22] to compareperformance of individual algorithms.

    The results presented in Figs. 5 and 6 demonstrate thatAMALGAM-SO receives consistently good performance overa large range of problems and dimensions. As demonstratedearlier, the G-CMA-ES restart method is slightly more efficientthan AMALGAM-SO for the unimodal problems of CEC-2005in dimension 10 and 30. The CDF of G-CMA-ES reachesits maximum of one at a smaller ratio of thanAMALGAM-SO, demonstrating a higher efficiency in solvingthese unimodal problems. This difference in efficiency ishowever, generally small and approximates about 10%–15% interms of function evaluations as determined previously fromthe numbers listed in Tables IV and V. On the contrary, for themultimodal test functions presented in Fig. 6(a) and (b), theempirical CDF of AMALGAM-SO is significantly sharper thanthe CDF of any of the other algorithms. Not only sharper, butAMALGAM-SO also successfully solves all problems consid-

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • VRUGT et al.: SELF-ADAPTIVE MULTIMETHOD SEARCH FOR GLOBAL OPTIMIZATION IN REAL-PARAMETER SPACES 255

    Fig. 5. Unimodal functions: Empirical distribution functions of ��� divided by ��� of the best performing algorithm for test functions 1–6 in dimensions(a) � � �� and (b) � � ��. Equation (8) describes how to derive ��� from the numbers listed in Tables IV–VI. Small values of ������� and large valuesfor the associated CDF are preferred.

    Fig. 6. Multimodal functions: Empirical distribution functions of ��� divided by ��� of the best performing algorithm for test functions 7–12 in dimensions(a) � � �� and (b) � � ��. Small values of ������� and large values for the associated CDF are preferred.

    Fig. 7. Empirical distribution functions of G-CMA-ES and AMALGAM-SOon test functions 1–15 in dimension � � ��. Values of �� and ��� arederived from Table VI.

    ered in these graphs, resulting in a cumulative value of 1. Theseresults illustrate superior performance of AMALGAM-SO overexisting algorithms in case of complex multimodal responsesurfaces. This is further evidenced in Table VI that comparesthe performance of the G-CMA-ES and AMALGAM-SO for

    in a way similar as done in the other Tables forand 30. Notice that AMALGAM-SO can successfully solveunimodal problems 4 and 5, and multimodal function 15. Thesefunctions have not been solved previously in this dimension.

    The advantages of AMALGAM-SO are further illustrated inFig. 7, that presents a graphical interpretation of the results pre-sented in Table VI by plotting the cumulative distribution func-tions of the G-CMA-ES and AMALGAM-SO methods for func-tions 1–15 for . This figure only compares AMALGAMto the G-CMA-ES, as the best results in this dimension havebeen obtained with this latter algorithm. The CDF derived forAMALGAM-SO is significantly steeper than its counterpart forthe restart G-CMA-ES, indicating both an improved efficiencyand robustness of AMALGAM-SO over previous methods. Thisis a significant result, and further highlights the advantages ofmultimethod search for solving complex multimodal and high-dimensional optimization problems.

    To explore why the AMALGAM-SO method achieves a con-sistently good performance, Fig. 8 presents diagnostic informa-tion on the behavior of the multimethod approach of geneticallyadaptive (or self-adaptive) offspring creation for test functions[Fig. 8(a)] 1 (unimodal), [Fig. 8(b)] 9 (multimodal), [Fig. 8(c)]13 (composite), and [Fig. 8(d)] 15 (composite) in dimension

    . The boxplots were derived by computing the averagefraction of children, each al-gorithm is allowed to contribute during offspring creation inAMALGAM-SO for each optimization run, using informationfrom all restarts runs, after which the boxplot is derived fromthe outcome of the 25 individual optimization trials. Hence, foreach separate optimization trial the fractions sum up to 1.

    For unimodal function 1, the boxplots fall on a single pointwithout any variability between the 25 different optimization

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • 256 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 2, APRIL 2009

    Fig. 8. Virtue of genetically or self-adaptive offspring creation within a multimethod search concept. Average fraction of children each individual search operatoris allowed to contribute to population evolution in AMALGAM-SO in dimension � � �� for test functions (a) 1, (b) 9, (c) 13, and (d) 15. The presented valuesdenote averages of all restart runs (if needed) and 25 independent optimization trials.

    trials. For this function, a single run is required to find solutionsthat are within the specified tolerance limit of the global op-timum. The mean of the boxplots simply represents the numberof children each algorithm is allowed to create in the first run,prior to any information about the response surface is collected.

    For multimodal function 9, the boxplots clearly illustrate theimportance of the GA algorithm for population evolution. Forthis particular problem, the CMA evolutionary search strategyis not very efficient, and needs to be augmented with othersearch operators to find the global minimum within the prede-fined number of function evaluations. This result illuminatesAMALGAM-SO’s superior performance over the G-CMA-ES,achieving a 100% success rate on this optimization problem.Note also that there is relative small dispersion around themean of the boxplots, reflecting that the 25 different trialsdepict similar information about the search space. Thus, theinferred peculiarities of the problem are very similar, irrespec-tive of the properties of the initial population, its size, and theoptimization trial.

    For composite function 13, a large variation appears in de-rived child fractions between the different optimization runs:small variations in initial population between restart runs andoptimization trials causes significant variations in the fractionof children each of the individual algorithms is allowed to con-tribute to the offspring population. This finding is characteristicfor complex fitness landscapes whose properties vary consider-ably within the considered search domain. In the presence of somany distinct difficulties, it is unlikely that a single genetic op-erator will be able to conduct an efficient search. Efficient searchresults for this optimization problem are obtained by the simul-taneous use of multiple search methods, with a selection methodto enable adaptation to the specific peculiarities at hand.

    Finally, for composite test function 15, both the CMA-ES andGA receive significant preference for population evolution. Thisexplains the relatively high success rate of a multimethod searchalgorithm for this particular problem , and provides

    an explanation why the restart G-CMA-ES alone has been un-able to locate solutions close to the global optimum. In this case,AMALGAM-SO uses the GA algorithm to “boost” the perfor-mance of CMA.

    Note that for all problems considered in Fig. 8, the PSO algo-rithm seems to receives very little emphasis. This might indicatethat the method is not very important for population evolution,and thus could be removed from AMALGAM-SO. However,a detailed comparison of the performance of CMA-GA withCMA-GA-PSO using the listed statistics presented previouslyin Table I, reveals that the PSO strategy is indispensable forpopulation evolution for composite function 15. Thus, it is notonly the weight of individual algorithms, but also the synergybetween various search procedures that improves the efficiencyand robustness of evolutionary optimization.

    C. Scaling Analysis

    The previous section considered only three different dimen-sions ( , 30, and 50) for the various benchmark functions.To better understand the scaling behavior of AMALGAM-SO,we follow the investigations of [20] and derive the relationshipbetween dimensionality of the test problem, and the averagenumber of function evaluations needed to find solutions withthe tolerance limit. Fig. 9 presents the results of this analysisfor test problems 1, 6, 9, and 11. These graphs were generatedusing 100 different optimization trials for each individual func-tion and allow a rough determination of the measured efficiencyof AMALGAM-SO. The presented results are representativefor the other functions as well, and therefore we only presenta subset that covers the range of difficulties involved. These re-sults were obtained using , i.e., the size of the initial pop-ulation used in AMALGAM-SO is similar to the dimensionalityof the problem. Note that the depicted lines illustrate an approx-imate linear scaling of AMALGAM-SO between and

    for each of the individual test functions. This is an

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • VRUGT et al.: SELF-ADAPTIVE MULTIMETHOD SEARCH FOR GLOBAL OPTIMIZATION IN REAL-PARAMETER SPACES 257

    Fig. 9. Average number of function evaluations versus the problem dimensionfor test functions 1, 6, 9, and 11 of CEC-2005. Results correspond to an initialpopulation size of� � � using 100 different optimization trials. Dots representthe observations that are connected with the line � � � to facilitate graphicalinterpretation and understand the complexity order. The optimized values for �and � are given in the legend. The error bars represent the standard deviationsof each individual observation.

    encouraging result, demonstrating that the algorithm is efficientand scales well with increasing dimension. In addition, the pre-sented lines appear quite smooth, suggesting that our multi-method search strategy is well designed.

    D. Test Function With Noise Induced Multimodality

    The test functions considered so far have been restricted todeterministic problems whose objective function returns thesame value each time the same is evaluated. Functions 4 and17 explicitly include a noise term, but this noise vanishes whenapproaching the global minimum. Many real-world problems,however exhibit some form of noise in the estimation of theobjective function that is persistent and can significantly alterthe shape of the response surface. That is, given a fixed param-eter vector , the resulting objective function, becomesa random variate. Such problems can confuse optimizationalgorithms considerably. Some functions can even changetheir topology from unimodal to multimodal depending on thenoise strength, . In the optimization literature, these functionshave been called functions with noise induced multimodality(FNIM). In this final section, we test the robustness and effi-ciency of AMALGAM-SO on this class of problems.

    We consider test function that is described in detail in [35]

    (9)

    where

    (10)

    Fig. 10. Sampled �� � � � parameter space with AMALGAM-SO for testfunction � using � � �, � � �, � � ��, � � �, � � , and � .The black circle depicts the true optimum, whereas the gray circle illustrates theaverage radial distance estimated with CMA-ES.

    and

    (11)

    with values of , , , , ,and to result in significant multimodality. The globaloptimizer is found at

    (12)

    where are located on a hypersphere of radiusand origin at in the

    subspace. Results presented in [7] demonstrate that this func-tion cannot be adequately solved with CMA-ES. This studytherefore provides an excellent test case for AMALGAM-SO.Fig. 10 shows the sampled parameter space withAMALGAM-SO using 10 different optimization trials with atotal of 20 000 function evaluations each. The graph depictsthe results for the final 10% of the states sampled in eachindividual run. The black line denotes the global minimum inthe space, whereas the grey line represents the bestapproximation found with the CMA-ES, as reproduced from[7].

    The presence of noise in avoids AMALGAM-SO toconverge to a single best solution, but instead to find multipledifferent solutions. These solutions center closely around theglobal minimum, and occupy the entire circle quite evenly.The average radial distance of the AMALGAM-SO sampledsolutions to the origin is about 3.29, which is in very closeagreement with the optimum value radius of andmuch better than the average distance of approximately 4.3 de-rived with the CMA-ES. Our numerical results demonstrate thatthe GA generates about 50% of the offspring during the searchfor this particular function, explaining why AMALGAM-SOfinds solutions that more closely approximate the true op-timum. These findings provide further support for the claimthat self-adaptive multimethod search has important advantagesover search methods that use a single operator for populationevolution.

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • 258 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 2, APRIL 2009

    VII. CONCLUSION

    In the last few decades, the field of evolutionary computationhas produced a large number of algorithms to solve complex,real world search and optimization problems. Most of these al-gorithms implement a single universal operator for populationevolution. In this paper, we have shown that improvements tothe efficiency of evolutionary search can be made by adoptinga concept of self-adaptive, multimethod optimization. A hy-brid multimethod optimization framework has been presentedthat can handle a wide variety of response surfaces efficiently.This approach is called A MultiAlgorithm Genetically AdaptiveMethod for Single Objective Optimization (AMALGAM-SO),to evoke the image of a search algorithm that blends the at-tributes of the best available individual optimization methods.A comparative study of different combinations of commonlyused evolutionary algorithms has demonstrated that the best per-formance of AMALGAM-SO is obtained when the covariancematrix adaptation (CMA) evolution strategy, genetic algorithm(GA), and particle swarm optimizer (PSO) are simultaneouslyused for population evolution. Benchmark results in 10, 30, and50 dimensions using synthetic functions from the special ses-sion on real-parameter optimization of CEC 2005 show thatAMALGAM-SO obtains similar efficiencies as existing algo-rithms on relatively simple problems, but is increasingly supe-rior for more complex higher dimensional multimodal optimiza-tion problems. In addition, AMALGAM-SO scales well withincreasing number of dimensions, and converges in the closeproximity of the global minimum for functions with noise in-duced multimodality

    The results presented herein demonstrate that self-adap-tive multimethod optimization uncovers important new waysto further improve the robustness and efficiency of evolu-tionary search. Of course, the NFL theorem will still holdfor AMALGAM-SO (as was demonstrated on the simplerproblems), but initial results in this paper are very encouraging,demonstrating a significant speed up in efficiency for morecomplex (multimodal) higher dimensional optimization prob-lems. The success of a multimethod search concept, howeveressentially relies on the quality and efficiency of individualalgorithms. Thus, there is sufficient scope to further improveindividual search operators.

    In this study, we have not made any attempt to tune the al-gorithmic parameters in the AMALGAM-SO search algorithm.Instead, the various evolutionary methods were implementedusing default values for their algorithmic parameters. It seemslikely that further efficiency improvements of AMALGAM-SOcan be made if we optimize the values of the algorithmic param-eters in AMALGAM-SO using the outcome of the test suite offunctions considered in this study. Moreover, theoretical anal-ysis is needed to increase understanding why some combina-tions of search algorithms exhibit good convergence properties,and others show poor compatibility. We hope that the analysispresented in this paper stimulates theoreticians to develop newtheories for self-adaptive multimethod search algorithms. Thesource code of AMALGAM-SO is written in MATLAB and canbe obtained from the first author upon request.

    ACKNOWLEDGMENT

    The authors highly appreciate the computer support providedby the SARA Center For Parallel Computing at the University

    of Amsterdam, The Netherlands. The constructive comments ofthree reviewers have greatly improved the current manuscript.

    REFERENCES

    [1] A. Auger and N. Hansen, “Performance evaluation of an advanced localsearch evolutionary algorithm,” in Proc. IEEE Congr. Evol. Comput.,Edinburgh, U.K., 2005, vol. 1, pp. 1777–1784.

    [2] A. Auger and N. Hansen, “A restart CMA evolution strategy with in-creasing population size,” in Proc. IEEE Congr. Evol. Comput., Edin-burgh, U.K., 2005, pp. 1769–1776.

    [3] T. Bäck, “Self-adaptation,” in Handbook of Evolutionary Computation,T. Bäck, T. Fogel, and Z. Michalewicz, Eds. New York: Oxford Univ.Press, 1997, pp. 1–15.

    [4] P. J. Ballester, J. Stephenson, J. N. Carter, and K. Gallagher, “Real-pa-rameter optimization performance study on the CEC-2005 benchmarkwith SPC-PNX,” in Proc. 2005 IEEE Congr. Evol. Comput., Edin-burgh, U.K., 2005, vol. 1, pp. 498–505.

    [5] A. S. S. M. U. Barkat, R. Sarker, D. Comfort, and C. Lokan, “An agent-based memetic algorithm (AMA) for solving constrained optimizationproblems,” in Proc. IEEE Congr. Evol. Comput., Singapore, 2007, pp.999–1006.

    [6] H. G. Beyer and K. Deb, “On self-adaptive features in real-parameterevolutionary algorithms,” IEEE Trans. Evol. Comput., vol. 5, no. 3, pp.250–270, Jun. 2001.

    [7] H. G. Beyer and B. Sendhoff, “Functions with noise-inducedmulti-modality: A test for evolutionary robust optimization—Proper-ties and performance analysis,” IEEE Trans. Evol. Comput., vol. 10,pp. 507–526, Oct. 2006.

    [8] R. J. Collins and D. R. Jefferson, “Selection in massively parallelgenetic algorithms,” in Proc. 4th Int. Conf. Genetic Algorithms, SanMateo, CA, 1991.

    [9] K. Deb and R. B. Agrawal, “Simulated binary crossover for continuoussearch space,” Complex Syst., vol. 9, pp. 115–148, 1995.

    [10] K. Deb, Multi-Objective Optimization Using Evolutionary Algo-rithms. Chicester, U.K.: Wiley, 2001.

    [11] K. Deb, A. Anand, and D. Joshi, “A computationally efficient evolu-tionary algorithm for real-parameter optimization,” Evol. Comput., vol.10, pp. 371–395, 2002.

    [12] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitistmultiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol.Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

    [13] R. Eberhart and J. Kennedy, “A new optimizer using particle swarmtheory,” in Proc. 6th Int. Symp. Micro Mach. Human Sci., Nagoya,Japan, 1995, pp. 39–43.

    [14] C. Garcia-Martinez and M. Lozano, “Hybrid real-coded genetic al-gorithms with female and male differentiation,” in Proc. 2005 IEEECongr. Evol. Comput., Edinburgh, U.K., 2005, vol. 1, pp. 896–903.

    [15] D. B. Fogel and H. G. Beyer, “A note on the empirical evaluation ofintermediate recombination,” Evol. Comput., vol. 4, no. 3, pp. 491–495,1995.

    [16] D. E. Goldberg and J. Richardson, “Genetic algorithms with sharingfor multi-modal function optimization,” in Proc. 2nd Int. Conf. GeneticAlgorithms Applicat., Hillsdale, NJ, 1987, pp. 41–49.

    [17] H. Haario, E. Saksman, and J. Tamminen, “An adaptive metropolis al-gorithm,” Bernoulli, vol. 7, pp. 223–242, 2001.

    [18] N. Hansen and A. Ostermeier, “Adapting arbitrary normal mutationdistributions in evolution strategies: The covariance matrix adapta-tion,” in Proc. IEEE Int. Conf. Evol. Comput., Nagoya, Japan, 1996,pp. 312–317.

    [19] N. Hansen and A. Ostermeier, “Completely derandomized self-adapta-tion in evolution strategies,” Evol. Comput., vol. 9, no. 2, pp. 159–195,Jun 2001.

    [20] N. Hansen, S. D. Müller, and P. Koumoutsakos, “Reducing the timecomplexity of the derandomized evolution strategy with covariancematrix adaptation (CMA-ES),” Evol. Comput., vol. 11, no. 1, pp. 1–18,Mar. 1, 2003.

    [21] N. Hansen and S. Kern, “Evaluating the CMA evolution strategy onmultimodal test functions,” in Lecture Notes in Computer Science, X.Yao, Ed. et al. Berlin, Germany: Springer-Verlag, 2004, vol. 3242,Parallel Problem Solving from Nature—PPSN VIII, pp. 282–291.

    [22] N. Hansen, Compilation Results 2005 CEC Benchmark Function Set2006, pp. 1–14 [Online]. Available: http://www.ntu.edu.sg/home/epn-sugan/index_files/CEC-05/compareresults.pdf

    [23] W. E. Hart, N. Krasnogor, and J. E. Smith, Recent Advances in MemeticAlgorithms. Berlin, Germany: Springer-Verlag, 2005.

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.

  • VRUGT et al.: SELF-ADAPTIVE MULTIMETHOD SEARCH FOR GLOBAL OPTIMIZATION IN REAL-PARAMETER SPACES 259

    [24] F. Herrera, M. Lozano, and J. L. Verdegay, “Tackling real-coded ge-netic algorithms: Operators and tools for behavioral analysis,” Artif.Intell. Rev., vol. 12, no. 4, pp. 265–319, Nov. 1998.

    [25] K. A. D. Jong, “Analysis of the behavior of a class of genetic adaptivesystems,” Dissertation Abstr. Int., vol. 36, no. 10, pp. 5140B–5140B,1975.

    [26] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proc.IEEE Int. Conf. Neural Networks, 1995, pp. 1942–1948.

    [27] H. Kita, I. Ono, and S. Kobayashi, “Multi-parental extension of theunimodal normal distribution crossover for real-coded genetic algo-rithms,” in Proc. 1999 Congr. Evol. Comput., Piscataway, NJ, 1999,pp. 1581–1587.

    [28] X. Li, “Efficient differential evolution using speciation for multimodalfunction optimization,” in Proc. 2005 Conf. Genetic Evol. Comput.,Washington, DC, 2005, pp. 873–880.

    [29] J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particleswarm optimizer with local search,” in Proc. 2005 IEEE Congr. Evol.Comput., Edinburgh, U.K., 2005, vol. 1, pp. 522–528.

    [30] S. W. Mahfoud, “Niching methods for genetic algorithms,” Ph.D. dis-sertation, Univ. Illinois at Urbana-Champaign, Urbana, IL, 1995.

    [31] D. Molina, F. Herrera, and M. Lozano, “Adaptive local search param-eters for real-coded memetic algorithms,” in Proc. 2005 IEEE Congr.Evol. Comput., Edinburgh, U.K., 2005, vol. 1, pp. 888–895.

    [32] P. Posik, “Real parameter optimization using mutation step co-evolu-tion,” in Proc. IEEE Congr. Evol. Comput., Edinburgh, U.K., 2005, vol.1, pp. 872–879.

    [33] J. Rönkkönen, S. Kukkonen, and K. V. Price, “Real-parameter opti-mization with differential evolution,” in Proc. 2005 IEEE Congr. Evol.Comput., Edinburgh, U.K., 2005, vol. 1, pp. 506–513.

    [34] H. P. Schwefel, “Collective intelligence in evolving systems,” in Eco-dynamics—Contributions to Theoretical Ecology, W. Wolff, C. Soeder,and F. Drepper, Eds. Berlin, Germany: Springer-Verlag, 1988, pp.95–100.

    [35] B. Sendhoff, H. G. Beyer, and M. Olhofer, “On noise induced multi-modality in evolutionary algorithms,” in Proc. 4th Asia-Pacific Conf.Simulated Evolution Learning—SEAL, L. Wang, K. Tan, T. Furuhashi,J. Kim, and F. Sattar, Eds., 2002, vol. 1, pp. 219–224.

    [36] A. Sinha, S. Tiwari, and K. Deb, “A population-based, steady-state pro-cedure for real-parameter optimization,” in Proc. 2005 IEEE Congr.Evol. Comput., Edinburgh, U.K., 2005, vol. 1, pp. 514–521.

    [37] R. Storn and K. Price, “Differential evolution—A fast and efficientheuristic for global optimization over continuous spaces,” J. GlobalOptim., vol. 11, pp. 341–359, 1997.

    [38] R. Storn and K. Price, Differential Evolution—A Simple and EfficientAdaptive Scheme For Global Optimization Over Continuous Spaces.Berkeley, CA: Int. Comput. Sci. Inst., 1995.

    [39] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. P. Chen, A. Auger,and S. Tiwari, Problem Definitions and Evaluation Criteria For theCEC-2005 Special Session on Real-Parameter Optimization. Singa-pore: Nanyang Technol. Univ., 2005, pp. 1–50.

    [40] J. Vesterström and R. Thomsen, “A comparative study of differentialevolution, particle swarm optimization, and evolutionary algorithms onnumerical benchmark problems,” in Proc. 2004 Congr. Evol. Comput.,Portland, OR, 2004, vol. 2, pp. 1980–1987.

    [41] J. A. Vrugt and B. A. Robinson, “Improved evolutionary optimizationfrom genetically adaptive multimethod search,” Proc. Nat. Acad. Sci.USA, vol. 104, pp. 708–711, 2007.

    [42] D. H. Wolpert and W. G. Macready, “No free lunch theorems for op-timization,” IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr.1997.

    [43] A. K. Qin and P. N. Suganthan, “Self-adaptive differential evolution al-gorithm for numerical optimization,” in Proc. 2005 IEEE Congr. Evol.Comput., Edinburgh, U.K., 2005, vol. 2, pp. 1785–1791.

    [44] B. Yuan and M. Gallagher, “Experimental results for the special ses-sion on real-parameter optimization at CEC 2005: A simple, contin-uous EDA,” in Proc. 2005 IEEE Congr. Evol. Comput., Edinburgh,U.K., 2005, vol. 2, pp. 1792–1799.

    Jasper A. Vrugt received the M.S. degree in 1999and the Ph.D. degree in 2004 (both cum laude)from the University of Amsterdam, Amsterdam, TheNetherlands.

    He joined the Los Alamos National Laboratory(LANL), Los Alamos, NM, in March 2005 as aDirector’s Postdoctoral Fellow. Currently, he isa J. Robert Oppenheimer Fellow, holding a jointappointment between the Earth and EnvironmentalSciences Division (EES-6) and the TheoreticalDivision (T-7). His research focuses on advanced

    optimization, data assimilation and uncertainty estimation methods applied tocomputationally intensive, high-dimensional models, with applications in awide variety of fields, including weather forecasting, hydrology, transport inporous media, and ecohydrology.

    Bruce A. Robinson is a Program Manager at the LosAlamos National Laboratory, Los Alamos, NM, andformer staff member and Deputy Group Leader ofthe Hydrology, Geochemistry, and Geology Group.His research interests include numerical modelingof complex flow and transport systems, transportof heat, fluids, and chemicals in porous media, andapplication and development of optimization, dataassimilation, and uncertainty estimation methods inenvironmental modeling.

    James M. Hyman is Group Leader of the Mathe-matical Modeling and Analysis Group (T-7) in theTheoretical Division, Los Alamos National Labo-ratory, Los Alamos, NM, and former President ofthe Society for Industrial and Applied Mathematics(SIAM). His research interests include buildinga solid mathematical foundation for differenceapproximations to partial differential equations andusing mathematical models to better understandand predict the spread of epidemics. Most of hispublications are in mathematical modeling and he

    has an active passion for writing quality software for numerical differentiation,interface tracking, adaptive grid generation, and the iterative solutions ofnonlinear systems.

    Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on February 14,2010 at 13:01:17 EST from IEEE Xplore. Restrictions apply.