Evolutionary Constrained Optimization


This book discusses constraint-handling techniques in great detail.


  • Infosys Science Foundation Series in Applied Sciences and Engineering

    Evolutionary Constrained Optimization

Rituparna Datta
Kalyanmoy Deb
Editors

  • Infosys Science Foundation Series

    Applied Sciences and Engineering

  • More information about this series at http://www.springer.com/series/13554

  • Rituparna Datta
    Kalyanmoy Deb
    Editors

Evolutionary Constrained Optimization


  • Editors

    Rituparna Datta
    Department of Electrical Engineering
    Korea Advanced Institute of Science and Technology
    Daejeon, Republic of Korea

    Kalyanmoy Deb
    Electrical and Computer Engineering
    Michigan State University
    East Lansing, MI, USA

ISSN 2363-4995    ISSN 2363-5002 (electronic)
Applied Sciences and Engineering
ISBN 978-81-322-2183-8    ISBN 978-81-322-2184-5 (eBook)
DOI 10.1007/978-81-322-2184-5

    Library of Congress Control Number: 2014957133

Springer New Delhi Heidelberg New York Dordrecht London

© Springer India 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

    Printed on acid-free paper

    Springer (India) Pvt. Ltd. is part of Springer Science+Business Media (www.springer.com)

ISSN 2363-6149    ISSN 2363-6157 (electronic)
Infosys Science Foundation Series

  • To my parents, Ranjit Kumar Datta (father) and Khela Datta (mother).

    Rituparna Datta

To Sadhan Chandra Deb (Baro Mama, Eldest Uncle), whose inspiration has always shown me the way.

    Kalyanmoy Deb

  • Preface

Optimization is an integral part of research in most scientific and engineering problems.

The critical challenge in optimization lies in iteratively finding the best combination of variables that minimizes or maximizes one or more objective functions while satisfying the variable requirements and restrictions, largely known as constraints. Most optimization problems involve one or many constraints due to the limited availability of resources, physical viability, or other functional requirements. The existence of constraints in problems in science and engineering continuously motivates researchers to develop newer and more efficient methods of constraint handling in optimization.

Evolutionary optimization algorithms are population-based metaheuristic techniques for dealing with optimization problems. These algorithms have been successfully applied to a wide range of optimization problems due to their ability to deal with nonlinear, nonconvex, and discontinuous objective and constraint functions. Originally, evolutionary algorithms (EAs) were developed to solve unconstrained problems. However, as demands for solving practical problems arose, evolutionary algorithm researchers have been regularly devising new and efficient constraint handling techniques. Some of these techniques are borrowed from the classical literature, while others use different strategies such as preference of feasible solutions over infeasible ones, choice of less constraint-violated solutions, separation of objective and constraint functions, special operators, and hybrid classical-evolutionary methods, to name a few.

In most top evolutionary computation conferences, a good number of papers are regularly published discussing various ways of handling constraints using different EAs. Almost all books and journals on evolutionary computation contain one or more topics on constrained optimization. In 2009, the Springer Studies in Computational Intelligence series published a full monograph on EA-based constrained optimization (Constraint-Handling in Evolutionary Optimization by Mezura-Montes; ISBN: 978-3-642-00618-0). This book takes the same direction as that monograph and presents a more updated view of the subject matter. Moreover, this book aims to serve as a self-contained collection of the current research addressing general constrained optimization. The book can also serve as a textbook for advanced courses and as a guide to the future direction of research in the area. Many constraint handling techniques that exist in bits and pieces are assembled together in the present monograph. Hybrid optimization, which is gaining a lot of popularity today due to its capability of bridging the gap between evolutionary and classical optimization, is broadly covered here. These areas will be helpful for researchers, novices, and experts alike.

The book consists of ten chapters covering diverse topics of constrained optimization using EAs.

Helio J.C. Barbosa, Afonso C.C. Lemonge, and Heder S. Bernardino review adaptive penalty techniques in the first chapter, which mainly deals with constraint handling using EAs. The penalty function approach is one of the most popular constraint handling methodologies due to its simple working principle and its ease of integration with any unconstrained technique. The study also indicates the need for implementing different adaptive penalty methods in a single search engine, which would give the decision maker better information for choosing a particular technique.

A theoretical understanding of constrained optimization is one of the keys to selecting the best constraint handling mechanism for any problem.

To tackle this issue, Shayan Poursoltan and Frank Neumann study the influence of the fitness landscape in Chap. 2. The study introduces different methods to quantify the ruggedness of a given constrained optimization problem.

Rommel G. Regis proposes a constraint handling method for solving computationally expensive constrained black-box optimization using surrogate-assisted evolutionary programming (EP) in Chap. 3. The proposed algorithm creates surrogate models for the black-box objective function and inequality constraint functions in every generation of the EP. Furthermore, at the end of each generation, a trust-region-like approach is used to refine the best solution. Hard and soft constraints are common in constrained optimization problems.

In Chap. 4, Richard Allmendinger and Joshua Knowles point out a new type of constraint known as ephemeral resource constraints (ERCs). The authors explain the presence of ERCs in real-world optimization problems.

A combination of a multi-membered evolution strategy and an incremental approximation strategy-assisted constraint handling method is proposed by Sanghoun Oh and Yaochu Jin in Chap. 5 to deal with highly constrained, tiny, and separated feasible regions in the search space. The proposed approach generates an approximate model for each constraint function with increasing order of accuracy: it starts with a linear model and progressively approaches the complexity of the original constraint function.

Chapter 6, by Tetsuyuki Takahama and Setsuko Sakai, describes a method combining the ε constrained method and estimated comparison. In this method, rough approximation is utilized to approximate both the objective function and the constraint violation. The methodology is integrated with differential evolution (DE) for its simple working principle and robustness.


  • Jeremy Porter and Dirk V. Arnold carry out a detailed analysis of the behavior of a multi-recombinative evolution strategy that employs both cumulative step size adaptation and a simple constraint handling technique in Chap. 7. In order to study convergence to the optimal solution at the cone's apex, a linear optimization problem is considered for analysis, with a feasible region defined by a right circular cone that is symmetric about the gradient direction.

A niching technique is explored in conjunction with multimodal optimization by Mohammad Reza Bonyadi and Zbigniew Michalewicz in Chap. 8 to locate feasible regions, instead of searching for different local optima. Since in continuous constrained optimization the feasible search space is likely to appear as many disjoint regions, the globally optimal solution might be located within any one of them. Particle swarm optimization is used as the search engine.

In Chap. 9, Rammohan Mallipeddi, Swagatam Das, and Ponnuthurai Nagaratnam Suganthan present an ensemble of constraint handling techniques (ECHT). Since no universal constraint handling method exists, an ensemble method can be a suitable alternative. ECHT is combined with an improved differential evolution (DE) algorithm, and the proposed technique is known as EPSDE.

Rituparna Datta and Kalyanmoy Deb propose an adaptive penalty function method using genetic algorithms (GAs) in the concluding chapter (Chap. 10) of this book. The proposed method amalgamates a bi-objective evolutionary approach with the penalty function methodology in order to overcome their individual weaknesses. The bi-objective approach is responsible for approximating the appropriate penalty parameter and the starting solution for the unconstrained penalized function, which is then solved by a classical method responsible for exact convergence.

We would like to thank the team at Springer. In particular, we acknowledge the contributions of our Editor, Swati Meherishi, and the editorial assistants, Kamya Khatter and Aparajita Singh, who helped bring this manuscript to fruition. Rituparna Datta would like to thank his wife Anima and daughter Riddhi for their love and affection.

Daejeon, Korea, September 2014        Rituparna Datta
East Lansing, MI, USA        Kalyanmoy Deb


  • Acknowledgments to Reviewers

With deep gratitude we convey our heartfelt greetings and congratulations to the following colleagues and key researchers who spared no pains in reviewing this book to make it a signal success.

Richard Allmendinger, University College London, UK
Dirk Arnold, Dalhousie University, Canada
Helio J.C. Barbosa, Universidade Federal de Juiz de Fora, Brazil
Heder S. Bernardino, Laboratório Nacional de Computação Científica, Brazil
Hans-Georg Beyer, FH Vorarlberg, University of Applied Sciences, Austria
Fernanda Costa, University of Minho, Portugal
Dilip Datta, Tezpur University, India
Oliver Kramer, University of Oldenburg, Germany
Afonso Celso de Castro Lemonge, Federal University of Juiz de Fora, Brazil
Xiaodong Li, RMIT University, Australia
Rammohan Mallipeddi, Kyungpook National University, South Korea
Tomasz Oliwa, Toyota Technological Institute at Chicago, USA
Khaled Rasheed, University of Georgia, USA
Rommel G. Regis, Saint Joseph's University, USA


  • Contents

1 A Critical Review of Adaptive Penalty Techniques in Evolutionary Computation . . . 1
   Helio J.C. Barbosa, Afonso C.C. Lemonge and Heder S. Bernardino

2 Ruggedness Quantifying for Constrained Continuous Fitness Landscapes . . . 29
   Shayan Poursoltan and Frank Neumann

3 Trust Regions in Surrogate-Assisted Evolutionary Programming for Constrained Expensive Black-Box Optimization . . . 51
   Rommel G. Regis

4 Ephemeral Resource Constraints in Optimization . . . 95
   Richard Allmendinger and Joshua Knowles

5 Incremental Approximation Models for Constrained Evolutionary Optimization . . . 135
   Sanghoun Oh and Yaochu Jin

6 Efficient Constrained Optimization by the ε Constrained Differential Evolution with Rough Approximation . . . 157
   Tetsuyuki Takahama and Setsuko Sakai

7 Analyzing the Behaviour of Multi-recombinative Evolution Strategies Applied to a Conically Constrained Problem . . . 181
   Jeremy Porter and Dirk V. Arnold

8 Locating Potentially Disjoint Feasible Regions of a Search Space with a Particle Swarm Optimizer . . . 205
   Mohammad Reza Bonyadi and Zbigniew Michalewicz

9 Ensemble of Constraint Handling Techniques for Single Objective Constrained Optimization . . . 231
   Rammohan Mallipeddi, Swagatam Das and Ponnuthurai Nagaratnam Suganthan

10 Evolutionary Constrained Optimization: A Hybrid Approach . . . 249
   Rituparna Datta and Kalyanmoy Deb

About the Book . . . 315

Index . . . 317

  • About the Editors

Rituparna Datta is a postdoctoral research fellow with the Robot Intelligence Technology (RIT) Laboratory at the Korea Advanced Institute of Science and Technology (KAIST). He earned his Ph.D. in Mechanical Engineering at the Indian Institute of Technology (IIT) Kanpur and thereafter worked as a Project Scientist in the Smart Materials, Structures, and Systems Lab at IIT Kanpur. His current research involves the investigation of evolutionary algorithm-based approaches to constrained optimization, the application of multiobjective optimization to engineering design problems, memetic algorithms, derivative-free optimization, and robotics. He is a member of the ACM, IEEE, and the IEEE Computational Intelligence Society. He has been invited to deliver lectures at several institutes and universities across the globe, including Trinity College Dublin (TCD), Delft University of Technology (TU Delft), the University of Western Australia (UWA), the University of Minho, Portugal, the University Nova de Lisboa, Portugal, the University of Coimbra, Portugal, and IIT Kanpur, India. He is a regular reviewer for IEEE Transactions on Evolutionary Computation, the Journal of Applied Soft Computing, the Journal of Engineering Optimization, the Journal of The Franklin Institute, and the International Journal of Computer Systems in Science and Engineering, and has served on the program committees of the Genetic and Evolutionary Computation Conference (GECCO 2014), iNaCoMM 2013, GECCO 2013, GECCO 2012, GECCO 2011, the eighth international conference on Simulated Evolution And Learning (SEAL 2010), the international conference on molecules to materials (ICMM-06), and some Indian conferences. He has also chaired sessions at ACODS 2014, the UKIERI Workshop on Structural Health Monitoring 2012, GECCO 2011, and IICAI 2011, to name a few. He was awarded an international travel grant (Young Scientist) from the Department of Science and Technology, Government of India, in July 2011 and June 2012, travel grants from Queensland University, Australia, in June 2012, and a GECCO Student Travel Grant from the ACM, New York.


  • Prof. Kalyanmoy Deb is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering at Michigan State University (MSU), East Lansing, USA. He also holds professor positions at the Department of Computer Science and Engineering and at the Department of Mechanical Engineering at MSU. Prof. Deb's main research interests are in genetic and evolutionary optimization algorithms and their application in optimization, modeling, and machine learning. He is largely known for his seminal research in developing and applying evolutionary multi-objective optimization. Prior to coming to MSU, he held an endowed chair professor position at Indian Institute of Technology Kanpur, India, where he established KanGAL (http://www.iitk.ac.in/kangal) in 1997 to promote research in genetic algorithms and multi-criterion optimization. His Computational Optimization and Innovation (COIN) Laboratory (http://www.egr.msu.edu/kdeb) at Michigan State University continues to act in the same spirit. He has consulted with various industries and software companies in the past. Prof. Deb was awarded the prestigious Infosys Prize in 2012, the TWAS Prize in Engineering Sciences in 2012, the CajAstur Mamdani Prize in 2011, the JC Bose National Fellowship in 2011, the Distinguished Alumni Award from IIT Kharagpur in 2011, the Edgeworth-Pareto Award in 2008, the Shanti Swarup Bhatnagar Prize in Engineering Sciences in 2005, and the Thomson Citation Laureate Award from Thomson Reuters. Recently, he was awarded an Honorary Doctorate from the University of Jyväskylä, Finland. His 2002 IEEE Transactions on Evolutionary Computation NSGA-II paper has been judged a Most Highly Cited paper and a Current Classic by Thomson Reuters, having more than 4,200 citations. He is a fellow of IEEE, ASME, the Indian National Science Academy (INSA), the Indian National Academy of Engineering (INAE), the Indian Academy of Sciences (IASc), and the International Society for Genetic and Evolutionary Computation (ISGEC). He has written two textbooks on optimization and more than 375 international journal and conference research papers, with Google Scholar citations of 65,000+ and an h-index of 85. He serves on the editorial boards of 20 major international journals. More information about his research can be found at http://www.egr.msu.edu/kdeb.


  • Chapter 1
    A Critical Review of Adaptive Penalty Techniques in Evolutionary Computation

    Helio J.C. Barbosa, Afonso C.C. Lemonge and Heder S. Bernardino

Abstract Constrained optimization problems are common in the sciences, engineering, and economics. Due to the growing complexity of the problems tackled, nature-inspired metaheuristics in general, and evolutionary algorithms in particular, are becoming increasingly popular. As move operators (recombination and mutation) are usually blind to the constraints, most metaheuristics must be equipped with a constraint handling technique. Although conceptually simple, penalty techniques usually require user-defined problem-dependent parameters, which often significantly impact the performance of a metaheuristic. A penalty technique is said to be adaptive when it automatically sets the values of all parameters involved using feedback from the search process without user intervention. This chapter presents a survey of the most relevant adaptive penalty techniques from the literature, identifies the main concepts used in the adaptation process, as well as observed shortcomings, and suggests further work in order to increase the understanding of such techniques.

Keywords Adaptive techniques · Penalty techniques · Evolutionary computation

    1.1 Introduction

Constrained optimization problems are common in the sciences, engineering, and economics. Due to the growing complexity of the problems tackled, nature-inspired metaheuristics in general, and evolutionary algorithms in particular, are becoming increasingly popular.

H.J.C. Barbosa (✉)
National Laboratory for Scientific Computing (LNCC), Petrópolis, Rio de Janeiro, RJ, Brazil
e-mail: [email protected]

A.C.C. Lemonge
Department of Applied and Computational Mechanics, Federal University of Juiz de Fora, Juiz de Fora, MG, Brazil
e-mail: [email protected]

H.S. Bernardino · H.J.C. Barbosa
Department of Computer Science, Federal University of Juiz de Fora, Juiz de Fora, MG, Brazil
e-mail: [email protected]

© Springer India 2015
R. Datta and K. Deb (eds.), Evolutionary Constrained Optimization, Infosys Science Foundation Series, DOI 10.1007/978-81-322-2184-5_1



That is due to the fact that, in contrast to classical mathematical programming techniques, they can be readily applied to situations where the objective function(s) and/or constraints are not known as explicit functions of the decision variables. This happens when potentially expensive computer models (generated by means of the finite element method (Hughes 1987), for example) must be run in order to compute the objective function and/or check the constraints every time a candidate solution needs to be evaluated. For instance, in the design of truss structures, one possible definition of the problem is to find the cross-section areas of the bars that minimize the structure's weight subject to limitations on the nodal displacements and on the stress in each bar (Krempser et al. 2012). Notice that although the structure's weight can be easily calculated from the design variables, the values of the nodal displacements and of the stress in each bar are determined by solving the equilibrium equations defined by the finite element model.

As move operators (recombination and mutation) are usually blind to the constraints (i.e., when operating upon feasible individual(s) they do not necessarily generate feasible offspring), most metaheuristics must be equipped with a constraint handling technique. In simpler situations, repair techniques (Salcedo-Sanz 2009), special move operators (Schoenauer and Michalewicz 1996), or special decoders (Koziel and Michalewicz 1998) can be designed to ensure that all candidate solutions are feasible.

We do not attempt to survey the current literature on constraint handling in this chapter, and the reader is referred to the survey papers of, e.g., Michalewicz (1995), Michalewicz and Schoenauer (1996), Coello (2002), and Mezura-Montes and Coello (2011), as well as to the other chapters in this book. Instead we consider the oldest, and perhaps most general, class of constraint handling methods: the penalty techniques, where infeasible candidate solutions have their fitness value reduced and are allowed to coexist and evolve with the feasible ones.

Although conceptually simple, penalty techniques usually require user-defined, problem-dependent parameters, which often significantly impact the performance of a metaheuristic.

The main focus of this chapter is on adaptive penalty techniques, which automatically set the values of all parameters involved using feedback from the search process without user intervention. This chapter presents a survey of the most relevant adaptive penalty techniques from the literature as well as a critical assessment of their assumptions, the rationale for the design choices made, and reported performance on test problems.

The chapter is structured as follows. Section 1.2 summarizes the penalty method, Sect. 1.3 introduces the main taxonomy for strategy parameter control, and Sect. 1.4 reviews some representative proposals for adapting penalty parameters. Section 1.5 presents a discussion of the main findings, and the chapter ends with some conclusions, including suggestions for further work in order to increase the understanding of such adaptive techniques.


    1.2 The Penalty Method

We consider in this chapter the constrained optimization problem consisting in the minimization of a given objective function $f(x)$, where $x \in \mathbb{R}^n$ is the vector of decision/design variables, which are subject to inequality constraints $g_p(x) \le 0$, $p = 1, 2, \ldots, \bar{p}$, as well as equality constraints $h_q(x) = 0$, $q = 1, 2, \ldots, \bar{q}$. In many applications the variables are also subject to bounds $x_i^L \le x_i \le x_i^U$. However, this type of constraint is usually trivially enforced in an EA and is not considered here. The set of all feasible solutions is denoted by $\mathcal{F}$, while $d(x, \mathcal{F})$ is a distance measure of the element $x$ to the set $\mathcal{F}$. The definition of $d(x, \mathcal{F})$ depends on the particular constraint-handling strategy adopted and is specified for each strategy independently.

The penalty method, which transforms a constrained optimization problem into an unconstrained one, can be traced back at least to the paper by Courant (1943) in the 1940s, and its adoption by the evolutionary computation community happened very soon.

In this chapter, penalty techniques used within evolutionary computation methods are classified as multiplicative or additive. In the multiplicative case, a positive penalty factor $p(v(x), T)$ is introduced, where $v(x)$ denotes a measure of how the constraints are violated by the candidate solution $x$ and $T$ denotes a temperature. The idea is to amplify the value of the fitness function of an infeasible individual (in a minimization problem):

$$F(x) = p(v(x), T)\, f(x).$$

One would have $p(v(x), T) = 1$ for any feasible candidate solution $x$, and $p(v(x), T) > 1$ otherwise. Also, $p(v(x), T)$ increases with the temperature $T$ and with the magnitude of the constraint violation $v(x)$. An initial value for the temperature is required, together with the definition of a schedule for $T$ such that $T$ grows as the evolution advances. This type of penalty has, however, received much less attention in the EC community than the additive type. The most recent work seems to be by Puzzi and Carpinteri (2008), where the technique introduced by Yokota et al. (1995), and later modified in Gen and Cheng (1996), is also presented. Harrell and Ranjithan (1999) compare additive and multiplicative penalty techniques for an instance of the watershed management problem.

In the additive case, a penalty functional is added to the objective function in order to define the fitness value of an infeasible element. These techniques can be further divided into: (a) interior techniques, where a barrier functional $B(x)$, which grows rapidly as $x \in \mathcal{F}$ approaches the boundary of the feasible domain, is added to the objective function

$$F_k(x) = f(x) + \frac{1}{k}\, B(x)$$


and (b) exterior techniques, where a penalty functional is introduced:

$$F_k(x) = f(x) + k\, P(d(x, \mathcal{F})) \qquad (1.1)$$

such that $P(d(x, \mathcal{F})) = 0$ if $x$ is feasible ($x \in \mathcal{F}$) and $P(\cdot) > 0$ otherwise. In both cases (a) and (b), under reasonable conditions, as $k \to \infty$ any limit point of the sequence $\{x_k\}$ of solutions of the unconstrained problems of minimizing $F_k(x)$ is a solution of the original constrained problem (Luenberger and Ye 2008).

In order to define $d(x, \mathcal{F})$ it is useful to define a measure of the violation of the $j$th constraint by a given candidate solution $x \in \mathbb{R}^n$. One possibility is to take

$$v_j(x) = \begin{cases} |h_j(x)|, & \text{for an equality constraint,}\\ \max\{0,\, g_j(x)\}, & \text{otherwise} \end{cases} \qquad (1.2)$$

However, the equality constraints $h_j(x) = 0$ are often replaced by the inequalities $|h_j(x)| - \varepsilon \le 0$, for some small positive $\varepsilon$, and one would have

$$v_j(x) = \begin{cases} \max\{0,\, |h_j(x)| - \varepsilon\}, & \text{for an equality constraint,}\\ \max\{0,\, g_j(x)\}, & \text{otherwise} \end{cases} \qquad (1.3)$$

For computational efficiency the violations $v_j(x)$ are used to compute a substitute for $d(x, \mathcal{F})$ in the design of penalty functions that grow with the vector of violations $v(x) \in \mathbb{R}^m$, where $m = \bar{p} + \bar{q}$ is the number of constraints to be penalized. At this point it is easy to see that interior penalty techniques, in contrast to exterior ones, require feasible solutions (which are often hard to find), thus explaining the high popularity of the latter.

The most popular penalty function is perhaps (Luenberger and Ye 2008)

$$P(x) = \sum_{j=1}^{m} (v_j(x))^2 \qquad (1.4)$$

where $P(d(x, \mathcal{F}))$ is equal to the square of the Euclidean norm of $v(x)$.

Although it is conceptually easy to obtain the unconstrained problem, the definition of good penalty parameter(s) is usually a time-consuming, problem-dependent, trial-and-error process.
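To make the mechanics concrete, the following minimal Python sketch implements the exterior technique of Eqs. (1.1), (1.3), and (1.4); the function names, the tolerance value, and the fixed penalty parameter `k` are illustrative assumptions rather than part of the chapter.

```python
import numpy as np

def violations(x, ineqs, eqs, eps=1e-4):
    """Per-constraint violation measures v_j(x), Eq. (1.3); constraints are
    callables with g_j(x) <= 0 and h_j(x) = 0 (relaxed to |h_j(x)| <= eps)."""
    v = [max(0.0, g(x)) for g in ineqs]
    v += [max(0.0, abs(h(x)) - eps) for h in eqs]
    return np.array(v)

def penalized_fitness(x, f, ineqs, eqs, k=1e3):
    """Exterior quadratic penalty, Eqs. (1.1) and (1.4), with a fixed k."""
    return f(x) + k * np.sum(violations(x, ineqs, eqs) ** 2)

# Toy usage: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# written as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
print(penalized_fitness(np.array([0.2, 0.3]), f, [g], []))
```

The time-consuming part, as the text notes, is choosing `k`: too small and infeasible solutions dominate, too large and the search stalls at the feasibility boundary.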

One must also note that even if both the objective function $f(x)$ and the distance to the feasible set $\mathcal{F}$ (usually based on the constraint violations $v_j(x)$) are defined for all $x$, it is not possible to know in general which of two given infeasible solutions is closer to the optimum or should be operated upon or kept in the population. One can have $f(x_1) > f(x_2)$ and $v(x_1) = v(x_2)$, or $f(x_1) = f(x_2)$ and $v(x_1) > v(x_2)$, and still have $x_1$ closer to the optimum. Figure 1.1 illustrates these situations.


Fig. 1.1 Illustration of situations in which $x_1$ is closer to the optimum ($x^*$) than $x_2$ even when: (a) $f(x_1) = f(x_2)$ and $v(x_1) > v(x_2)$; or (b) $f(x_1) > f(x_2)$ and $v(x_1) = v(x_2)$

    1.3 A Taxonomy

In order to organize the large number of penalty techniques available in the literature, Coello (2002) proposed the following taxonomy: (a) static penalty, (b) dynamic penalty, (c) annealing penalty, (d) adaptive penalty, (e) co-evolutionary penalty, and (f) death penalty. We think, however, that the general definitions proposed by Eiben and Smith (2003) with respect to the way strategy parameters are set within metaheuristics in general, and evolutionary algorithms in particular, can be naturally adopted here.

Beyond the simplest static case, where strategy parameters are defined by the user and remain fixed during the run, dynamic schemes have also been used, where an exogenous schedule is proposed in order to define the strategy parameters at any given point in the search process. It is easy to see that if setting fixed parameters is not trivial, defining the way they should vary during the run seems to be even harder. It is also felt that such strategy parameters should not be defined before the run but rather vary according to what is actually happening in the search process. This gives rise to the so-called adaptive techniques, where feedback from the search process is used to define the current strategy parameters.

From the reasoning above, the death penalty can be included as a particular case of static penalty, and the annealing penalty can be seen as a dynamic penalty scheme. Co-evolutionary penalty techniques are considered in Sect. 1.5.2.

It should be noted here that the design of the adaptive mechanisms mentioned above often involves meta-parameters or, at least, implicit design choices. The rationale here is that such meta-parameters should be easier to set appropriately, preferably fixed by the designer, with no posterior user intervention required. However, the parameter setting in some adaptive techniques can be as hard as in the case of the static ones (Coello 2002), contradicting the main objective of adaptive penalty methods.

Finally, an even more ambitious proposal can be found in the literature: the self-adaptive schemes. In this case, strategy parameters are coded together with the candidate solution, and conditions are created so that the evolutionary algorithm not only evolves increasingly better solutions but also better-adapted strategy parameters. With this increasing sophistication in the design of the algorithms, one not only seeks to improve performance but also to relieve the user from the task of strategy parameter setting and control.

However, as will be shown in the next section, another possibility, which has not been contemplated in the taxonomy considered above, can be found in the literature for the task of automatically setting strategy parameters. The idea is to maintain an additional population with the task of co-evolving such strategy parameters (here, penalty coefficients) along with the standard population evolving the solutions to the constrained optimization problem at hand.

    1.4 Some Adaptive Techniques

In this section some selected papers from the literature are reviewed in order to provide an overview of the diversity of techniques proposed for automatically setting the parameters involved in the various penalty schemes for constrained optimization. Such techniques not only intend to relieve the user from the task of parameter setting for each new application, but also to improve the final performance in the case at hand by adapting the values of those parameters along the search process in a principled way. Table 1.1 presents a summary of the adaptive penalty techniques cited in this section. Some references are not included in the table as they extend a previous work without requiring any additional information.

The main lines of reasoning have been identified, and a few representative proposals of each line have been grouped together in the following subsections.

    1.4.1 The Early Years

A procedure where the penalty parameters change according to information gathered during the evolution process was proposed by Bean and Alouane (1992). The fitness function is again given by (1.1), but with the penalty parameter $k = \lambda(t)$ adapted at each generation by the following rules:

$$\lambda(t+1) = \begin{cases} \frac{1}{\beta_1}\, \lambda(t), & \text{if } b_i \in \mathcal{F} \text{ for all } t - g + 1 \le i \le t\\ \beta_2\, \lambda(t), & \text{if } b_i \notin \mathcal{F} \text{ for all } t - g + 1 \le i \le t\\ \lambda(t), & \text{otherwise} \end{cases}$$

where $b_i$ is the best element at generation $i$, $\mathcal{F}$ is the feasible region, $\beta_1 \ne \beta_2$, and $\beta_1, \beta_2 > 1$. In this method the penalty parameter of the next generation, $\lambda(t+1)$, decreases when all best elements in the last $g$ generations were feasible, increases if all best elements were infeasible, and otherwise remains unchanged.
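A minimal sketch of this update rule follows; it assumes the caller keeps a record of whether the best element of each of the last g generations was feasible, and all names and default values are illustrative.

```python
def update_lambda(lam, best_was_feasible, beta1=1.2, beta2=1.5):
    """Bean and Alouane (1992): best_was_feasible records, for each of the
    last g generations, whether that generation's best element was feasible.
    Decrease lambda if feasible throughout, increase it if infeasible
    throughout, keep it unchanged otherwise (beta1 != beta2, both > 1)."""
    if all(best_was_feasible):
        return lam / beta1
    if not any(best_was_feasible):
        return lam * beta2
    return lam
```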

  • 1 A Critical Review of Adaptive Penalty Techniques in Evolutionary Computation 7

Table 1.1 Summary of the adaptive penalty techniques described here

Reference: Used information

Bean and Alouane (1992): feasibility of the best element.
Coit et al. (1996): degree of infeasibility; difference between the fitnesses of the best and best feasible individuals.
Hamida and Schoenauer (2000): percentage of feasible individuals; ratio between the sum of the objective function values and the constraint violations.
Nanakorn and Meesomklin (2001): mean of the objective function values of the feasible solutions.
Beaser et al. (2011): average of the objective function values; degree of infeasibility.
Barbosa and Lemonge (2003b); Lemonge and Barbosa (2004); Rocha and Fernandes (2009): average of the objective function values; average of the violation values of each constraint.
Farmani and Wright (2003): normalized violation values; objective function value of the worst solution.
Lin and Wu (2004): percentage of feasible solutions with respect to each constraint; ratio between the objective function value and a given constraint violation; fitness of the best solution; number of objective function evaluations; difference between the medians of the objective function values of feasible and infeasible solutions; ratio of the previous value and the median of the constraint violations.
Tessema and Yen (2006, 2009): percentage of feasible solutions; average of the normalized constraint violation values; normalized objective function value.
Wang et al. (2009): degree of infeasibility; percentage of feasible solutions.
Gan et al. (2010): percentage of feasible solutions.
Costa et al. (2013): degree of infeasibility; objective function value of the worst solution; constraint violation of the equality constraints for the best solution.
Vincenti et al. (2010); Montemurro et al. (2013): objective function value of the best feasible solution; objective function value of the best infeasible solution; difference between the two previous values; ratio between the previous difference and the violation value of each constraint.


The method proposed by Coit et al. (1996) uses the fitness function $F(x)$ written as

$$F(x) = f(x) + (F_{\text{feas}} - F_{\text{all}}) \sum_{j=1}^{m} \left(\frac{d_j(x, \mathcal{F})}{\text{NFT}_j}\right)^{K_j}$$

where $f(x)$ is the unpenalized objective function value for the solution $x$, $F_{\text{all}}$ corresponds to the best solution already found, $F_{\text{feas}}$ corresponds to the best feasible solution already found, and $d_j(x, \mathcal{F})$ returns the distance between $x$ and the feasible region (problem dependent). $K_j$ and $\text{NFT}_j$, the near-feasible threshold of the $j$th constraint, are user-defined parameters.

Rasheed (1998) proposed an adaptive penalty approach for handling constraints within a GA. The strategy requires the user to set a relatively small penalty parameter, which is then increased or decreased on demand as the optimization progresses. The method was tested on a realistic continuous-variable conceptual design of a supersonic transport aircraft and on the design of supersonic missile inlets, as well as on benchmark engineering problems. The fitness of each individual was based on the sum of an adequate measure of merit computed by a simulator (such as the take-off mass of an aircraft). If the fitness value is between $V$ and $10V$, where $V$ is a power of 10, the penalty coefficient starts with the value $V/100$. The proposed algorithm tracks two individuals: (i) the individual with the least sum of constraint violations and (ii) the individual with the best fitness value. The penalty coefficient is considered adequate if both individuals are the same; otherwise the penalty coefficient is increased so as to make the two solutions have equal fitness values. The author concluded that the idea of starting with a relatively small initial penalty coefficient and increasing or decreasing it on demand proved to be very good in the computational experiments conducted.

Hamida and Schoenauer (2000) proposed an adaptive scheme named the Adaptive Segregational Constraint Handling Evolutionary Algorithm (ASCHEA), employing: (i) a function of the proportion of feasible individuals in the population; (ii) a seduction/selection strategy to mate feasible and infeasible individuals by applying a specific feasibility-oriented selection operator; and (iii) a selection scheme giving advantage to a given number of feasible individuals. The ASCHEA algorithm was later improved (Hamida and Schoenauer 2002) by considering a niching technique with adaptive radius to handle multimodal functions, and also (i) a segregational selection that distinguishes between feasible and infeasible individuals, (ii) a constraint-driven recombination, where in some cases feasible individuals can only mate with infeasible ones, and (iii) a population-based adaptive penalty method that uses global information on the population to adjust the penalty coefficients. Hamida and Schoenauer (2002) proposed the following penalty function:

$$P(x) = \sum_{j=1}^{m} \alpha_j\, v_j(x) \qquad (1.5)$$


where $\alpha_j$ is adapted as

$$\begin{cases} \alpha_j(t+1) = \alpha_j(t)/\text{fact}, & \text{if } \tau_t(j) > \tau_{\text{target}}\\ \alpha_j(t+1) = \alpha_j(t) \times \text{fact}, & \text{otherwise} \end{cases} \qquad (1.6)$$

where $\text{fact} > 1$ and $\tau_{\text{target}}$ are to be defined by the user (although the authors suggest $\tau_{\text{target}} = 0.5$), and $\tau_t(j)$ is the proportion of individuals that do not violate the $j$th constraint. The idea is to have feasible and infeasible individuals on both sides of the corresponding constraint boundary. The adapted parameters $\alpha_j$, with initial values $\alpha_j(0)$, are computed using the first population, trying to balance the objective function and the constraint violations:

$$\begin{cases} \alpha_j(0) = 1, & \text{if } \sum_{i}^{n} v_j(x_i) = 0\\[4pt] \alpha_j(0) = \dfrac{\sum_{i}^{n} |f(x_i)|}{\sum_{i}^{n} |v_j(x_i)|} \times 100, & \text{otherwise} \end{cases} \qquad (1.7)$$
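The update (1.6) and the initialization (1.7) can be sketched as follows; array layouts, function names, and the numerical guard are assumptions for illustration.

```python
import numpy as np

def aschea_update(alphas, tau, fact=1.1, target=0.5):
    """Eq. (1.6): shrink alpha_j when more than a target fraction of the
    population already satisfies constraint j, grow it otherwise."""
    alphas = np.asarray(alphas, dtype=float)
    tau = np.asarray(tau, dtype=float)      # tau[j]: fraction satisfying j
    return np.where(tau > target, alphas / fact, alphas * fact)

def aschea_init(pop_f, pop_v):
    """Eq. (1.7): balance objective values against violations in the first
    population. pop_v[i, j] holds the violation v_j(x_i)."""
    pop_v = np.abs(np.asarray(pop_v, dtype=float))
    sums = pop_v.sum(axis=0)
    total_f = np.sum(np.abs(pop_f))
    # alpha_j(0) = 1 where constraint j is never violated
    return np.where(sums == 0, 1.0, 100.0 * total_f / np.maximum(sums, 1e-300))
```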

The early proposals reviewed here were not able, in general, to deal adequately with the problem, suggesting that more information from the search process, at the price of added complexity, was required.

    1.4.2 Using More Feedback

Nanakorn and Meesomklin (2001) proposed an adaptive penalty function for a GA that is able to adjust itself during the evolutionary process. According to that method, the penalty is such that

$$F(x) \le \phi(t)\, f_{\text{avg}} \quad \text{for } x \notin \mathcal{F} \qquad (1.8)$$

where $f_{\text{avg}}$ represents the average fitness value of all feasible individuals in the current generation and $\phi(t)$ depends on $f_{\text{avg}}$. Thus, the fitness function is defined as

$$F(x) = f(x) - \lambda(t)\, E(x) \qquad (1.9)$$

where

$$E(x) = \sum_{i=1}^{m} v_i(x) \qquad (1.10)$$

The adaptive parameter $\lambda(t)$ is written as

$$\lambda(t) = \max\left\{0,\; \max_{x \notin \mathcal{F}} \left[\frac{f(x) - \phi(t)\, f_{\text{avg}}}{E(x)}\right]\right\}. \qquad (1.11)$$


The function $\phi(t)$ is defined according to the user-defined parameter $\psi$. If $\psi \ge 1$, then

$$\phi(t) = \frac{C\, f_{\text{avg}} + F_{\max}\,(\psi - 1) - \psi\, f_{\text{avg}}}{(C - 1)\, f_{\text{avg}}} \qquad (1.12)$$

where $C$ is a user-defined parameter giving the maximum scaled fitness value assigned to the best feasible member. The scaled fitness values are used only in the selection procedure and will not be described here.

Otherwise (if $\psi < 1$), $\phi(t)$ is defined by an iterative process that is initialized with $\phi(t) = 1$ and repeated until the value of $\phi(t)$ becomes unchanging. The steps of the procedure are:

(i) calculate $\lambda(t)$ by means of Eq. (1.11);
(ii) evaluate the candidate solutions according to Eq. (1.9);
(iii) obtain $x_{\min}$ and $\bar{x}$, where $F_{\min} = F(x_{\min})$ is the minimum value of $F$ and $\bar{x}$ is the candidate solution that leads to

$$\lambda(t) = \frac{F(\bar{x}) - \phi(t)\, f_{\text{avg}}}{E(\bar{x})} \qquad (1.13)$$

(iv) update $\phi(t)$ by

$$\phi(t) = \frac{(\psi - 1)\, E(x_{\min})\, F(\bar{x}) + E(\bar{x})\left[F(x_{\min}) + f_{\text{avg}} - \psi\, F(x_{\min})\right]}{f_{\text{avg}}\left[E(\bar{x}) + (\psi - 1)\, E(x_{\min})\right]} \qquad (1.14)$$

Beaser et al. (2011) update the adaptive penalty function theory proposed by Nanakorn and Meesomklin (2001), expanding its validity beyond maximization problems to minimization as well. The expanded technique, using a hybrid genetic algorithm, was applied to a problem in chemistry.

The first modification was introduced in Eq. (1.8):

$$F(x) \ge \phi(t)\, f_{\text{avg}} \quad \text{for } x \notin \mathcal{F} \qquad (1.15)$$

Then, the modified value for the parameter $\lambda(t)$ is defined as

$$\lambda(t) = \min\left\{0,\; \min_{x \notin \mathcal{F}} \left[\frac{f(x) - \phi(t)\, f_{\text{avg}}}{E(x)}\right]\right\} \qquad (1.16)$$

An adaptive decision maker (ADM) proposed by Gan et al. (2010) is designed in the form of an adaptive penalty function. The method decides which individuals are maintained in a Pareto optimal set and which individuals are going to be replaced. The fitness function in this strategy is written as usual:

$$F(x) = f(x) + C \cdot G(x). \qquad (1.17)$$


A parameter $r_f$ is introduced, denoting the proportion of feasible solutions in the population, and $C$ is designed as a function of $r_f$, i.e., $C(r_f)$. Two basic rules need to be satisfied: (1) it should be a decreasing function, because the coefficient $C$ should decrease as $r_f$ increases; and (2) when $r_f$ varies from 0 to 1, $C$ should decrease sharply from a large number at the early stage and decrease slowly to a small number at the late stage. The reason is that, with $r_f$ increasing (meaning there are more and more feasible solutions in the population), the search emphasis should shift quickly from low constraint violations to good objective function values. The proposed function that satisfies these two rules is expressed as $C(r_f) = 10^{\alpha(1 - r_f)}$, where $\alpha$ is a positive constant coefficient to be adjusted, and the fitness function is rewritten as

$$F(x) = f(x) + 10^{\alpha(1 - r_f)}\, G(x) \qquad (1.18)$$

Besides, two properties are established: (1) the fitness assignment maps the two-dimensional vector into the real number space; in this way, it is possible to compare the solutions in the Pareto optimal set and select which one is preferable; and (2) the penalty coefficient $C$ varies with the feasibility proportion of the current population, and, if there are no feasible solutions in the population, this parameter receives a relatively large value in order to guide the population in the direction of the feasible space.

The common need for user-defined parameters, together with the difficulty of finding adequate parameter values for each new application, pointed the way to the challenge of designing penalty techniques that do not require such parameters.

    1.4.3 Parameterless Techniques

A parameterless adaptive penalty method (APM) for GAs was proposed by Barbosa and Lemonge (2003b); it does not require knowledge of the explicit form of the constraints as functions of the design variables and is free of parameters to be set by the user. In contrast with other approaches, where a single penalty parameter is used for all constraints, an adaptive scheme automatically sizes the penalty parameter corresponding to each constraint along the evolutionary process. The proposed fitness function is written as

$$F(x) = \begin{cases} f(x), & \text{if } x \text{ is feasible,}\\[2pt] f(x) + \sum_{j=1}^{m} k_j\, v_j(x), & \text{otherwise} \end{cases} \qquad (1.19)$$

The penalty parameter is defined at each generation by

$$k_j = |\langle f(x) \rangle|\, \frac{\langle v_j(x) \rangle}{\sum_{l=1}^{m} [\langle v_l(x) \rangle]^2} \qquad (1.20)$$


where $\langle f(x) \rangle$ is the average of the objective function values in the current population and $\langle v_l(x) \rangle$ is the violation of the $l$th constraint averaged over the current population. The idea is that the values of the penalty coefficients should be distributed in such a way that constraints that are more difficult to satisfy have relatively higher penalty coefficients.

With the proposed definition one can prove the following property: an individual whose $j$th violation equals the average of the $j$th violation in the current population, for all $j$, has a penalty equal to the absolute value of the average fitness of the population.
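A compact sketch of Eqs. (1.19) and (1.20) is given below; it assumes `pop_f` holds the objective values of the current population and `pop_v` the per-constraint violations, with all names being illustrative.

```python
import numpy as np

def apm_coefficients(pop_f, pop_v):
    """Penalty coefficients of Eq. (1.20). pop_f holds the objective values of
    the current population; pop_v[i, j] holds the violation v_j(x_i)."""
    pop_v = np.asarray(pop_v, dtype=float)
    f_avg = np.mean(pop_f)                  # <f(x)>
    v_avg = pop_v.mean(axis=0)              # <v_j(x)>, one entry per constraint
    denom = np.sum(v_avg ** 2)
    return np.abs(f_avg) * v_avg / denom if denom > 0 else np.zeros_like(v_avg)

def apm_fitness(f_x, v_x, k):
    """Eq. (1.19): penalize infeasible candidates only."""
    v_x = np.asarray(v_x, dtype=float)
    return f_x if not v_x.any() else f_x + float(np.dot(k, v_x))
```

Note how a constraint whose average violation dominates the others receives the largest share of the penalty, which is exactly the distribution property stated above.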

The performance of the APM was examined using test problems from the evolutionary computation literature as well as structural engineering constrained optimization problems, but the algorithm presented difficulties in solving some benchmark problems, for example the functions G2, G6, G7, and G10 proposed by Michalewicz and Schoenauer (1996). That was improved in the conference paper (Barbosa and Lemonge 2002), where $f(x)$ in the definition of the fitness of infeasible individuals in Eq. (1.19) was changed to

$$\bar{f}(x) = \begin{cases} f(x), & \text{if } f(x) > \langle f(x) \rangle,\\ \langle f(x) \rangle, & \text{otherwise} \end{cases} \qquad (1.21)$$

where $\langle f(x) \rangle$ is the average of the objective function values in the current population. The new version was tested (Lemonge and Barbosa 2004) on benchmark engineering optimization problems and on the G-Suite (Michalewicz and Schoenauer 1996) with a more robust performance.

The procedure proposed by Barbosa and Lemonge (2002), originally conceived for a generational GA, was extended to the case of a steady-state GA (Barbosa and Lemonge 2003a), where, in each generation, usually only one or two new individuals are introduced in the population. Substantial modifications were necessary to obtain good results on a standard test-problem suite (Barbosa and Lemonge 2003a). The fitness function for an infeasible individual is now computed according to the equation

$$F(x) = H + \sum_{j=1}^{m} k_j\, v_j(x) \qquad (1.22)$$

where $H$ is defined as

$$H = \begin{cases} f(x_{\text{worst}}), & \text{if there is no feasible element in the population,}\\ f(x_{\text{bestFeasible}}), & \text{otherwise} \end{cases} \qquad (1.23)$$

and the penalty coefficients are redefined as

$$k_j = H\, \frac{\langle v_j(x) \rangle}{\sum_{l=1}^{m} [\langle v_l(x) \rangle]^2} \qquad (1.24)$$


Also, every time a better feasible element is found (or the number of new elements inserted into the population reaches a certain level), $H$ is redefined and all fitness values are recomputed. The updating of each penalty coefficient is performed in such a way that no reduction in its value is allowed. The fitness function value is then computed using Eqs. (1.22)-(1.24). It is clear from the definition of $H$ in (1.23) that if no feasible element is present in the population, one is actually minimizing a measure of the distance of the individuals to the feasible set, since the actual value of the objective function is not taken into account. However, when a feasible element is found, it immediately enters the population since, after updating all fitness values using (1.19), (1.23), and (1.24), it becomes the element with the best fitness value.

Later, APM variants were introduced with respect to the definition of the penalty parameter $k_j$ (Barbosa and Lemonge 2008). The APM, as originally proposed, computes the constraint violations in the initial population and updates all penalty coefficients, for each constraint, after a given number of offspring are inserted in the population. A second variant, called sporadic APM with constraint violation accumulation, accumulates the constraint violations during a given number of insertions of new offspring in the population, updates the penalty coefficients, and keeps the coefficients for the following generations. The APM with monotonic penalty coefficients is the third variant, where the penalty coefficients are calculated as in the original method, but no penalty coefficient is allowed to have its value reduced along the evolutionary process. Finally, the penalty coefficients can be defined by using a weighted average between the previous value of a coefficient and the new value predicted by the method. This variant is called the APM with damping. Besides that, these variants of the APM were extended to the steady-state GA and presented in Lemonge et al. (2012).

Rocha and Fernandes (2009) proposed alternative expressions for the APM penalty coefficients:

$$k_j = \sum_{i=1}^{pop} f(x_i)\; \frac{\sum_{i=1}^{pop} v_j(x_i)}{\sum_{k=1}^{m} \sum_{i=1}^{pop} v_k(x_i)}$$

and also

$$k_j = \sum_{i=1}^{pop} f(x_i) \left[\exp\left(\frac{\sum_{i=1}^{pop} v_j(x_i)}{\sum_{k=1}^{m} \sum_{i=1}^{pop} v_k(x_i)}\right)^{l} - 1\right]$$

with $l \in \{1, 2\}$.

Farmani and Wright (2003) introduced a parameterless adaptive technique that uses information about the degree of infeasibility of solutions, written as

$$u(x) = \frac{1}{m} \sum_{j=1}^{m} \frac{v_j(x)}{v_{\max_j}} \qquad (1.25)$$


where $m$ is the total number of inequality and equality constraints, and $v_{\max_j}$ is the maximum value of the $j$th violation in the current population. The worst of the infeasible solutions, $x_{\text{worst}}$, is selected by comparing all infeasible individuals against the best individual $x_{\text{best}}$. Two potential population distributions exist in relation to this: (i) if one or more of the infeasible solutions have an objective function value lower than $f(x_{\text{best}})$, then $x_{\text{worst}}$ is taken as the infeasible solution having the highest infeasibility value among those with an objective function value lower than $f(x_{\text{best}})$; if more than one individual exists with the same highest degree of infeasibility, then $x_{\text{worst}}$ is taken as the solution with the maximum infeasibility value and the lower of the objective function values; and (ii) when all of the infeasible solutions have an objective function value greater than $f(x_{\text{best}})$, $x_{\text{worst}}$ is identified as the solution with the highest degree of infeasibility; if more than one individual in the population has the same highest infeasibility value, then $x_{\text{worst}}$ is taken as the solution with the maximum infeasibility value and the higher of the objective function values. The highest objective function value in the current population, used to penalize the infeasible individuals, is denoted by $f(x_{\max})$. The method is applied in two stages, where the first stage considers the case in which one or more infeasible solutions have a lower, and potentially better, objective function value (minimization problem) than the $x_{\text{best}}$ solution, i.e., $(x \mid f(x) < f(x_{\max}) \wedge u(x) > 0)$. A linear relationship between the degrees of infeasibility of $x_{\text{best}}$ and $x_{\text{worst}}$ is considered as

$$\bar{u}(x) = \frac{u(x) - u(x_{\text{worst}})}{u(x_{\text{best}}) - u(x_{\text{worst}})} \qquad (1.26)$$

Thus, the fitness function $F_{1st}(x)$ in the first stage is written as

$$F_{1st}(x) = f(x) + \bar{u}(x)\left(f(x_{\max}) - f(x_{\text{worst}})\right) \qquad (1.27)$$

The second stage increases the objective function such that the penalized objective function value of the worst infeasible individual, $F_{2nd}(x_{\text{worst}})$, is equal to that of the worst objective individual (Eqs. (1.28) and (1.29)):

$$F_{2nd}(x) = F_{1st}(x) + \gamma\, |F_{1st}(x)| \left(\frac{\exp(2.0\, \bar{u}(x)) - 1}{\exp(2.0) - 1}\right) \qquad (1.28)$$

and

$$\gamma = \begin{cases} \dfrac{f(x_{\max}) - f(x_{\text{best}})}{f(x_{\text{best}})}, & \text{if } f(x_{\text{worst}}) \le f(x_{\text{best}})\\[6pt] 0, & \text{if } f(x_{\text{worst}}) = f(x_{\max})\\[6pt] \dfrac{f(x_{\max}) - f(x_{\text{worst}})}{f(x_{\text{worst}})}, & \text{if } f(x_{\text{worst}}) > f(x_{\text{best}}) \end{cases} \qquad (1.29)$$

The scaling factor $\gamma$ is introduced to ensure that the penalized value of the worst infeasible solution is equivalent to the highest objective function value in the current population. $\gamma = 0$ (second case in Eq. (1.29)) is used when the worst infeasible individual has an objective function value equal to the highest in the population. In this case, no penalty is applied, since the infeasible solutions would naturally have a low fitness and should not be penalized further. The use of absolute values of the fitness function in Eq. (1.29) is considered since objective functions being minimized may take negative values.
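Since the normalized infeasibility measure of Eq. (1.25) is the building block of this and of later techniques in this chapter, a short sketch of it may be useful; the array layout and function name are assumptions.

```python
import numpy as np

def infeasibility_degree(pop_v):
    """Eq. (1.25): per-individual average of the constraint violations, each
    normalized by that constraint's maximum violation in the population.
    pop_v[i, j] holds v_j(x_i); feasible individuals get u = 0."""
    pop_v = np.asarray(pop_v, dtype=float)
    v_max = pop_v.max(axis=0)
    v_max[v_max == 0] = 1.0     # constraints nobody violates contribute 0
    return (pop_v / v_max).mean(axis=1)
```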

A self-organizing adaptive penalty strategy (SOAPS) is presented in Lin and Wu (2004), featuring the following aspects: (1) the values of the penalty parameters are automatically determined according to the population distribution; (2) the penalty parameter for each constraint is independently determined; (3) the objective and constraint functions are automatically normalized; (4) no parameters need to be defined by the user; and (5) solutions are maintained evenly distributed on both the feasible and infeasible sides of each constraint boundary. The pseudo-objective function defined by the proposed algorithm is given as

$$F(x) = f(x) + P(x) \qquad (1.30)$$

where the penalty function $P(x)$ is written as

$$P(x) = \frac{100 + t}{100}\; \frac{1}{p + 2q} \sum_{j=1}^{m} r_j^t\, v_j(x) \qquad (1.31)$$

where $t$ is the generation, $r_j^t$ is the penalty parameter for the $j$th constraint at generation $t$, and $p$ and $q$ are the numbers of inequality and equality constraints, respectively.

The penalty parameter $r_j^t$ for the $j$th constraint at the $t$th generation is set as

$$r_j^t = r_j^{t-1} \left(1 - \frac{\rho_t(j) - 0.5}{5}\right), \quad t \ge 1 \qquad (1.32)$$

where $\rho_t(j)$ is the percentage of feasible solutions with respect to the $j$th constraint at the $t$th generation. This parameter is adapted during the evolutionary process, and its initial value is set as

$$r_j^0 = \frac{QR^1_{\text{obj}}}{QR^1_{\text{con}_j}} \qquad (1.33)$$

where $QR^1_{\text{obj}}$ and $QR^1_{\text{con}_j}$ are the interquartile ranges of the objective function values and of the $j$th constraint function values, respectively, in the initial population.
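A sketch of the penalty (1.31) and of the parameter update (1.32) follows; all names are illustrative.

```python
def soaps_update(r_prev, rho_j):
    """Eq. (1.32): nudge the j-th penalty parameter so that roughly half of
    the population stays on each side of the constraint boundary. rho_j is
    the fraction of solutions feasible with respect to constraint j."""
    return r_prev * (1.0 - (rho_j - 0.5) / 5.0)

def soaps_penalty(t, r, v, p, q):
    """Eq. (1.31): generation-scaled weighted sum of violations, averaged
    over p inequality and q equality constraints."""
    weighted = sum(rj * vj for rj, vj in zip(r, v))
    return (100.0 + t) / 100.0 * weighted / (p + 2 * q)
```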

    mization problems with inequality constraints, it had difficulties in solving problemswith equality constraints. The authors presented in the same paper (Wu and Lin2004) a modification (with added complexity) of the first version of the algorithm.They detected that the initial penalty parameter for a constraint may become unde-sirably large due to the poor initial population distribution. A sensitivity analysis of

  • 16 H.J.C. Barbosa et al.

    the parameter r0j was done by the authors and they concluded that enlarged penaltiesundesirably occur because solutionswith these unexpected large constraint violationsare not evenly sampled in the initial population. The value for F(x) in the secondgeneration of SOAPS is written as

$$F(x) = \begin{cases} f(x), & \text{if } x \in \mathcal{F}\\ f(x)\,(1 - r_{\text{GEN}}) + F_{\text{BASE}}\, r_{\text{GEN}} + P(x), & \text{otherwise} \end{cases} \qquad (1.34)$$

where $F_{\text{BASE}}$ is the minimum objective function value among all feasible solutions or, in the absence of feasible solutions, among the infeasible solutions with the smallest amount of constraint violation. The value of $r_{\text{GEN}}$ is given by the number of function evaluations performed so far divided by the total number of function evaluations. The expression for $P(x)$ is

$$P(x) = \sum_{j} r_j^t\, v_j(x) \qquad (1.35)$$

The modified initial penalty coefficient is rewritten as

$$r_j^0 = \begin{cases} \dfrac{\mathrm{med}^1_{\mathrm{obj,feas}_j} - \mathrm{med}^1_{\mathrm{obj,infeas}_j}}{\mathrm{med}^1_{\mathrm{con}_j}}, & \text{if } \mathrm{med}^1_{\mathrm{obj,feas}_j} \ge \mathrm{med}^1_{\mathrm{obj,infeas}_j}\\[6pt] 0.5\, \dfrac{\mathrm{med}^1_{\mathrm{obj,infeas}_j} - \mathrm{med}^1_{\mathrm{obj,feas}_j}}{\mathrm{med}^1_{\mathrm{con}_j}}, & \text{otherwise} \end{cases} \qquad (1.36)$$

where $\mathrm{med}^1_{\mathrm{obj,feas}_j}$ is the median of the objective function values of the feasible solutions, and $\mathrm{med}^1_{\mathrm{obj,infeas}_j}$ is the median of the objective function values of all infeasible solutions with respect to the $j$th constraint, in the initial population. The value $\mathrm{med}^1_{\mathrm{con}_j}$ represents the median of all constraint violations of the $j$th constraint in the initial population. The value of $\mathrm{med}_{\mathrm{obj,feas}}$, used in Eq. (1.36), satisfies

$$\mathrm{med}_{\mathrm{obj,feas}} = \mathrm{med}_{F,\mathrm{feas}} = \mathrm{med}_{F,\mathrm{infeas}} = \mathrm{med}_{\mathrm{obj,infeas}} + r \cdot \mathrm{med}_{\mathrm{con}} \qquad (1.37)$$

where $\mathrm{med}_{F,\mathrm{feas}}$ is the median of the pseudo-objective function values of feasible designs, and $\mathrm{med}_{F,\mathrm{infeas}}$ is the median of the pseudo-objective function values of infeasible designs. The latter consists of $\mathrm{med}_{\mathrm{obj,infeas}}$, the median of the objective function values of all infeasible designs, and $\mathrm{med}_{\mathrm{con}}$, the median of the constraint violations of all infeasible designs. The second generation of SOAPS was tested on two numerical illustrative problems and one engineering problem.

Tessema and Yen (2006) proposed an adaptive penalty function for solving constrained optimization problems using a GA. A new fitness value, called the distance value, in the normalized fitness-constraint violation space, and two penalty values are applied to infeasible individuals so that the algorithm is able to identify the best infeasible individuals in the current population. The performance of the algorithm was tested on the G1 to G13 test problems, and the algorithm was considered able to find competitive results when compared with others from the literature.


In Tessema and Yen (2009), an algorithm that aims to exploit infeasible individuals with low objective value and low constraint violation was proposed. The fraction of feasible individuals in the population is used to guide the search process either toward finding more feasible individuals or toward the optimum solution. The objective function of all individuals in the current population is evaluated first, and the smallest and the largest values are identified as f_min and f_max, respectively. The fitness function of each individual is normalized as

$$\tilde{f}(x) = \frac{f(x) - f_{min}}{f_{max} - f_{min}} \qquad (1.38)$$

The normalized constraint violation of each infeasible individual is evaluated by Eq. (1.25), and the modified fitness function is then written as

$$F(x) = \begin{cases} \tilde{f}(x), & \text{for a feasible solution} \\ u(x), & \text{if there is no feasible individual} \\ \sqrt{\tilde{f}(x)^2 + u(x)^2} + \left[(1 - r_f)\, u(x) + r_f\, \tilde{f}(x)\right], & \text{otherwise} \end{cases}$$

where r_f ∈ [0, 1] is the fraction of feasible individuals in the population, and u(x) is the average of the normalized violations v_j(x).
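
A vectorized sketch of this fitness assignment is given below; the array layout (one row per individual, one column per constraint) and the small constant guarding against division by zero are our assumptions.

    # Sketch of the adaptive penalty fitness of Tessema and Yen (2009).
    import numpy as np

    def ty2009_fitness(f, v):
        """f: objective values, shape (pop,); v: normalized violations, shape (pop, m)."""
        u = v.mean(axis=1)                    # average normalized violation u(x)
        feasible = (u == 0.0)
        rf = feasible.mean()                  # fraction of feasible individuals
        fn = (f - f.min()) / (f.max() - f.min() + 1e-12)   # Eq. (1.38)
        if not feasible.any():                # no feasible individual: rank by violation only
            return u
        dist = np.sqrt(fn**2 + u**2) + (1.0 - rf) * u + rf * fn
        return np.where(feasible, fn, dist)   # smaller is better (minimization)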

A hybrid evolutionary algorithm and an adaptive constraint-handling technique are presented by Wang et al. (2009). The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. The proposed method distinguishes three types of population: (1) a population containing only infeasible solutions (infeasible situation), (2) a population containing both feasible and infeasible solutions (semi-feasible situation), and (3) a population containing only feasible solutions (feasible situation). Denoting by G(x) = Σ_{j=1}^{m} G_j(x) the degree of constraint violation of the individual x, one has the following three cases (a sketch of the semi-feasible case is given after this enumeration):

1. Infeasible situation: the constrained optimization problem is treated as a constraint satisfaction problem, so finding feasible solutions is the most important objective in this situation. To achieve this, only the constraint violations G(x) of the individuals in the population are taken into account, while the objective function f(x) is disregarded completely. First, the individuals in the parent population are ranked by their constraint violations in ascending order, and then the individuals with the least constraint violations are selected to form the offspring population.

2. Semi-feasible situation: the population is divided into the feasible group K1 and the infeasible group K2. After that, the best feasible solution x_best and the worst feasible solution x_worst are identified in the feasible group K1. Then, the converted objective function f'(x) of a candidate solution is written as

$$f'(x_i) = \begin{cases} f(x_i), & \text{if } x_i \in K_1 \\ \max\left\{\varphi f(x_{best}) + (1 - \varphi) f(x_{worst}),\; f(x_i)\right\}, & \text{if } x_i \in K_2 \end{cases} \qquad (1.39)$$


where φ is the proportion of feasible solutions in the last population P(t). The normalized objective function is obtained using Eq. (1.38), and the normalized constraints are written as

$$\tilde{G}(x_i) = \begin{cases} 0, & \text{if } x_i \in K_1 \\[4pt] \dfrac{G(x_i) - \min_{x \in K_2} G(x)}{\max_{x \in K_2} G(x) - \min_{x \in K_2} G(x)}, & \text{if } x_i \in K_2 \end{cases} \qquad (1.40)$$

If only one infeasible solution appears in the population, its normalized constraint violation would always be equal to 0. To avoid this, the normalized constraint violation of such an individual is instead set to a value uniformly chosen between 0 and 1. The fitness function is then defined by adding the normalized objective function values and constraint violations:

$$F(x_i) = \tilde{f}(x_i) + \tilde{G}(x_i) \qquad (1.41)$$

3. Feasible situation: in this case, the comparison of individuals is based only on the objective function f(x).
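
The sketch below illustrates the semi-feasible case (Eqs. (1.39)-(1.41)); the special treatment of a single infeasible individual (the uniform random draw mentioned above) is omitted, and the array names are ours.

    # Sketch of the semi-feasible case of Wang et al. (2009), Eqs. (1.39)-(1.41).
    import numpy as np

    def semi_feasible_fitness(f, G, phi):
        """f: objectives; G: violation degrees (0 means feasible); phi: feasible
        fraction of the last population P(t). Assumes at least one feasible point."""
        feas = (G == 0.0)
        f_best, f_worst = f[feas].min(), f[feas].max()
        # Eq. (1.39): lift infeasible objectives toward the feasible range
        f_adj = np.where(feas, f,
                         np.maximum(phi * f_best + (1.0 - phi) * f_worst, f))
        fn = (f_adj - f_adj.min()) / (f_adj.max() - f_adj.min() + 1e-12)  # Eq. (1.38)
        Gn = np.zeros_like(G, dtype=float)                                # Eq. (1.40)
        if (~feas).any():
            g = G[~feas]
            Gn[~feas] = (g - g.min()) / (g.max() - g.min() + 1e-12)
        return fn + Gn                                                    # Eq. (1.41)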

    Costa et al. (2013) proposed an adaptive constraint handling technique where thefitness function of an infeasible individual is defined as

$$F(x) = f_{max} + \sum_{j=1}^{m} v_j(x) \qquad (1.42)$$

where v_j(x) is defined as in Eq. (1.3). An adaptive tolerance was introduced in order to handle equality constraints: an initial tolerance ε^0 is defined and is adaptively updated along the evolutionary process, with a fixed periodicity (a given number of generations), according to the expression

$$\varepsilon^{k+1} = \alpha\, \varepsilon^{k} + (1 - \alpha)\, \|C_{best}\|_2 \qquad (1.43)$$

where α is a smoothing factor, C_best is the vector of equality constraint values at the best point in the population, and ||·||_2 is the Euclidean norm.
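
A one-line sketch of the tolerance update of Eq. (1.43) follows; the default value of the smoothing factor is our illustrative choice, not a recommendation from Costa et al. (2013).

    # Sketch of the adaptive equality-constraint tolerance, Eq. (1.43).
    import numpy as np

    def update_tolerance(eps_k, c_best, alpha=0.5):
        """eps^{k+1} = alpha * eps^k + (1 - alpha) * ||C_best||_2."""
        return alpha * eps_k + (1.0 - alpha) * np.linalg.norm(c_best)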

A parameterless adaptive penalty technique used within a GA has been proposed in Vincenti et al. (2010) and Montemurro et al. (2013), where the basic idea is that some good infeasible individuals (in the sense of having good objective function values) can be useful to attract the exploration toward the boundary of the feasible domain, since the optimum usually has some active constraints. The penalty coefficients c_i and q_j (for inequality and equality constraints, respectively) are computed at each generation t as

$$c_i(t) = \frac{f^{F}_{best} - f^{NF}_{best}}{(g_i)^{NF}_{best}}, \quad i = 1, \ldots, q \qquad \text{and} \qquad q_j(t) = \frac{f^{F}_{best} - f^{NF}_{best}}{(h_j)^{NF}_{best}}, \quad j = 1, \ldots, p \qquad (1.44)$$


where the superscripts F and NF stand for feasible and non-feasible, respectively. f^F_best and f^NF_best are the objective function values of the best individuals within the feasible and the infeasible sides of the domain, respectively, while (g_i)^NF_best and (h_j)^NF_best represent the violations of the inequality and equality constraints, respectively, for the best infeasible solution. Individuals that are infeasible with respect to the kth constraint are grouped and ranked with respect to their objective function values: the objective function value of the best individual of this group is f^NF_best. Likewise, the individuals that are feasible with respect to the kth constraint are grouped and ranked with respect to their objective function values: the objective function value of the best individual of that group is f^F_best.

When no feasible individuals are available in the population with respect to the kth constraint, the population is instead sorted into two groups: the individuals having the smallest values of the kth constraint violation (10% of the population) are grouped as virtually feasible, while the rest are grouped as infeasible and ranked in terms of their objective function values; the objective function value of the best individual of the latter group is f^NF_best.
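
A sketch of how one coefficient of Eq. (1.44) could be computed is shown below; the function and argument names, and the guard against a zero violation, are ours.

    # Sketch of one penalty coefficient of Eq. (1.44)
    # (Vincenti et al. 2010; Montemurro et al. 2013).
    def adp_coefficient(f_best_feas, f_best_infeas, viol_best_infeas):
        """c_i(t) or q_j(t): (f^F_best - f^NF_best) / violation of the best infeasible."""
        return (f_best_feas - f_best_infeas) / max(viol_best_infeas, 1e-12)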

It is worth noting that the definition in Eq. (1.44) forces the penalized objective function value of the best infeasible individual to be equal to that of the best feasible individual. In the next section, further (perhaps less popular) ways of implementing penalty techniques are briefly described.

    1.5 Related Techniques

    1.5.1 Self-adapting the Parameters

The direct implementation of a standard self-adaptive penalty technique (following Eiben and Smith (2003)) would entail the encoding of one (or more) penalty coefficients in the same chromosome where the candidate solution is encoded. These coefficients are then subject to the evolutionary process, undergoing recombination and mutation just as the problem variables in the chromosome. However, evolution would discover that the best strategy is to drive all penalty coefficients of an individual down to zero, thus eliminating any reduction in the fitness of the corresponding candidate solution, and would actually find the solution of the unconstrained problem (Eiben and Smith 2003).

Eiben et al. (2000) proposed a scheme to prevent EAs from "cheating" when solving constraint satisfaction problems (CSPs). When solving CSPs by means of EAs, weights are associated with each constraint, adding a penalty to an individual if that constraint is not satisfied. Changes in the weights along the run cause the EA to put more pressure on the satisfaction of the corresponding constraints. Eiben et al. introduced a tournament selection that uses the maximum of each of the weights, across all competitors, as a way to eliminate cheating in the CSP case, without resorting to any feedback mechanism from the search process. Unfortunately,


to the best of our knowledge, no strictly self-adaptive technique has been applied so far to constrained optimization problems in R^n.

    1.5.2 Coevolving the Parameters

Coello (2000) introduced a co-evolutionary algorithm to adapt the penalty coefficients of a fitness function in a GA with two populations, P1 (of size M1) and P2 (of size M2). The fitness function is written as

$$F(x) = f(x) - k\left(\text{sum\_viol}(x) \cdot w_1 + \text{num\_viol}(x) \cdot w_2\right) \qquad (1.45)$$

where w1 and w2 are two (integer) penalty coefficients, and sum_viol(x) and num_viol(x) are, respectively, the sum of the violations and the number of constraints violated by the candidate solution x. The population P1 evolves the candidate solutions themselves, whereas the second population, P2, encodes the set of weight combinations (w1, w2) used to compute the fitness values of the candidate solutions in P1. Benchmark problems from the literature, especially from mechanical engineering optimization, were used in the numerical tests, but only inequality constraints were considered in the experiments.
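
A sketch of this fitness assignment, as seen by one individual of P1 under the weights (w1, w2) supplied by P2, is given below; the names and the default value of k are our assumptions.

    # Sketch of the co-evolutionary penalized fitness, Eq. (1.45) (Coello 2000).
    def coello_fitness(f_x, violations, w1, w2, k=1.0):
        """violations: per-constraint violation amounts (0 when satisfied);
        k as it appears in Eq. (1.45) above."""
        sum_viol = sum(v for v in violations if v > 0.0)
        num_viol = sum(1 for v in violations if v > 0.0)
        return f_x - k * (sum_viol * w1 + num_viol * w2)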

The co-evolutionary idea was also analyzed in He and Wang (2007) and He et al. (2008). In these works, the penalty factors are adapted by a co-evolutionary particle swarm optimization (CPSO) approach. Two kinds of swarms are used in He and Wang (2007) and He et al. (2008): one population of multiple swarms is used to solve the search problem, while the other is responsible for adapting the penalty factors. Each particle j in the second population represents the penalty coefficients for a set of particles in the first one. The two populations evolve for given numbers G1 and G2 of generations. The adopted fitness function is the one proposed by Richardson et al. (1989), where not only the amount of violation contributes to the quality of a given candidate solution but also the number of violated constraints. According to He and Wang (2007) and He et al. (2008),

$$F_j(x) = f(x) + \text{sum\_viol}(x) \cdot w_{j,1} + \text{num\_viol}(x) \cdot w_{j,2},$$

where f(x) is the objective function value, and w_{j,1} and w_{j,2} are the penalty coefficients from particle j in the second swarm population. The penalty factors w_{j,1} and w_{j,2} are evolved according to the following fitness:

$$G(j) = \begin{cases} \dfrac{\text{sum\_feas}}{\text{num\_feas}} - \text{num\_feas}, & \text{if there is at least one feasible solution in the subset} \\[6pt] \max(G_{valid}) + \dfrac{\sum_{i=1}^{pop} \text{sum\_viol}(x_i)}{\sum_{i=1}^{pop} \text{num\_viol}(x_i)} - \sum_{i=1}^{pop} \text{num\_viol}(x_i), & \text{otherwise} \end{cases}$$

where sum_feas denotes the sum of the objective function values of the feasible solutions, num_feas is the number of feasible individuals, and max(G_valid) denotes the maximum


G over all valid particles; the valid particles are those which operate over a subset of particles containing at least one feasible solution.
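
The sketch below mirrors the two cases of G(j) as reconstructed above; the list-based data layout is our assumption.

    # Sketch of the penalty-swarm fitness G(j) of He and Wang (2007).
    def penalty_particle_fitness(f, sum_viol, num_viol, g_valid_max):
        """f[i], sum_viol[i], num_viol[i]: data of the subset of particles scored by
        penalty particle j; g_valid_max: maximum G over all valid particles."""
        feas = [fi for fi, nv in zip(f, num_viol) if nv == 0]
        if feas:                                    # at least one feasible solution
            return sum(feas) / len(feas) - len(feas)
        total_sv, total_nv = sum(sum_viol), sum(num_viol)
        return g_valid_max + total_sv / total_nv - total_nv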

    1.5.3 Using Other Tools

It is interesting to note that, despite all the effort that has been devoted to research on penalty techniques in the context of nature-inspired metaheuristics over the last 20 years or so, the subject still draws the attention of researchers, and new tools are constantly being introduced to this arena. Fuzzy logic and rough set theory are just two recent examples, which will be discussed in the following.

Wu et al. (2001) proposed a fuzzy penalty function strategy using information contained in the individuals. The fitness function of an infeasible individual is

$$F(x) = f(x) + r\, G(x) \qquad (1.46)$$

    where G(x) is the amount of constraint violation from inequality and equality con-straints, and r is the penalty coefficient.

f and G are taken as fuzzy variables with corresponding linguistic values such as very large, large, small, very small, etc. The ranges for f and G are defined by D_f = [f_min, f_max] and D_G = [G_min, G_max]. Those ranges must then be partitioned (a problem-dependent, non-trivial task) and linguistic values are associated with each part. The sets A and B are introduced as fuzzy sets for f and G, respectively, and r_k, k = 1, ..., l, is defined as a fuzzy singleton for r, which is inferred from appropriate membership functions and finally used in Eq. (1.46).

In their numerical experiments, three partitions were used for both f and G, with triangular membership functions, and five points were used for the output. The rule base contained 9 rules of the form

If f is A_i and G is B_j, then r = r_k.
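
A toy sketch of such a rule base is shown below; the normalized domain, the triangular partitions, and the defuzzification by a weighted average of singletons are illustrative choices of ours, not the exact design of Wu et al. (2001).

    # Toy sketch of a 3x3 fuzzy rule base inferring the penalty coefficient r.
    def tri(x, a, b, c):
        """Triangular membership with peak at b; degenerate sides act as shoulders."""
        if x == b:
            return 1.0
        if x < b:
            return 0.0 if x <= a else (x - a) / (b - a)
        return 0.0 if x >= c else (c - x) / (c - b)

    # Three linguistic values ("small", "medium", "large") on the normalized domain [0, 1].
    PARTS = [(0.0, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.0)]

    def infer_r(f_norm, g_norm, r_table):
        """Weighted average of the singleton outputs r_k fired by the 9 rules."""
        num = den = 0.0
        for i, fp in enumerate(PARTS):
            for j, gp in enumerate(PARTS):
                w = min(tri(f_norm, *fp), tri(g_norm, *gp))  # firing strength of rule (i, j)
                num += w * r_table[i][j]
                den += w
        return num / den if den > 0.0 else 0.0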

Lin (2013) proposed perhaps the first constraint-handling approach to apply the information granulation of rough set theory to address the indiscernibility relation among penalty coefficients in constrained optimization. Adaptive penalty coefficients w_k^t, k = 1, ..., m, one for each constraint, were defined in such a way that a high penalty is assigned to the coefficient of the most difficult constraint. In addition, the coefficients also depend on the current generation number t. Using the standard definition for the violation of the kth constraint, v_k(x), the fitness function reads

$$F(x) = f(x) + \sum_{k=1}^{m} w_k^t\, v_k^2(x)$$


where w_k^t = (C t)^{α(k,t)} and C is a severity factor. The exponent α(k, t), initialized as α(k, 0) = 2 for all k, is defined as

$$\alpha(k, t) = \begin{cases} \alpha(k, t-1) - \nu_k, & \text{if } \delta_k = 1 \\ \alpha(k, t-1), & \text{if } \delta_k = 0 \end{cases}$$

according to the discernibility mask and the representative attribute value ν_k of the superior class X_good (see the paper for details). If the kth constraint is discernible (i.e., δ_k = 1), the exponent α(k, t) is adjusted by the representative attribute value ν_k; otherwise, the exponent retains the same value as in the previous generation.
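
A schematic reading of this update is sketched below; given the condensed description above, the array names and the exact form of the adjustment are our interpretation rather than a faithful transcription of Lin (2013).

    # Schematic sketch of the rough-penalty exponent update (after Lin 2013).
    def update_exponents(alpha, mask, nu):
        """alpha[k]: current exponent; mask[k] in {0, 1}: discernibility of constraint k;
        nu[k]: representative attribute value of the superior class."""
        return [a - n if m == 1 else a for a, m, n in zip(alpha, mask, nu)]

    def penalty_coefficient(alpha_k, t, C=2.0):
        """w_k^t = (C * t) ** alpha(k, t); the value of the severity factor C is ours."""
        return (C * t) ** alpha_k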

    1.6 Discussion

    1.6.1 User-Defined Parameters

Some of the proposals considered do not require from the user the definition of penalty parameters and can, as such, be considered "parameterless". This is very useful for the practitioner. However, it should be noted that essentially all proposals do embody some fixed values that are hidden from the user and, as a result, cannot be changed. Furthermore, all proposals involve design decisions, made with variable levels of justification, which are incorporated into the definition of the technique. It seems natural to assume that some of those could possibly be changed (a research opportunity), leading to improved results.

    1.6.2 Comparative Performance

In order to test the performance of a constraint handling technique, several test-problems have been used over the years. The most popular suite of continuous constrained optimization problems is the one containing the 24 problems used for the competition held during the 2006 IEEE Congress on Evolutionary Computation, which are described in Liang et al. (2006). Later, larger problems were considered in another competition, held during the 2010 edition of the same conference; the details can be found in Mallipeddi and Suganthan (2010).

It can be noticed that the claims concerning the performance of each proposal in the papers reviewed have been deliberately omitted here. This is due to several factors. One of them is that a statistical study to assure a statistically significant superiority of the proposed technique over others from the literature is often missing. Another criticism is that the claimed superiority of a proposed technique can often only be observed after the fourth or fifth significant digit of the final results, with no consideration for the facts (i) that the original model itself may not have such accuracy, and (ii) that the compared solutions may be indistinguishable from the practical point of view.


Another major issue that makes it impossible to rigorously assess the relative performance of the adaptive penalty techniques (APTs) reviewed is that the final results depend not only on the penalty technique considered but also on the search engine (SE) adopted. The competing results often derive from incomparable arrangements, such as APT-1 embedded in SE-1 (a genetic algorithm, for instance) versus APT-2 applied to SE-2 (an evolution strategy, for instance). The results using stochastic ranking (SR) within an evolution strategy (ES) (Runarsson and Yao 2000) were shown to outperform APM embedded in a binary-coded genetic algorithm (GA) (Lemonge and Barbosa 2004) when applied to a standard set of benchmark constrained optimization problems in R^n. This seems to be due, at least in part, to the fact that the ES adopted performs better in this continuous domain than a standard GA. A proper empirical assessment of the constraint handling techniques considered (SR versus APM) should be performed by considering settings such as SR+GA versus APM+GA and SR+ES versus APM+ES. An attempt to clarify this particular question is presented by Barbosa et al. (2010b). It is clear that more studies of this type are needed in order to better assess the relative merits of the proposals reviewed here.

The standard way of assessing the relative performance of a set A of n_a algorithms a_i, i ∈ {1, ..., n_a}, is to define a set P of n_p representative problems p_j, j ∈ {1, ..., n_p}, and then test all algorithms against all problems, measuring the performance t_{p,a} of algorithm a ∈ A when applied to problem p ∈ P.

In order to evaluate t_{p,a}, one can alternatively (i) define a meaningful goal (say, a level of objective function value) and then measure the amount of resources (say, the number of function evaluations) required by the algorithm to achieve that goal, or (ii) fix a given amount of resources to be allocated to each algorithm and then measure the goal attainment.

Considering that t_{p,a} is the CPU time spent by algorithm a to reach the stated goal in problem p, a performance ratio can be defined as

$$r_{p,a} = \frac{t_{p,a}}{\min\{t_{p,a} : a \in A\}} \qquad (1.47)$$

Although each t_{p,a} or r_{p,a} is worth considering by itself, one would like to be able to assess the performance of the algorithms in A on a large set of problems P in a user-friendly graphical form. This has been achieved by Dolan and Moré (2002), who introduced the so-called performance profiles, an analytical tool for the visualization and interpretation of the results of benchmark experiments. For more details, and an application to the constrained optimization case, see Barbosa et al. (2010a).
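
A small sketch of Eq. (1.47) and the resulting profile curves is given below; the matrix layout and the handling of failures via infinity are our conventions.

    # Sketch of performance ratios, Eq. (1.47), and performance profile curves
    # in the sense of Dolan and Moré (2002).
    import numpy as np

    def performance_profiles(T, taus):
        """T[p, a]: cost of algorithm a on problem p (np.inf if the goal was not reached).
        Returns rho[a, k]: fraction of problems for which r_{p,a} <= taus[k]."""
        R = T / T.min(axis=1, keepdims=True)          # Eq. (1.47), best cost per problem
        return np.array([[np.mean(R[:, a] <= tau) for tau in taus]
                         for a in range(T.shape[1])])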

One also has to consider that it is not an easy task to define a set P which is representative of the domain of interest, as one would like P (i) to span the target problem-space and, at the same time, (ii) to be as small as possible, in order to alleviate the computational burden of the experiments. Furthermore, it would also be interesting to assess the relative performance of the test-problems themselves with respect to the solvers. Are all test-problems relevant to the final result? Are some test-problems too easy (or too difficult), so that they do not have the ability to


    discriminate the solvers? Efforts in this direction, exploring the performance profileconcept, were attempted in Barbosa et al. (2013).

    1.6.3 Implementation Issues

Although not always considered in the papers reviewed, the simplicity of a technique (both conceptually and from the implementation point of view) is relevant. It seems quite desirable that the proposed technique can be easily implemented as an additional module to any existing metaheuristic for unconstrained optimization, with minimum interference with the current code. In this respect, techniques resorting to coevolution would typically require another population and an additional set of parameters, and would lead to more interference with, and modifications to, the original code.

    1.6.4 Extensions

It seems natural to expect that most, if not all, of the proposals reviewed here can be easily extended to the practically important case of constrained multi-objective optimization. Although papers presenting such extensions have not been reviewed here, it seems that there is room, and indeed a need, to explore this case.

    The same can perhaps be said of the relevant case of mixed (discrete andcontinuous) decision variables, as well as the more complex problem of constrainedmulti-level optimization.

    1.7 Conclusion

This chapter presented a review of the main adaptive penalty techniques available for handling constraints within nature-inspired metaheuristics in general and evolutionary techniques in particular. The main types of evidence taken from the search process in order to inform the decision-making process of continuously adapting the relevant parameters of the penalty technique have been identified.

As the different adaptive techniques have not been implemented within a single given search engine, the existing comparative studies, which are usually based on the final performance on a set of benchmark problems, are not very informative of the relative performance of each penalty technique, as the results are also affected by the different search engines adopted in each proposal. The need for better comparative studies, investigating the relative performance of the different adaptive techniques when applied within a single search engine to larger and more representative sets of benchmark problems, is also identified.


Acknowledgments The authors thank the reviewers for their comments, which helped improve the quality of the final version, and acknowledge the support from CNPq (grants 308317/2009-2, 310778/2013-1, 300192/2012-6 and 306815/2011-7) and FAPEMIG (grant TEC 528/11).

    References

Barbosa HJC, Lemonge ACC (2002) An adaptive penalty scheme in genetic algorithms for constrained optimization problems. In: Langdon WB, Cantú-Paz E, Mathias KE, Roy R, Davis D, Poli R, Balakrishnan K, Honavar V, Rudolph G, Wegener J, Bull L, Potter MA, Schultz AC, Miller JF, Burke EK (eds) Proceedings of the genetic and evolutionary computation conference (GECCO). Morgan Kaufmann, San Francisco

Barbosa HJC, Lemonge ACC (2003a) An adaptive penalty scheme for steady-state genetic algorithms. In: Cantú-Paz E, Foster JA, Deb K, Davis LD, Roy R, O'Reilly U-M, Beyer H-G, Standish R, Kendall G, Wilson S, Harman M, Wegener J, Dasgupta D, Potter MA, Schultz AC, Dowsland KA, Jonoska N, Miller J (eds) Genetic and evolutionary computation (GECCO). Lecture Notes in Computer Science. Springer, Berlin, pp 718–729

Barbosa HJC, Lemonge ACC (2003b) A new adaptive penalty scheme for genetic algorithms. Inf Sci 156:215–251

Barbosa HJC, Lemonge ACC (2008) An adaptive penalty method for genetic algorithms in constrained optimization problems. Front Evol Robot 34:9–34

Barbosa HJC, Bernardino HS, Barreto AMS (2010a) Using performance profiles to analyze the results of the 2006 CEC constrained optimization competition. In: 2010 IEEE congress on evolutionary computation (CEC), pp 1–8

Barbosa HJC, Lemonge ACC, Fonseca LG, Bernardino HS (2010b) Comparing two constraint handling techniques in a binary-coded genetic algorithm for optimization problems. In: Deb K, Bhattacharya A, Chakraborti N, Chakroborty P, Das S, Dutta J, Gupta SK, Jain A, Aggarwal V, Branke J, Louis SJ, Tan KC (eds) Simulated evolution and learning. Lecture Notes in Computer Science. Springer, Berlin, pp 125–134

Barbosa HJC, Bernardino HS, Barreto AMS (2013) Using performance profiles for the analysis and design of benchmark experiments. In: Di Gaspero L, Schaerf A, Stützle T (eds) Advances in metaheuristics. Operations Research/Computer Science Interfaces Series, vol 53. Springer, New York, pp 21–36

Bean J, Alouane A (1992) A dual genetic algorithm for bounded integer programs. Technical Report TR 92-53, Department of Industrial and Operations Engineering, The University of Michigan

Beaser E, Schwartz JK, Bell CB, Solomon EI (2011) Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites. J Chem Inf Model 51(9):2164–2173

Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127

Coello CAC (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11–12):1245–1287

Coit DW, Smith AE, Tate DM (1996) Adaptive penalty methods for genetic optimization of constrained combinatorial problems. INFORMS J Comput 8(2):173–182

Costa L, Santo IE, Oliveira P (2013) An adaptive constraint handling technique for evolutionary algorithms. Optimization 62(2):241–253

Courant R (1943) Variational methods for the solution of problems of equilibrium and vibrations. Bull Am Math Soc 49:1–23


Dolan E, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91(2):201–213

Eiben AE, Smith JE (2003) Introduction to evolutionary computing. Springer, New York

Eiben AE, Jansen B, Michalewicz Z, Paechter B (2000) Solving CSPs using self-adaptive constraint weights: how to prevent EAs from cheating. In: Whitley LD (ed) Proceedings of the genetic and evolutionary computation conference (GECCO). Morgan Kaufmann, San Francisco, pp 128–134

Farmani R, Wright J (2003) Self-adaptive fitness formulation for constrained optimization. IEEE Trans Evol Comput 7(5):445–455

Gan M, Peng H, Peng X, Chen X, Inoussa G (2010) An adaptive decision maker for constrained evolutionary optimization. Appl Math Comput 215(12):4172–4184

Gen M, Cheng R (1996) Optimal design of system reliability using interval programming and genetic algorithms. Comput Ind Eng (In: Proceedings of the 19th international conference on computers and industrial engineering), vol 31(1–2), pp 237–240

Hamida H, Schoenauer M (2000) Adaptive techniques for evolutionary topological optimum design. In: Parmee I (ed) Proceedings of the international conference on adaptive computing in design and manufacture (ACDM). Springer, Devon, pp 123–136

Hamida S, Schoenauer M (2002) ASCHEA: new results using adaptive segregational constraint handling. In: Proceedings of the IEEE service center congress on evolutionary computation (CEC), vol 1. Piscataway, New Jersey, pp 884–889

Harrell LJ, Ranjithan SR (1999) Evaluation of alternative penalty function implementations in a watershed management design problem. In: Proceedings of the genetic and evolutionary computation conference (GECCO), vol 2. Morgan Kaufmann, pp 1551–1558

He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99

He Q, Wang L, Huang F-Z (2008) Nonlinear constrained optimization by enhanced co-evolutionary PSO. In: IEEE congress on evolutionary computation, CEC 2008 (IEEE World Congress on Computational Intelligence), pp 83–89

Hughes T (1987) The finite element method: linear static and dynamic finite element analysis. Prentice Hall Inc, New Jersey

Koziel S, Michalewicz Z (1998) A decoder-based evolutionary algorithm for constrained parameter optimization problems. In: Eiben A, Bäck T, Schoenauer M, Schwefel H-P (eds) Parallel problem solving from nature (PPSN). LNCS, vol 1498. Springer, Berlin, pp 231–240

Krempser E, Bernardino H, Barbosa H, Lemonge A (2012) Differential evolution assisted by surrogate models for structural optimization problems. In: Proceedings of the international conference on computational structures technology (CST). Civil-Comp Press, p 49

Lemonge ACC, Barbosa HJC (2004) An adaptive penalty scheme for genetic algorithms in structural optimization. Int J Numer Methods Eng 59(5):703–736

Lemonge ACC, Barbosa HJC, Bernardino HS (2012) A family of adaptive penalty schemes for steady-state genetic algorithms. In: 2012 IEEE congress on evolutionary computation (CEC). IEEE, pp 1–8

Liang J, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan P, Coello CC, Deb K (2006) Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. Technical report, Nanyang Technological University, Singapore

Lin C-H (2013) A rough penalty genetic algorithm for constrained optimization. Inf Sci 241:119–137

Lin C-Y, Wu W-H (2004) Self-organizing adaptive penalty strategy in constrained genetic search. Struct Multidiscip Optim 26(6):417–428

Luenberger DG, Ye Y (2008) Linear and nonlinear programming. Springer, New York

Mallipeddi R, Suganthan PN (2010) Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization. Technical report, Nanyang Technological University, Singapore

Mezura-Montes E, Coello CAC (2011) Constraint-handling in nature-inspired numerical optimization: past, present and future. Swarm Evol Comput 1(4):173–194


Michalewicz Z (1995) A survey of constraint handling techniques in evolutionary computation methods. In: Proceedings of the 4th annual conference on evolutionary programming. MIT Press, pp 135–155

Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms for constrained parameter optimization problems. Evol Comput 4(1):1–32

Montemurro M, Vincenti A, Vannucci P (2013) The automatic dynamic penalisation method (ADP) for handling constraints with genetic algorithms. Comput Methods Appl Mech Eng 256:70–87

Nanakorn P, Meesomklin K (2001) An adaptive penalty function in genetic algorithms for structural design optimization. Comput Struct 79(29–30):2527–2539

Puzzi S, Carpinteri A (2008) A double-multiplicative dynamic penalty approach for constrained evolutionary optimization. Struct Multidiscip Optim 35(5):431–445

Rasheed K (1998) An adaptive penalty approach for constrained genetic-algorithm optimization. In: Koza J, Banzhaf W, Chellapilla K, Deb K, Dorigo M, Fogel D, Garzon M, Goldberg D, Iba H, Riolo R (eds) Proceedings of the third annual genetic programming conference. Morgan Kaufmann, San Francisco, pp 584–590

Richardson JT, Palmer MR, Liepins GE, Hilliard M (1989) Some guidelines for genetic algorithms with penalty functions. In: Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, San Francisco, pp 191–197

Rocha AMAC, Fernandes EMDGP (2009) Self-adaptive penalties in the electromagnetism-like algorithm for constrained global optimization problems. In: Proceedings of the 8th world congress on structural and multidisciplinary optimization, Lisbon, Portugal

Runarsson T, Yao X (2000) Stochastic ranking for constrained evolutionary optimization. IEEE Trans Evol Comput 4(3):284–294

Salcedo-Sanz S (2009) A survey of repair methods used as constraint handling techniques in evolutionary algorithms. Comput Sci Rev 3(3):175–192

Schoenauer M, Michalewicz Z (1996) Evolutionary computation at the edge of feasibility. In: Proceedings of parallel problem solving from nature (PPSN). LNCS, Springer, pp 245–254

Tessema B, Yen GG (2006) A self adaptive penalty function based algorithm for constrained optimization. In: IEEE congress on evolutionary computation, CEC 2006. IEEE, pp 246–253

Tessema B, Yen G (2009) An adaptive penalty formulation for constrained evolutionary optimization. IEEE Trans Syst, Man Cybern, Part A: Syst Hum 39(3):565–578

Vincenti A, Ahmadian MR, Vannucci P (2010) BIANCA: a genetic algorithm to solve hard combinatorial optimisation problems in engineering. J Glob Optim 48(3):399–421

Wang Y, Cai Z, Zhou Y, Fan Z (2009) Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct Multidiscip Optim 37(4):395–413

Wu B, Yu X, Liu L (2001) Fuzzy penalty function approach for constrained function optimization with evolutionary algorithms. In: Proceedings of the 8th international conference on neural information processing. Citeseer, pp 299–304

Wu W-H, Lin C-Y (2004) The second generation of self-organizing adaptive penalty strategy for constrained genetic search