Performance Test Functions of Genetic Algorithm
Wenxia Yun, Yongjie Ma and Ying Chen
College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, China
Keywords: genetic algorithm; performance metrics; test functions; constrained optimization; multimodal optimization
Abstract. Because of the disadvantages of the genetic algorithm, such as weak local search ability, premature convergence and random walk, the design and improvement of the algorithm is an important research direction in genetic algorithms. Evaluating the performance of an algorithm systematically and scientifically is the key to judging whether the algorithm is good or bad. The common means of evaluating an algorithm is the test function; however, the existing literature on optimization algorithms uses different methods to evaluate algorithm performance, and there are no uniform test criteria. To address these questions, this paper studies the test functions of genetic algorithms and analyses the characteristics of the main test functions, which can serve as a basis for selecting test functions for an algorithm.
Introduction
The Genetic Algorithm is a random search and optimization method based on biological natural selection and genetic mechanisms, created in 1975 by Professor Holland and his students [1]. Since the mid-1980s it has attracted widespread attention in the field of artificial intelligence, and since the 1990s it has become a hot research topic in computer science, information science and optimization. However, the genetic algorithm has many defects, such as weak local search ability, premature convergence and random walk, which hinder its promotion and application. How to improve the search ability and convergence speed of the genetic algorithm, so that it can be better applied to practical problems, are the main issues that researchers have been exploring.
An improved algorithm can overcome the defects of the standard genetic algorithm and raise its performance in solving a variety of optimization and optimal design problems. The most direct way to evaluate or test its performance is to solve specific optimization problems. But actual optimization problems have a wide range of forms, contents and complexities. If they are abstracted as numerical function optimization problems, the differences among forms, contents and complexities correspond to differences in the function arguments, the continuity, the concavity and convexity of the functions, and the number of peaks. Test functions have therefore become one of the simple and easy evaluation methods, and researchers have designed a number of test functions to probe algorithm performance.
Performance metrics of genetic algorithm
In practice, genetic algorithms include multiple operators and control parameters, and the improvement of the operators and the selection of the control parameters have a great influence on search efficiency and the accuracy of the final solution. To evaluate the effect of operator improvement and parameter selection on algorithm performance, researchers have proposed many performance evaluation criteria.
In the 1970s, DeJong[2] proposed online and offline performance as criteria for comparing different genetic algorithms, criteria also used by [3]. The evaluation criteria DeJong proposed consider only the convergence-rate characteristics of genetic algorithms and do not consider the
Applied Mechanics and Materials Vols. 278-280 (2013), pp. 1334-1337. Online available since 2013/Jan/11 at www.scientific.net. © (2013) Trans Tech Publications, Switzerland. doi:10.4028/www.scientific.net/AMM.278-280.1334
instability of convergence across multiple runs. Thus, [3] proposed "the average truncated generation" and "the distribution entropy of the truncated generation". A performance analysis matrix was built and used to evaluate algorithm performance by calculating the changes in the individual probability density function[4]. A new evaluation criterion based on the mean error (ME) and the standard deviation of error (SDE) was proposed in [5]. "The average of the best" and "the standard deviation of the best" were proposed to evaluate the performance of genetic algorithms in noisy environments[6]. To evaluate the convergence of the obtained Pareto front using an objective subset, the inverted generational distance (IGD) was used[7]. Because of the drawbacks of the IGD metric, a performance evaluation metric based on the change of the Pareto domination ratio (CDR) was proposed[8]. In addition, the error ratio (ER) was proposed[9], which simply counts the number of solutions in PFknown that are not members of PFtrue.
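As an illustration, several of the criteria above can be sketched as follows. This is a minimal reading of the standard definitions, not the exact formulations of the cited papers, and all function names are ours.

```python
import math

def online_performance(history):
    """DeJong's online performance: the mean of all objective values
    evaluated so far (lower is better for minimization)."""
    flat = [v for generation in history for v in generation]
    return sum(flat) / len(flat)

def offline_performance(history):
    """DeJong's offline performance: the mean of the best-so-far
    value recorded after each generation."""
    best_so_far, bests = float("inf"), []
    for generation in history:
        best_so_far = min(best_so_far, min(generation))
        bests.append(best_so_far)
    return sum(bests) / len(bests)

def igd(reference_front, obtained_front):
    """Inverted generational distance: average Euclidean distance from
    each reference (true Pareto) point to its nearest obtained point."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(r, p) for p in obtained_front)
               for r in reference_front) / len(reference_front)

def error_ratio(pf_known, pf_true):
    """Error ratio: fraction of solutions in PFknown that are not
    members of PFtrue."""
    true_set = {tuple(p) for p in pf_true}
    missing = sum(1 for p in pf_known if tuple(p) not in true_set)
    return missing / len(pf_known)
```

Note that IGD rewards both convergence and spread: an obtained front that misses a region of the reference front leaves those reference points far from any obtained point.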
Performance evaluation function of genetic algorithm
One of the fundamental issues when designing an algorithm is to have a standard methodology to
validate it. As part of this methodology, certain benchmark test functions are required. These test
problems should be simple, easy to describe, easy to analyze, scalable to any number of decision
variables, tunable to their parameters and also able to introduce controlled difficulties in both
converging to the Pareto-optimal front and maintaining a widely distributed set of solutions.
Performance test functions for genetic algorithms are generally complex unimodal and multimodal functions. Unimodal functions test the algorithm's local search ability and are mainly used to measure the convergence speed of an optimization algorithm. Multimodal functions test the algorithm's comprehensive ability and are mainly used to assess its capacity to find the global optimum and escape local optima[10]. The many functions used in the existing literature to test algorithm performance fall mainly into three kinds: single-objective optimization test functions, multi-objective optimization test functions, and multimodal test functions.
Single-objective optimization problems test set. Single-objective constrained optimization problems have seen growing interest in the field of evolutionary computation, and they are widespread in the real world, so testing on single-objective optimization problems deserves considerable attention. To evaluate algorithms on single-objective constrained problems, thirteen standard test functions (g01~g13) were introduced in [11], which can assess an algorithm's ability to handle constrained optimization.
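The g01~g13 suite is usually paired with some constraint-handling scheme; [11] uses stochastic ranking. A much simpler static-penalty sketch, on a hypothetical toy problem of our own rather than one of g01~g13, illustrates the basic idea of charging infeasible individuals for their constraint violation:

```python
def penalized_fitness(objective, constraints, x, penalty=1e6):
    """Static-penalty evaluation: each constraint is written as
    g_i(x) <= 0, so max(0, g_i(x)) measures its violation."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation

# Hypothetical toy problem (not one of g01~g13): minimize x0^2 + x1^2
# subject to x0 + x1 >= 1, rewritten as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]

feasible = penalized_fitness(f, [g], [0.5, 0.5])    # no violation: plain f(x)
infeasible = penalized_fitness(f, [g], [0.0, 0.0])  # violation 1: f(x) + 1e6
```

The static penalty makes the penalty coefficient itself a tuning burden, which is precisely the difficulty that schemes such as stochastic ranking are designed to avoid.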
Apart from bounds on the variables, unconstrained optimization problems place no restrictions on the problem variables. Although the vast majority of practical optimization problems have constraint conditions that must be met, unconstrained optimization problems are the basis for further study of optimization. Some classic test functions used for evaluating unconstrained optimization are listed in this section.
A generic evolutionary algorithm for single- and multi-objective optimization was proposed in [12]; its efficiency in solving various problems was demonstrated on a number of test problems. The Sphere and Quadric functions, which have a single extremum, were mentioned in [13]. In addition, there are some widely used functions, such as the Rosenbrock and Griewank functions[14].
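Three of these classic unconstrained benchmarks can be sketched as follows (the standard textbook forms, with the conventional parameter choices such as the Griewank divisor of 4000):

```python
import math

def sphere(x):
    """Sphere (DeJong F1): unimodal, global minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    """Rosenbrock: a narrow curved valley that is easy to enter but hard
    to follow; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    """Griewank: many regularly spaced local optima superimposed on a
    sphere; global minimum 0 at the origin."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0
```

Sphere mainly probes convergence speed, while Rosenbrock stresses the ability to make progress along an ill-conditioned valley even though it is unimodal for low dimensions.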
Multi-objective optimization problems test set. In principle, multi-objective optimization is very different from single-objective optimization. In single-objective optimization, one attempts to obtain the best design or decision, which is usually the global minimum or the global maximum depending on whether the problem is one of minimization or maximization. In the case of multiple objectives, there may not exist one solution which is best (global minimum or maximum) with respect to all objectives[15].
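The notion of "best with respect to all objectives" is formalized by Pareto dominance; a minimal sketch for minimization problems is:

```python
def dominates(a, b):
    """For minimization: a dominates b iff a is no worse in every
    objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """The non-dominated subset of a finite set of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A multi-objective algorithm is thus judged on how well its non-dominated set approximates the true Pareto front, rather than on a single best value.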
A comprehensive multi-objective test set should cover a range of multi-objective optimization problems. A number of test functions from the existing literature that are widely used by researchers are presented in this section.
The performance measurements of well-known multi-objective evolutionary algorithms in MOEAT are carried out on benchmark problems[16]. To evaluate the performance of the proposed algorithm, four test functions were adopted in [17]. Zitzler et al. followed these guidelines and suggested six test problems[18], which many researchers have used to evaluate the performance of their newly proposed approaches. To demonstrate the efficiency of the algorithm on multi-objective test problems, [12] chose a number of two-objective and three-objective test problems.
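As one concrete instance, ZDT1, the first of the six Zitzler et al. problems[18], can be sketched in its standard published form (typically with n = 30 decision variables in [0, 1]):

```python
import math

def zdt1(x):
    """ZDT1: two objectives, x_i in [0, 1]. On the Pareto-optimal front
    g = 1, so the front is the convex curve f2 = 1 - sqrt(f1)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

The g term measures distance to the front, so ZDT1 separately tests convergence (driving g toward 1) and diversity (spreading solutions over f1).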
Multimodal function optimization. A multimodal problem has multiple solutions and contains many extrema: it may include more than one global extremum together with some local extrema, or a single global extremum and more than one local extremum. The purpose of multimodal optimization methods is to find as many of these extrema as possible.
To evaluate the performance of the proposed NABC algorithm, five standard test functions widely adopted in the field of multimodal function optimization were chosen in [19]. To demonstrate the efficiency of the coordinated multi-population GA, [20] chose two classic multimodal functions, and [21] chose three classic multimodal functions (maximization problems) for simulation tests of algorithm performance.
Besides the functions listed above, other multimodal functions can be used for testing multimodal
function optimization problems.
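Two widely used multimodal benchmarks, Rastrigin and Ackley, can be sketched in their standard textbook forms with the usual parameter settings:

```python
import math

def rastrigin(x):
    """Rastrigin: a regular grid of local optima superimposed on a
    sphere; global minimum 0 at the origin, usually searched on
    [-5.12, 5.12]^n."""
    return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def ackley(x):
    """Ackley: a nearly flat outer region with a deep hole at the
    origin; global minimum 0 at the origin, usually searched on
    [-32.768, 32.768]^n."""
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
            + 20.0 + math.e)
```

An algorithm that converges quickly on Sphere but stalls in Rastrigin's local optima illustrates exactly the premature-convergence defect discussed in the introduction.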
Conclusions
The purpose of selecting a problem test set is to evaluate the performance and efficiency of an algorithm. Which test problems are appropriate depends primarily on the design purpose of the algorithm and the practical problems it targets. To evaluate an algorithm conveniently and systematically, several methods have been proposed: Deb[22] presented a test problem generator, which can automatically generate a set of test functions, and [23] analyses MOPs in terms of the constraint conditions, the uniform representation of the Pareto-optimal front and the ability to reach the true Pareto-optimal front, developing test problems for each aspect.
As a simple and effective method for testing algorithms, test functions are used by many researchers. However, in the existing research literature there is no uniform standard for how to properly use test functions when testing the performance of an algorithm. Some test functions in the literature cannot really reflect the performance of algorithms, and may even mislead the evaluation of algorithm performance. This paper presents some classic test functions used in the existing optimization literature to test algorithm performance, analyses their respective properties and features, and makes clear which aspects of algorithm performance each kind of test function is suitable for testing. It should therefore help researchers select appropriate test functions and accurately evaluate the performance of algorithms.
Acknowledgements
This work is financially supported by the 2011 Basic Scientific Research Operating Expenses of
Provincial Colleges of Gansu Province Special Funds Projects.
References
[1] J. Holland: Adaptation in natural and artificial systems. Ann Arbor: University of Michigan
press (1975)
[2] K. A. DeJong: Analysis of the behavior of a class of genetic adaptive systems [Ph.D. thesis], Ann Arbor: Univ. of Michigan (1975)
[3] R. X. Sun and L. S. Qu: Acta Automatica Sinica. Vol.26 (2006), pp.552-556 (In Chinese)
[4] X. H. Dai, M. Q. Li and J. S. Kou: Journal of Software. Vol.12 (2001), pp.742-750 (In Chinese)
[5] Q. Gao, W. Z. Lu and X. S. Du: Journal of Xi'an Jiaotong University. Vol.40 (2006), pp.803-806 (In Chinese)
[6] M. Li and J. H. Li: Acta Electronica Sinica. Vol.38 (2010), pp.2090-2094 (In Chinese)
[7] A. L. Jaimes, C. A. C. Coello and D. Chakraborty: Objective reduction using a feature selection technique: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 08), Atlanta: ACM Press, pp.673-680 (2008)
[8] J. H. Zheng, C. Zhou and K. Li: Control Theory & Applications. Vol.28 (2011), pp.947-955 (In
Chinese)
[9] J. Q. Gao and J. Wang: Applied Mathematics and Computation. Vol.217 (2011), pp.4754-4770
[10] C. S. Wang and X. Zhao: Journal of Computer Applications. Vol.30 (2010), pp.76-79 (In
Chinese)
[11] T. P. Runarsson and X. Yao: Stochastic ranking for constrained evolutionary optimization. IEEE
Transactions on Evolutionary Computation. Vol.4(2000), pp.284-294
[12] K. Deb and S. Tiwari: Omni-optimizer: European Journal of Operational Research. Vol.185 (2008), pp.1062-1087
[13] H. Chen, J. S. Zhang and C. Zhang: Control and Decision. Vol.20 (2005), pp.1300-1303 (In
Chinese)
[14] Y. J. Huang, W. G. Zhang and X. X. Liu: Journal of Northwestern Polytechnical University.
Vol.24 (2006), pp.571-575 (In Chinese)
[15] N. Srinivas and K. Deb: Evolutionary computation. MIT Press, Vol.2 (1994), pp.221-248.
[16] T. Sag and M. Cunkas: Advances in Engineering Software. Vol.40(2009), pp.902–912.
[17] A. Herreros, E. Baeyens and J. R. Peran: Engineering Applications of Artificial Intelligence.
Vol.15 (2002), pp.285-301
[18] E. Zitzler, K. Deb and L. Thiele: Evolutionary Computation. Vol.8 (2000), pp.173–195
[19] X. J. Bi and Y. J. Wang: Systems Engineering and Electronics. Vol.33 (2011), pp.2564-2568 (In Chinese)
[20] M. Q. Li and J. S. Kou: Acta Automatica Sinica. Vol.28 (2002), pp.497-504 (In Chinese)
[21] M. F. Zhang and C. Shao: Control Theory & Applications. Vol.25 (2008), pp.773-776 (In Chinese)
[22] K. Deb, L. Thiele, M. Laumanns and E. Zitzler: Scalable test problems for evolutionary multi-
objective optimization. TIK-Technical Report No. 112, Computer Engineering and Networks
Laboratory, Swiss Federal Institute of Technology, Zurich, Switzerland (2001)
[23] P. Cheng and Z. L. Zhang: J Tsinghua Univ (Sci & Tech). Vol.48 (2008), pp.1756-1761 (In
Chinese)