

Behaviour of UMDAC with Truncation Selection on Monotonous Functions

Jörn Grahl, Stefan Minner
Mannheim Business School, Dept. of Logistics
joern.grahl@bwl.uni-mannheim.de, minner@bwl.uni-mannheim.de

Franz Rothlauf
Mannheim Business School, Dept. of Information Systems 1
rothlauf@uni-mannheim.de

Abstract- Of late, much progress has been made in developing Estimation of Distribution Algorithms (EDA), algorithms that use probabilistic modelling of high quality solutions to guide their search. While experimental results on EDA behaviour are widely available, theoretical results are still rare. This is especially the case for continuous EDA. In this article, we develop theory that predicts the behaviour of the Univariate Marginal Distribution Algorithm in the continuous domain (UMDAC) with truncation selection on monotonous fitness functions. Monotonous functions are commonly used to model the algorithm behaviour far from the optimum. Our result includes formulae to predict population statistics in a specific generation as well as population statistics after convergence. We find that population statistics develop identically for all monotonous functions. We show that, assuming monotonous fitness functions, the distance that UMDAC travels across the search space is bounded and depends solely on the percentage of selected individuals, not on the structure of the fitness landscape. This can be problematic if this distance is too small for the algorithm to find the optimum. Also, by wrongly setting the selection intensity, one might not be able to explore the whole search space.

1 Introduction

In recent years, much progress has been made in developing optimization techniques that use probabilistic models to guide the search for optimal or high quality solutions. These methods are called Estimation of Distribution Algorithms (EDA), Probabilistic Model Building Algorithms (PMBGA) or Iterated Density Estimation Evolutionary Algorithms (IDEAs). They constitute a rapidly growing, yet already established field of evolutionary computation.

EDA share the common feature that the joint distribution of promising solutions is estimated on the basis of an adequate probabilistic model. Then, new candidate solutions are generated by sampling from the estimated model. Genetic operators like crossover and mutation, which are widely used in the field of traditional genetic algorithms (GA), are not used to generate new solutions. Rigorously applying this principle, EDA have been developed for the discrete domain (for example, see [2, 6, 9]), the continuous domain (for example [1, 3]), as well as the mixed discrete-continuous domain (see for example [8]).

Their behaviour and area of applicability are most often demonstrated experimentally. But in contrast to the wide availability of experimental results, theoretical results on the behaviour of EDA are still rare. This is especially the case for real-valued EDA.

Our work focuses on the Univariate Marginal Distribution Algorithm in the continuous domain (UMDAC), first mentioned in [7]. The interested reader might want to consult the work by González, Lozano and Larrañaga ([4]). They analyze the UMDAC algorithm with tournament selection for linear and quadratic fitness functions.

In our paper, we present complementary work. We analyze UMDAC behaviour when the truncation selection scheme is used. We present an approach to model the behaviour on the set of monotonous functions. We focus on monotonous functions to model UMDAC behaviour far from the optimum in unimodal search spaces. We derive formulae that allow us to readily compute population statistics for a given generation t. Furthermore, we show how the population statistics converge as t tends to infinity. We discuss why this behaviour can be problematic.

This article is structured as follows. In section 2, the UMDAC algorithm with truncation selection is introduced. Afterwards, we analyze the effect of the truncation selection scheme for use with monotonous fitness functions in section 3. Then, we derive analytical expressions in section 4. First, we show how population statistics change from generation t to generation t+1 (sections 4.3 and 4.4), then we analyze population statistics in generation t (section 4.5). Finally, we investigate the convergence behaviour of UMDAC in section 4.6. Our results are discussed in section 4.7. In section 5, the paper ends with conclusions and a short outlook.

2 UMDAC with truncation selection

In the following paragraphs, we describe the UMDAC algorithm with truncation selection.

UMDAC was introduced in [7] as a population-based technique for optimizing continuous functions. If we assume that we want to maximize an n-dimensional function f by using the UMDAC algorithm, a single solution is represented as a vector $x \in \mathbb{R}^n$. Using UMDAC, we seek to find the vector $x^*$ that maximizes f.

A first population of candidate solutions is sampled uniformly from the set of feasible solutions. The fitness of each individual is evaluated using f. Now we use truncation selection of the best $\alpha \cdot 100\%$ individuals. This means, given a population size of Z individuals, we select the $\alpha \cdot Z$ best individuals. Selection pushes the population towards promising regions of the search space.

From these selected individuals, the following probabilistic model is estimated. In UMDAC it is assumed that the joint distribution of the selected individuals follows an n-dimensional normal distribution that factorizes over n univariate normals. That is, the covariances between all $x_i$ and $x_j$ are 0 for all $i \neq j$. Thus, in generation t, the n variables $X_{1 \ldots n}$ each follow a univariate normal distribution with mean $\mu_i^t$ and standard deviation $\sigma_i^t$:

$$X_i \sim N(\mu_i^t, \sigma_i^t), \qquad P(X_i = x_i) = \phi_{\mu_i^t, \sigma_i^t}(x_i) = \frac{1}{\sigma_i^t \sqrt{2\pi}} \exp\left\{ -\frac{(x_i - \mu_i^t)^2}{2 (\sigma_i^t)^2} \right\} \quad (1)$$

The parameters $\mu_i^t$ and $\sigma_i^t$ are estimated from the selected individuals by using the well-known maximum likelihood estimators for the moments of the univariate normal distribution.

Now, new individuals are generated by sampling from the estimated joint density. These individuals completely replace the old population. The algorithm continues with truncation selection again. This process is iterated until a termination criterion is met. For further details on UMDAC, we refer to [7].
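To make this loop concrete, here is a minimal Python sketch of one possible implementation. It is an illustration, not the authors' code: the fitness function `f`, the population size `Z = 200`, the truncation fraction `alpha = 0.3`, and the generation count are assumed example values.

```python
import numpy as np

def umda_c(f, bounds, Z=200, alpha=0.3, generations=50):
    """Minimal UMDAc sketch with truncation selection (illustrative).

    f      -- fitness function to maximize, mapping (Z, n) arrays to (Z,) fitnesses
    bounds -- (n, 2) array of per-variable lower/upper bounds for the first population
    """
    bounds = np.asarray(bounds, dtype=float)
    n = bounds.shape[0]
    rng = np.random.default_rng(0)

    # First population: sampled uniformly from the feasible region.
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(Z, n))

    for t in range(generations):
        # Truncation selection: keep the best alpha * Z individuals.
        fitness = f(pop)
        best = np.argsort(fitness)[-int(alpha * Z):]
        selected = pop[best]

        # Maximum-likelihood estimates of the univariate normal marginals.
        mu = selected.mean(axis=0)
        sigma = selected.std(axis=0)

        # New population: sampled from the factorized normal model,
        # completely replacing the old one.
        pop = rng.normal(mu, sigma, size=(Z, n))

    return mu, sigma

# Example: maximize a simple increasing (linear) fitness on [-5, 5]^2.
mu, sigma = umda_c(lambda x: x.sum(axis=1), [(-5, 5), (-5, 5)])
print(mu, sigma)
```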

In UMDAC, the essential parameters are $\mu_i^t$ and $\sigma_i^t$. We are interested in how these parameters change over time. Therefore, in the following section we make some assumptions on the structure of f.

3 Monotonous fitness functions and truncation selection

In the previous section, the UMDAC algorithm was introduced. We saw that a fitness function f needs to be specified for evaluating the population. In many works on parameter optimization, highly complex fitness functions are used as benchmark problems. These functions exhibit deceptive structures, multiple optima, and other features that make optimizing them a stiff test.

In this article, we make simplifying assumptions on the structure of f. In particular, we focus on the behaviour of UMDAC on monotonous functions. This set of functions includes, but is not limited to, linear ones. This can be seen as a way to model the structure of the search space far away from the optimum. A nice side effect is the reduced complexity of our analysis.

In the following, we introduce monotonous fitness functions. Furthermore, we show an interesting result that occurs when combining truncation selection with monotonous fitness functions. This result will be of some importance later on.

Let S be a population of individuals. Let $X^j, X^k \in S$ be two distinct individuals of the population and let $g_i: \mathbb{R} \to \mathbb{R}$ be a fitness function defined over the i-th gene of the individuals (denoted as $X_i^j$ and $X_i^k$). Then,

$$g_i \text{ is increasing if } X_i^j < X_i^k \text{ implies } g_i(X_i^j) < g_i(X_i^k) \quad \forall\, X^j, X^k \in S,$$
$$g_i \text{ is decreasing if } X_i^j < X_i^k \text{ implies } g_i(X_i^j) > g_i(X_i^k) \quad \forall\, X^j, X^k \in S. \quad (2)$$

We consider a fitness landscape monotonous if the fitness function f is either increasing or decreasing. Note that the class of monotonous functions includes, but is not limited to, linear functions.

Assume that a population P of individuals is given. We use $l$ different increasing functions $f_{1 \ldots l}$ to evaluate this population P. After each evaluation process, we use truncation selection of the best $\alpha \cdot 100\%$ of the individuals and call the $l$ sets of selected individuals $M_{1 \ldots l}$. It is a simple, yet interesting fact that all sets $M_{1 \ldots l}$ have to be identical. Note that the fitness of the selected individuals may of course be different. For our analysis it is more important that the selected individuals are identical. Note that if all fitness functions are decreasing, this fact holds as well.

UMDAC uses density estimation and sampling to generate new candidate solutions. In the density estimation process, the fitness of the individuals is not considered. Density estimation relies solely on the genotypes of the selected individuals, that is, on the $x_i$. As the parameters $\mu_i$ and $\sigma_i$ are estimated from the $x_i$, they are identical for all $f_{1 \ldots l}$.

This fact simplifies our further analysis. We can now state that UMDAC will behave the same for all increasing fitness functions (and for all decreasing functions). Thus, we can base our analysis on the simplest monotonous function, the linear one. Yet, we know that our results are valid for all monotonous functions.
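This invariance is easy to check numerically. The following sketch (with arbitrarily chosen increasing functions; none of them come from the paper) verifies that truncation selection returns the same set of individuals under every increasing fitness function:

```python
import numpy as np

rng = np.random.default_rng(1)
pop = rng.normal(0.0, 1.0, size=500)   # one-dimensional population
alpha = 0.3
k = int(alpha * len(pop))

# Three different increasing fitness functions.
increasing_fs = [lambda x: 2.0 * x + 5.0,   # linear
                 lambda x: x ** 3,          # cubic
                 lambda x: np.arctan(x)]    # bounded but still increasing

# Indices of the selected individuals under each fitness function.
selections = [set(np.argsort(f(pop))[-k:]) for f in increasing_fs]

# All selected sets are identical, although the fitness values differ.
assert selections[0] == selections[1] == selections[2]
print(sorted(selections[0])[:5])
```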

4 UMDAC for monotonous fitness functions

In the following paragraphs, we model the behaviour of UMDAC with truncation selection on fitness functions of the type

$$f(x) = \sum_{i=1}^{n} g_i(x_i), \quad (3)$$


where $g_i(x_i)$ is an increasing function. Note that the case of decreasing functions does not provide additional insight. Thus, we focus our analysis on increasing functions.

The specific structure of the fitness function allows us a decomposition. UMDAC factorizes over n univariate normals. The fitness function consists of a sum of n univariate monotonous functions.

Thus, we can reduce our analysis to the analysis of a single $g_i(x_i)$. We develop mathematical expressions that model how $\mu_i$ and $\sigma_i$ change over time. More specifically, we are interested in $\mu_i^{t+1}$ and $\sigma_i^{t+1}$ given the corresponding parameter values at generation t.

Furthermore, we analyze population statistics for a specific generation and the limit behaviour of UMDAC. This means we investigate the convergence of $\mu_i^t$ and $\sigma_i^t$ for $t \to \infty$. These calculations are made under the assumption of an infinite population size. For finite samples, they can be seen as an approximation. Here, appropriate order statistics might be used.

4.1 Notation

The following symbols are used in our analysis:

$\phi(x)$: standard normal density at value x
$\phi_{\mu,\sigma}(x)$: normal density with mean $\mu$ and standard deviation $\sigma$ at value x
$\Phi(x)$: cumulative standard normal distribution, $x \cdot 100\%$ quantile
$\Phi_{\mu,\sigma}(x)$: cumulative distribution of the normal distribution with mean $\mu$ and standard deviation $\sigma$, $x \cdot 100\%$ quantile
$\Phi^{-1}(x)$: quantile function of the standard normal distribution
$\Phi_{\mu,\sigma}^{-1}(x)$: quantile function of the normal distribution with mean $\mu$ and standard deviation $\sigma$

4.2 Monotonous fitness functions and truncation selection

We analyze the truncation selection step in the presence of fitness functions of type (3).

Due to the structure of UMDAC's probabilistic model and the structure of f(x), we can decompose the fitness function and analyze the behaviour of each $\mu_i$ and $\sigma_i$ independently.

As we have seen in section 3, we can simplify our approach even further and replace all $g_i(x_i)$ by linear functions of the form $y_i(x_i) = a_i x_i + b_i$ and study how truncation selection influences the population statistics.

In UMDAC, new candidate solutions are generated by sampling new individuals $x_i$ from a normal distribution with mean $\mu_i^t$ and variance $(\sigma_i^t)^2$. The fitness $y_i$ is obtained from a linear function $y_i = a_i x_i + b_i$. As the $x_i$ are realizations of a random variable, the fitness is a random variable as well. The distribution of the fitness Y can be expressed in terms of a normal random variable with mean $\mu_f$ and variance $\sigma_f^2$:

$$\mu_f = a_i \cdot \mu_i^t + b_i, \qquad \sigma_f^2 = a_i^2 \cdot (\sigma_i^t)^2 \quad (4)$$

By truncation selection, the best $\alpha \cdot 100\%$ of the individuals are selected. Thus, all individuals with a fitness larger than a fitness minimum $y_m$ are selected. Their probabilities sum up to $\alpha$. Since the fitness is normally distributed, $y_m$ can be obtained from the quantile function of the normal distribution as follows:

$$y_m = \Phi_{\mu_f, \sigma_f}^{-1}(1 - \alpha) = \Phi^{-1}(1 - \alpha) \cdot a_i \sigma_i^t + a_i \mu_i^t + b_i$$

Statistically, the fitness distribution is truncated from below at $y_m$. We refer to $y_m$ as the fitness truncation point. Now, we are interested in the individual $x_m$ that corresponds to the fitness value $y_m$. All individuals $x_i > x_m$ are selected. We call $x_m$ the corresponding population truncation point. This is illustrated graphically in figure 1. The fitness truncation point $y_m$ can be transformed into the population truncation point $x_m$ as follows.

$$x_m = y_i^{-1}(y_m) = \frac{y_m - b_i}{a_i} = \frac{\Phi^{-1}(1-\alpha) \cdot a_i \sigma_i^t + a_i \mu_i^t + b_i - b_i}{a_i} = \mu_i^t + \sigma_i^t \, \Phi^{-1}(1-\alpha) = \Phi_{\mu_i^t, \sigma_i^t}^{-1}(1-\alpha), \quad a_i \neq 0 \quad (5)$$

It is interesting to see that the truncation point is obviously independent of $a_i$ and $b_i$ ($a_i > 0$). Put differently, no matter which linear function with $a_i > 0$ we choose, the population truncation point remains the same. Thus, the effect of selection is independent of $a_i$ and $b_i$ for all $a_i > 0$. No matter how these parameters are chosen, the same individuals are selected.

As we have seen in this section, the selected individuals are identical for all linear functions $y_i(x_i)$ with $a_i > 0$. Furthermore, the population truncation point relies solely on statistical parameters of the population. Thus, the selected individuals can be obtained from the population statistics directly, without taking a look at the fitness landscape.
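Under the stated assumptions, equation (5) makes the population truncation point directly computable from the population statistics. The following sketch (illustrative parameter values) also double-checks numerically that $x_m$ is independent of $a_i$ and $b_i$:

```python
import numpy as np
from scipy.stats import norm

mu_t, sigma_t, alpha = 2.0, 1.5, 0.3

# Population truncation point from equation (5): the (1 - alpha)-quantile
# of the population distribution, independent of a_i and b_i.
x_m = mu_t + sigma_t * norm.ppf(1.0 - alpha)

# The same point recovered through the fitness distribution for two
# different linear fitness functions y = a * x + b with a > 0.
for a, b in [(1.0, 0.0), (7.0, -3.0)]:
    y_m = norm.ppf(1.0 - alpha, loc=a * mu_t + b, scale=a * sigma_t)
    assert np.isclose((y_m - b) / a, x_m)

print(x_m)
```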

4.3 Change of $\mu^t$ to $\mu^{t+1}$

We have seen that the effect of truncation selection is identical for all linear functions $y_i$ with positive gradient. We have also shown that, for predicting the behaviour of UMDAC, we do not need to model the distribution of the fitness. The change in population statistics can be obtained from the population statistics directly. In this section, we derive mathematical expressions for the change of the population mean from generation t to generation t+1.


[Figure 1: Impact of truncation selection of the best $\alpha \cdot 100\%$ individuals on the fitness distribution (b) and the population density (a). We assume an increasing fitness function (c). The fitness distribution is truncated from the left at $y_m$. The population distribution is truncated from the left at the corresponding point $x_m$.]

We model selection by truncation of the normally distributed population density. Assume that a population is distributed with mean $\mu^t$ and standard deviation $\sigma^t$. The fitnesses of the individuals are calculated and the best $\alpha \cdot 100\%$ individuals are selected. This equals a left-truncation of the normal distribution at $x_m$.

To do this, we first refer to results from the econometric literature on the truncated normal distribution (see [5], appendix). We start with presenting the moments of a doubly truncated normal distribution. A doubly truncated normal distribution with mean $\mu$ and variance $\sigma^2$, where $x_a < x < x_b$, can be modeled as a conditional density where $x \in A = [x_a, x_b]$ and $-\infty < x_a < x_b < +\infty$:

$$f(x \mid x \in A) = \frac{\frac{1}{\sigma}\,\phi\left(\frac{x - \mu}{\sigma}\right)}{\Phi\left(\frac{x_b - \mu}{\sigma}\right) - \Phi\left(\frac{x_a - \mu}{\sigma}\right)} \quad (6)$$

The moment generating function of this distribution is:

$$m(t) = E\left(e^{tX} \mid X \in A\right) = e^{\mu t + \sigma^2 t^2 / 2} \cdot \frac{\Phi\left(\frac{x_b - \mu}{\sigma} - \sigma t\right) - \Phi\left(\frac{x_a - \mu}{\sigma} - \sigma t\right)}{\Phi\left(\frac{x_b - \mu}{\sigma}\right) - \Phi\left(\frac{x_a - \mu}{\sigma}\right)} \quad (7)$$

From the moment generating function, we can derive the statistical moments of the distribution. We are interested in the mean. It can be computed as follows:

$$E(X \mid X \in A) = m'(t)\big|_{t=0} = \mu - \sigma \cdot \frac{\phi\left(\frac{x_b - \mu}{\sigma}\right) - \phi\left(\frac{x_a - \mu}{\sigma}\right)}{\Phi\left(\frac{x_b - \mu}{\sigma}\right) - \Phi\left(\frac{x_a - \mu}{\sigma}\right)} \quad (8)$$
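As a quick plausibility check of equation (8), one can compare the closed form against a Monte Carlo estimate; the parameter values below are arbitrary illustrations:

```python
import numpy as np
from scipy.stats import norm

mu, sigma, xa, xb = 1.0, 2.0, -1.0, 4.0

# Closed-form mean of the doubly truncated normal, equation (8).
za, zb = (xa - mu) / sigma, (xb - mu) / sigma
mean_closed = mu - sigma * (norm.pdf(zb) - norm.pdf(za)) / (norm.cdf(zb) - norm.cdf(za))

# Monte Carlo estimate: sample, then keep only draws inside [xa, xb].
rng = np.random.default_rng(3)
x = rng.normal(mu, sigma, size=2_000_000)
x = x[(x > xa) & (x < xb)]

print(mean_closed, x.mean())   # the two values should nearly agree
```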

We are not interested in the mean of a doubly truncated normal distribution, but in the mean of a left-truncated normal distribution. Thus, we now let $x_b$ tend to infinity. This results in:

$$E(X \mid X > x_a) = \mu + \sigma \cdot \frac{\phi\left(\frac{x_a - \mu}{\sigma}\right)}{1 - \Phi\left(\frac{x_a - \mu}{\sigma}\right)} \quad (9)$$

From section 4.2 we know that $x_a = x_m = \Phi_{\mu_i^t, \sigma_i^t}^{-1}(1 - \alpha)$. Inserting and rearranging leads to:

$$\mu_i^{t+1} = E(X \mid X > x_m) = \mu_i^t + \sigma_i^t \cdot \frac{\phi\left(\Phi^{-1}(\alpha)\right)}{\alpha} = \mu_i^t + \sigma_i^t \cdot d(\alpha), \quad \text{where } d(\alpha) = \frac{\phi\left(\Phi^{-1}(\alpha)\right)}{\alpha} \quad (10)$$

Note that the mean of the population after applying truncation selection can now be easily computed. The factor $d(\alpha)$ is illustrated in figure 2. It can be seen that for $\alpha \to 1$ the factor $d(\alpha)$ converges to 0, leaving the mean of the population unchanged in t+1.
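Equation (10) is likewise easy to validate by simulation. The sketch below (illustrative values for $\mu^t$, $\sigma^t$ and $\alpha$) compares the predicted mean after selection with the empirical mean of the best $\alpha$-fraction of a large sample:

```python
import numpy as np
from scipy.stats import norm

def d(alpha):
    # d(alpha) = phi(Phi^{-1}(alpha)) / alpha, from equation (10).
    return norm.pdf(norm.ppf(alpha)) / alpha

mu_t, sigma_t, alpha = 0.0, 1.0, 0.3

# Predicted mean after truncation selection.
mu_next = mu_t + sigma_t * d(alpha)

# Monte Carlo estimate: mean of the best alpha-fraction of a large sample.
rng = np.random.default_rng(2)
x = rng.normal(mu_t, sigma_t, size=1_000_000)
x_selected = np.sort(x)[-int(alpha * len(x)):]

print(mu_next, x_selected.mean())   # the two values should nearly agree
```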

4.4 Change of $\sigma^t$ to $\sigma^{t+1}$

Again, we model truncation selection by truncation of the normally distributed population density. Therefore, we again make use of the moment generating function as in section 4.3. We let $x_b \to \infty$. Finally, we get:

$$\mathrm{Var}(X \mid X > x_m) = E(X^2 \mid X > x_m) - E(X \mid X > x_m)^2 = \sigma^2 \left[ 1 + \frac{x_m - \mu}{\sigma} \cdot \frac{\phi\left(\frac{x_m - \mu}{\sigma}\right)}{1 - \Phi\left(\frac{x_m - \mu}{\sigma}\right)} - \left( \frac{\phi\left(\frac{x_m - \mu}{\sigma}\right)}{1 - \Phi\left(\frac{x_m - \mu}{\sigma}\right)} \right)^2 \right] \quad (11)$$

We use this equation in the context of our model by assigning appropriate indices, inserting $x_m$, simplifying, and rearranging. This leads us to:

$$(\sigma_i^{t+1})^2 = (\sigma_i^t)^2 \cdot c(\alpha), \quad \text{where } c(\alpha) = 1 + \frac{\Phi^{-1}(1-\alpha)\, \phi\left(\Phi^{-1}(\alpha)\right)}{\alpha} - \left( \frac{\phi\left(\Phi^{-1}(\alpha)\right)}{\alpha} \right)^2 \quad (12)$$

Now, we can compute the population variance in t+1, given the population variance in t. The factor $c(\alpha)$ is plotted in figure 2. It can be seen that if $\alpha \to 1$, the factor $c(\alpha)$ converges to 1, leaving the variance in generation t+1 unchanged.

[Figure 2: Illustration of $c(\alpha)$ and $d(\alpha)$.]

4.5 Population statistics in generation t

The last two subsections examined the change of the population statistics from one generation to the next generation.

Now, we calculate how the population mean and variance depend on t. To obtain the corresponding population statistics, we sum up the iterative formulae that have been developed in 4.3 and 4.4. Doing this, we get the following result for the mean after some calculations. In generation $t > 0$, the mean $\mu^t$ can be computed as:

$$\mu^t = \mu^0 + \sigma^0 \cdot d(\alpha) \cdot \sum_{i=1}^{t} \left( \sqrt{c(\alpha)} \right)^{i-1} \quad (13)$$

Similarly, in generation t, the variance $(\sigma^t)^2$ can be computed as:

$$(\sigma^t)^2 = (\sigma^0)^2 \cdot c(\alpha)^t \quad (14)$$
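The closed forms (13) and (14) should reproduce the generation-by-generation recursions (10) and (12). A short consistency check under assumed starting values $\mu^0 = 0$, $\sigma^0 = 1$:

```python
import numpy as np
from scipy.stats import norm

def d(alpha):
    return norm.pdf(norm.ppf(alpha)) / alpha

def c(alpha):
    # c(alpha) from equation (12).
    return 1.0 + norm.ppf(1.0 - alpha) * d(alpha) - d(alpha) ** 2

mu0, sigma0, alpha, t = 0.0, 1.0, 0.3, 10

# Iterative recursions from sections 4.3 and 4.4.
mu, sigma = mu0, sigma0
for _ in range(t):
    mu = mu + sigma * d(alpha)
    sigma = sigma * np.sqrt(c(alpha))

# Closed forms from equations (13) and (14).
mu_closed = mu0 + sigma0 * d(alpha) * sum(np.sqrt(c(alpha)) ** (i - 1)
                                          for i in range(1, t + 1))
var_closed = sigma0 ** 2 * c(alpha) ** t

assert np.isclose(mu, mu_closed) and np.isclose(sigma ** 2, var_closed)
print(mu_closed, var_closed)
```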

4.6 Convergence of population statistics for $t \to \infty$

In this section, we analyze the convergence of UMDAC. That means we analyze how the population statistics develop over time, assuming that $t \to \infty$.

First, we consider the mean. Therefore, we make use of (13). Note that the sum is the only part of the expression that depends on t. This leads us to:

$$\lim_{t \to \infty} \mu^t = \mu^0 + \sigma^0 \cdot d(\alpha) \cdot \underbrace{\lim_{t \to \infty} \sum_{i=1}^{t} \left( \sqrt{c(\alpha)} \right)^{i-1}}_{\text{infinite geometric series}} \quad (15)$$

The last part is an infinite geometric series that can be simplified. After doing this, we get the following result:

$$\lim_{t \to \infty} \mu^t = \mu^0 + \sigma^0 \cdot \frac{d(\alpha)}{1 - \sqrt{c(\alpha)}} \quad (16)$$

We can decompose the last factor, leading to:

$$\lim_{t \to \infty} \mu^t = \mu^0 + \sigma^0 \cdot \frac{h(\alpha)}{1 - \sqrt{1 + z(\alpha) - h(\alpha)^2}} \quad (17)$$

with:

$$e(\alpha) = \Phi^{-1}(1 - \alpha), \qquad h(\alpha) = \frac{\phi\left(e(\alpha)\right)}{\alpha}, \qquad z(\alpha) = e(\alpha) \cdot h(\alpha) \quad (18)$$


This expression allows us to compute the maximum distance that UMDAC's mean will move across the search space for a given selection intensity of $\alpha \cdot 100\%$ and monotonous fitness functions.
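Based on equation (16), this maximum distance can be evaluated directly; the sketch below (an illustration, with $\sigma^0 = 1$) prints the bound for a few selection intensities:

```python
import numpy as np
from scipy.stats import norm

def d(alpha):
    return norm.pdf(norm.ppf(alpha)) / alpha

def c(alpha):
    return 1.0 + norm.ppf(1.0 - alpha) * d(alpha) - d(alpha) ** 2

def max_distance(sigma0, alpha):
    # Limit of equation (16): the farthest the mean can travel,
    # measured from mu^0, for a given selection intensity alpha.
    return sigma0 * d(alpha) / (1.0 - np.sqrt(c(alpha)))

for alpha in (0.1, 0.3, 0.5, 0.8):
    print(alpha, max_distance(sigma0=1.0, alpha=alpha))
```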

Now, we consider the variance. We make use of (14) and let t tend to infinity. Note that $0 < c(\alpha) < 1$. This leads to

$$\lim_{t \to \infty} (\sigma^t)^2 = \lim_{t \to \infty} \left[ (\sigma^0)^2 \cdot c(\alpha)^t \right] = 0 \quad (19)$$

Thus, the variance converges towards 0.

4.7 Interpretation of results

In the previous paragraphs, we derived expressions that describe the behaviour of the UMDAC algorithm with truncation selection on monotonous functions.

We have seen that the algorithm converges since the population variance converges towards 0. The maximal distance that the mean of the population can move across the search space is bounded. This distance depends solely on

* the mean of the first population,

* the variance of the first population,

* and the selection intensity $\alpha$.

This has some important effects on the behaviour of UMDAC. First, if the optimum of the search space lies outside this maximal distance, the algorithm cannot find it and one will experience premature convergence. Furthermore, the first population is usually sampled uniformly in the space of feasible solutions. The exploration of the search space relies on density estimation and sampling. However, by choosing the number of individuals that are selected, one can adjust the maximal distance that the mean of the population will move. One needs to be careful when choosing the selection intensity $\alpha$. By wrongly setting $\alpha$, the algorithm might not even be able to sample all feasible points. In essence, these results agree with the results from [4]. They show that for simple linear functions the same problems exist when the tournament selection scheme is used.

5 Conclusion

In this article, we have analyzed the behaviour of UMDAC on monotonous fitness functions. This set of functions includes, but is not limited to, linear functions. To this end, we have developed mathematical expressions for the mean and the variance of the population.

Our findings are as follows. First, we have shown that using truncation selection of the best $\alpha \cdot 100\%$ of the individuals has an interesting effect. In this case, a linear fitness with any positive (negative) gradient can be used as a valid replacement for any increasing (decreasing) function. This replacement has no effect on the population statistics but makes the analysis easier. Furthermore, results obtained for linear fitness functions with positive (negative) gradient are valid for all increasing (decreasing) fitness functions.

Then we have analyzed the population statistics (mean and variance) for a given generation t. For doing this, the fitness landscape does not need to be modeled explicitly.

We showed how the convergence behaviour of UMDAC depends on the selection pressure $\alpha$. Furthermore, we obtained the maximal distance that UMDAC's mean will move across the search space for a monotonous function and found that UMDAC will behave identically on all of these fitness functions.

This can be problematic if the optimum lies outside this maximal distance. In this case, the algorithm will not find it but will converge before reaching it. Furthermore, by wrongly setting $\alpha$, the algorithm might not even be able to explore the entire search space. These are important limitations of the UMDAC algorithm with truncation selection.

Work is underway to apply the findings described above to predict the success probability of UMDAC. Also, we want to model UMDAC behaviour on peaks and analyze the case where the covariances between the $x_i$ are different from zero.

Bibliography

[1] C. W. Ahn, R. S. Ramakrishna, and D. E. Goldberg. The real-coded Bayesian optimization algorithm: Bringing the strength of BOA into the continuous world. In Proceedings of the GECCO 2004, 2004.

[2] S. Baluja. Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. Technical Report CMU-CS-94-163, Carnegie Mellon University, 1994.

[3] P. A. N. Bosman. Design and Application of Iterated Density-Estimation Evolutionary Algorithms. PhD thesis, University of Utrecht, Institute of Information and Computer Science, 2003.

[4] C. González, J. A. Lozano, and P. Larrañaga. Mathematical modelling of UMDAc algorithm with tournament selection. Behaviour on linear and quadratic functions. International Journal of Approximate Reasoning, 31(3):313-340, 2002.

[5] W. H. Greene. Econometric Analysis, 5th edition. Prentice Hall, Upper Saddle River, 2003.

[6] G. Harik, F. G. Lobo, and D. E. Goldberg. The compact genetic algorithm. In Proceedings of the IEEE Conference on Evolutionary Computation, pages 523-528, 1998.


[7] P. Larrañaga, R. Etxeberria, J. A. Lozano, and J. M. Peña. Optimization in continuous domains by learning and simulation of Gaussian networks. In A. S. Wu, editor, Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program, pages 201-204, 2000.

[8] J. Ocenasek and J. Schwarz. Estimation of distribution algorithm for mixed continuous-discrete optimization problems. In 2nd Euro-International Symposium on Computational Intelligence, pages 227-232, 2002.

[9] M. Pelikan. Bayesian optimization algorithm: From single level to hierarchy. PhD thesis, University of Illinois at Urbana-Champaign, Dept. of Computer Science, Urbana, IL, 2002. Also IlliGAL Report No. 2002023.
