Chapter 8: Introduction to Evolutionary Computation

Computational Intelligence: Second Edition


Page 1: Computational Intelligence: Second Edition

Chapter 8: Introduction to Evolutionary Computation


Computational Intelligence: Second Edition


Page 2: Computational Intelligence: Second Edition

Contents

Introduction
Generic Evolutionary Algorithm
Representation
Initial Population
Fitness Function
Selection
Reproduction Operators
Stopping Conditions
Evolutionary Computation versus Classical Optimization

Page 3: Computational Intelligence: Second Edition

Some Theories about Evolution

Evolution is an optimization process: the aim is to improve the ability of an organism to survive in dynamically changing and competitive environments

Two main theories on biological evolution:

Lamarckian (1744-1829) view
Darwinian (1809-1882) view

Page 4: Computational Intelligence: Second Edition

Lamarckian Evolution

Heredity, i.e. the inheritance of acquired traits

Individuals adapt during their lifetimes, and transmit their traits to their offspring

Offspring continue to adapt

Method of adaptation rests on the concept of use and disuse

Over time, individuals lose characteristics they do not require and develop those which are useful by “exercising” them

Page 5: Computational Intelligence: Second Edition

Darwinian Evolution

Theory of natural selection and survival of the fittest

In a world with limited resources and stable populations, each individual competes with others for survival

Those individuals with the “best” characteristics (traits) are more likely to survive and to reproduce, and those characteristics will be passed on to their offspring

These desirable characteristics are inherited by the following generations, and (over time) become dominant among the population

During production of a child organism, random events cause random changes to the child organism’s characteristics

What about Alfred Wallace?

Page 6: Computational Intelligence: Second Edition

Evolutionary Computation

Evolutionary computation (EC) refers to computer-based problem-solving systems that use computational models of evolutionary processes, such as

natural selection,

survival of the fittest, and

reproduction,

as the fundamental components of such computational systems

Page 7: Computational Intelligence: Second Edition

Main Components of an Evolutionary Algorithm

Evolution via natural selection of a randomly chosen population of individuals can be thought of as a search through the space of possible chromosome values

In this sense, an evolutionary algorithm (EA) is a stochasticsearch for an optimal solution to a given problem

Main components of an EA:

an encoding of solutions to the problem as a chromosome;
a function to evaluate the fitness, or survival strength, of individuals;
initialization of the initial population;
selection operators; and
reproduction operators.

Page 8: Computational Intelligence: Second Edition

Generic Evolutionary Algorithm: Algorithm 8.1

Let t = 0 be the generation counter;
Create and initialize an n_x-dimensional population, C(0), to consist of n_s individuals;
while stopping condition(s) not true do
    Evaluate the fitness, f(x_i(t)), of each individual, x_i(t);
    Perform reproduction to create offspring;
    Select the new population, C(t + 1);
    Advance to the new generation, i.e. t = t + 1;
end
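Algorithm 8.1 can be sketched directly in Python. The particular operators below (bit-flip mutation for reproduction, elitist truncation for selection, a fixed generation limit as stopping condition) are illustrative assumptions, not choices prescribed by the slides:

```python
import random

def generic_ea(fitness, n_x, n_s, max_gens=50, p_m=0.05):
    """Generic EA skeleton after Algorithm 8.1.  The operators here
    (bit-flip mutation, elitist truncation selection) are placeholders."""
    # t = 0: create and initialize the population C(0)
    population = [[random.randint(0, 1) for _ in range(n_x)]
                  for _ in range(n_s)]
    for t in range(max_gens):          # stopping condition: generation limit
        # Reproduction: one mutated offspring per individual
        offspring = [[1 - g if random.random() < p_m else g for g in x]
                     for x in population]
        # Selection of C(t + 1): evaluate fitness, keep the n_s best
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:n_s]
    return max(population, key=fitness)

# Maximize the number of ones in a 20-bit chromosome
best = generic_ea(sum, n_x=20, n_s=10)
```

With elitist truncation the best fitness never decreases, so the onemax example converges toward the all-ones string.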

Page 9: Computational Intelligence: Second Edition

Evolutionary Computation Paradigms

Genetic algorithms model genetic evolution.
Genetic programming, based on genetic algorithms, but individuals are programs.
Evolutionary programming, derived from the simulation of adaptive behavior in evolution (i.e. phenotypic evolution).
Evolution strategies, geared toward modeling the strategic parameters that control variation in evolution, i.e. the evolution of evolution.
Differential evolution, similar to genetic algorithms, differing in the reproduction mechanism used.
Cultural evolution, which models the evolution of culture.
Co-evolution, where individuals evolve in competition or cooperation with one another.

Page 10: Computational Intelligence: Second Edition

The Chromosome

Characteristics of individuals are represented by long strings of information contained in the chromosomes of the organism

Chromosomes are structures of compact intertwined molecules of DNA, found in the nucleus of organic cells

Each chromosome contains a large number of genes, where a gene is the unit of heredity

An alternative form of a gene is referred to as an allele

Page 11: Computational Intelligence: Second Edition

The “Artificial” Chromosome

In the context of EC, the following notation is used:

Chromosome (genome): Candidate solution
Gene: A single variable to be optimized
Allele: Assignment of a value to a variable

Two classes of evolutionary information:

A genotype describes the genetic composition of an individual, as inherited from its parents

A phenotype is the expressed behavioral traits of an individual in a specific environment

Page 12: Computational Intelligence: Second Edition

Binary Representation

Binary-valued variables: x_i ∈ {0, 1}^{n_x}, i.e. each x_ij ∈ {0, 1}

Nominal-valued variables: each nominal value can be encoded as an n_d-dimensional bit vector, where 2^{n_d} is the total number of discrete nominal values for that variable

Continuous-valued variables: map the continuous search space to a binary-valued search space:

Each continuous-valued variable is mapped to an n_d-dimensional bit vector,

φ : R → {0, 1}^{n_d}

Page 13: Computational Intelligence: Second Edition

Binary Representation (cont)

Transform each individual

x = (x_1, . . . , x_j, . . . , x_{n_x})

with x_j ∈ R to the binary-valued individual,

b = (b_1, . . . , b_j, . . . , b_{n_x})

where

b_j = (b_{(j−1)n_d+1}, . . . , b_{jn_d})

with b_l ∈ {0, 1} and the total number of bits, n_b = n_x n_d
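A minimal sketch of this mapping for one continuous variable, assuming (consistent with the decoding formula on the next slide) a uniform discretization into 2^{n_d} levels over [x_min, x_max], most significant bit first:

```python
def encode(x, x_min, x_max, n_d):
    """Map a continuous value x in [x_min, x_max] to an n_d-bit vector
    (most significant bit first): the discretization phi sketched above."""
    # Scale x to an integer level k in [0, 2^n_d - 1]
    k = round((x - x_min) / (x_max - x_min) * (2**n_d - 1))
    # Extract the bits of k from most to least significant
    return [(k >> (n_d - 1 - l)) & 1 for l in range(n_d)]

bits = encode(0.5, 0.0, 1.0, 8)   # midpoint of the range -> level 128
```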

Page 14: Computational Intelligence: Second Edition

Binary Representation (cont)

Decoding each bj back to a floating-point representation:

Φ_j(b) = x_min,j + ((x_max,j − x_min,j) / (2^{n_d} − 1)) Σ_{l=0}^{n_d−1} b_{jn_d−l} 2^l

That is, use the mapping,

Φ_j : {0, 1}^{n_d} → [x_min,j, x_max,j]
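The decoding Φ_j can be sketched as follows, assuming the bit vector is stored most significant bit first:

```python
def decode(b_j, x_min, x_max):
    """Inverse mapping Phi_j: decode an n_d-bit vector (MSB first)
    back to a floating-point value in [x_min, x_max]."""
    n_d = len(b_j)
    # Integer value of the bitstring: sum of b * 2^l over bit positions
    k = sum(bit * 2**(n_d - 1 - l) for l, bit in enumerate(b_j))
    return x_min + (x_max - x_min) * k / (2**n_d - 1)

value = decode([1, 0, 0, 0, 0, 0, 0, 0], 0.0, 1.0)   # level 128 of 255
```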

Page 15: Computational Intelligence: Second Edition

Binary Representation (cont)

Problems with using a binary representation:

Conversion from a floating-point value to a bitstring of n_d bits has a maximum attainable accuracy of

(x_max,j − x_min,j) / (2^{n_d} − 1)

for each vector component, j = 1, . . . , n_x

Binary coding introduces Hamming cliffs:

A Hamming cliff is formed when two numerically adjacent values have bit representations that are far apart

7₁₀ = 0111₂ versus 8₁₀ = 1000₂

A large change in solution is needed for a small change in fitness
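The 7-versus-8 cliff can be checked directly:

```python
def hamming(a, b):
    """Number of bit positions in which integers a and b differ."""
    return bin(a ^ b).count("1")

# Adjacent decimal values 7 and 8 sit across a 4-bit Hamming cliff
cliff = hamming(7, 8)   # 0111 vs 1000: all four bits differ
```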

Page 16: Computational Intelligence: Second Edition

Gray Coding

In Gray coding, the Hamming distance between the representations of successive numerical values is one

For 3-bit representations:

    Binary   Gray
0   000      000
1   001      001
2   010      011
3   011      010
4   100      110
5   101      111
6   110      101
7   111      100

Converting binary strings to Gray bitstrings:

g_1 = b_1

g_l = b̄_{l−1} b_l + b_{l−1} b̄_l  (i.e. g_l = b_{l−1} ⊕ b_l)

where b_l is bit l of the binary number b_1 b_2 · · · b_{n_b}, and b_1 is the most significant bit
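Both directions of the conversion are short in Python (bitstrings held MSB first):

```python
def binary_to_gray(b):
    """Gray-code a bitstring: g_1 = b_1, g_l = b_{l-1} XOR b_l."""
    return [b[0]] + [b[l - 1] ^ b[l] for l in range(1, len(b))]

def gray_to_binary(g):
    """Inverse conversion: each binary bit is the running XOR of Gray bits."""
    b = [g[0]]
    for l in range(1, len(g)):
        b.append(b[l - 1] ^ g[l])
    return b

gray = binary_to_gray([0, 1, 1])   # 3: binary 011 -> Gray 010
```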

Page 17: Computational Intelligence: Second Edition

Gray Coding (cont)

A Gray code representation, b_j, can be converted to a floating-point representation using

Φ_j(b) = x_min,j + ((x_max,j − x_min,j) / (2^{n_d} − 1)) Σ_{l=0}^{n_d−1} ((Σ_{q=1}^{n_d−l} b_{(j−1)n_d+q}) mod 2) 2^l

Page 18: Computational Intelligence: Second Edition

Other Representations

Other representations:

Real-valued representations, where x_ij ∈ R
Integer-valued representations, where x_ij ∈ Z
Discrete-valued representations, where x_ij ∈ dom(x_j)

Tree-based representations as used in Genetic Programming

Mixed representations

Page 19: Computational Intelligence: Second Edition

Initialization

A population of candidate solutions is maintained

Values are randomly assigned to each gene from the domainof the corresponding variable

Uniform random initialization is used to ensure that the initial population is a uniform representation of the entire search space

What are the consequences of the size of the initial population?

Computational complexity per generation
Number of generations to converge
Quality of solutions obtained – exploration ability
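Uniform random initialization for a real-valued representation can be sketched as below; the per-variable bounds are an assumed input:

```python
import random

def init_population(n_s, bounds):
    """Uniform random initialization: draw each gene from the domain
    [min_j, max_j] of its variable, so that the initial population is a
    uniform sample of the whole search space."""
    return [[random.uniform(lo, hi) for (lo, hi) in bounds]
            for _ in range(n_s)]

pop = init_population(n_s=20, bounds=[(-5.0, 5.0)] * 3)
```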

Page 20: Computational Intelligence: Second Edition

Survival of the Fittest

Based on the Darwinian model, individuals with the best characteristics have the best chance to survive and to reproduce

A fitness function is used to determine the ability of an individual of an EA to survive

The fitness function, f, maps a chromosome representation into a scalar value:

f : Γ^{n_x} → R

where Γ represents the data type of the elements of an n_x-dimensional chromosome

Page 21: Computational Intelligence: Second Edition

Objective Function

The fitness function represents the objective function, Ψ, which represents the optimization problem

Sometimes the chromosome representation does not correspond to the representation expected by the objective function, in which case

f : S_C --Φ--> S_X --Ψ--> R --Υ--> R^+

S_C represents the search space of the objective function
Φ represents the chromosome decoding function
Ψ represents the objective function
Υ represents a scaling function

For example:

f : {0, 1}^{n_b} --Φ--> R^{n_x} --Ψ--> R --Υ--> R^+

Page 22: Computational Intelligence: Second Edition

Objective Function Types

Types of objective functions, resulting in different problem types:

Unconstrained objective functions, but still subject to boundary constraints

Constrained objective functions

Multi-objective functions, where more than one objective has to be optimized

Dynamic or noisy objective functions

Page 23: Computational Intelligence: Second Edition

Selection Operators

Relates directly to the concept of survival of the fittest

The main objective of selection operators is to emphasize better solutions

Selection steps in an EA:

Selection of the new population
Selection of parents during reproduction

Selective pressure (takeover time):

Defined as the speed at which the best solution will occupy the entire population by repeated application of the selection operator alone
Relates to the time required to produce a uniform population
High selective pressure reduces population diversity

Page 24: Computational Intelligence: Second Edition

Random Selection

Each individual has a probability of 1/n_s of being selected

n_s is the size of the population

No fitness information is used

Random selection has the lowest selective pressure

Page 25: Computational Intelligence: Second Edition

Proportional Selection

Biases selection towards the most-fit individuals

A probability distribution proportional to the fitness is created, and individuals are selected by sampling the distribution,

ϕ_s(x_i(t)) = f_Υ(x_i(t)) / Σ_{l=1}^{n_s} f_Υ(x_l(t))

n_s is the total number of individuals in the population
ϕ_s(x_i) is the probability that x_i will be selected
f_Υ(x_i) is the fitness of x_i, scaled to produce a positive floating-point value

Page 26: Computational Intelligence: Second Edition

Proportional Selection: Sampling Methods

Roulette wheel selection, assuming maximization and normalized fitness values (Algorithm 8.2):

Let i = 1, where i denotes the chromosome index;
Calculate ϕ_s(x_i);
sum = ϕ_s(x_i);
Choose r ∼ U(0, 1);
while sum < r do
    i = i + 1, i.e. advance to the next chromosome;
    sum = sum + ϕ_s(x_i);
end
Return x_i as the selected individual;

High selective pressure
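Algorithm 8.2 can be sketched as follows, assuming the fitness values have already been scaled to positive numbers (the f_Υ of the previous slide):

```python
import random

def roulette_wheel(population, fitnesses):
    """Roulette wheel selection (Algorithm 8.2): sample one individual
    with probability proportional to its (positive) scaled fitness."""
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]   # phi_s(x_i)
    r = random.random()                      # r ~ U(0, 1)
    acc = 0.0
    for x, p in zip(population, probs):
        acc += p
        if acc >= r:                         # the spin stops in this sector
            return x
    return population[-1]                    # guard against float rounding

pick = roulette_wheel(["a", "b", "c"], [1.0, 2.0, 7.0])
```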

Page 27: Computational Intelligence: Second Edition

Proportional Selection: Stochastic Universal Sampling

Uses a single random value to sample all of the solutions by choosing them at evenly spaced intervals

Like roulette wheel sampling, construct a wheel with sections for each of the n_s chromosomes

Instead of one hand that is spun once for each sample, a multi-armed hand is spun just once

The hand has n_s arms, equally spaced

The number of times chromosome i is sampled is the number of arms that fall into the chromosome’s section of the wheel
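A sketch of the multi-armed spin, again assuming positive fitness values; `n` arms are spaced `total/n` apart from a single random starting offset:

```python
import random

def stochastic_universal_sampling(population, fitnesses, n):
    """Stochastic universal sampling: one spin of an n-armed hand with
    equally spaced arms; chromosome i is sampled once for each arm that
    lands in its fitness-proportional sector of the wheel."""
    total = sum(fitnesses)
    step = total / n                          # spacing between arms
    start = random.uniform(0, step)           # the single random spin
    picks, acc, i = [], fitnesses[0], 0
    for arm in (start + k * step for k in range(n)):
        while acc < arm:                      # advance to this arm's sector
            i += 1
            acc += fitnesses[i]
        picks.append(population[i])
    return picks

sample = stochastic_universal_sampling(["a", "b", "c"], [1.0, 2.0, 7.0], 4)
```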

Page 28: Computational Intelligence: Second Edition

Tournament Selection

Selects a group of n_ts individuals randomly from the population, where n_ts < n_s

Select the best individual from this tournament

For crossover with two parents, tournament selection is done twice

Tournament selection limits the chance of the best individual to dominate

Large n_ts versus small n_ts

Selective pressure depends on the value of n_ts
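A minimal tournament selection sketch:

```python
import random

def tournament_select(population, fitnesses, n_ts):
    """Tournament selection: draw n_ts individuals at random (n_ts < n_s)
    and return the fittest member of the group."""
    contenders = random.sample(range(len(population)), n_ts)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]

winner = tournament_select([10, 20, 30, 40], [1.0, 2.0, 3.0, 4.0], n_ts=2)
```

A larger n_ts makes the tournament more likely to contain the globally best individual, raising selective pressure.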

Page 29: Computational Intelligence: Second Edition

Rank-Based Selection

Uses the rank ordering of fitness values to determine the probability of selection

Selection is independent of absolute fitness

Non-deterministic linear sampling:

Selects an individual, x_i, such that i ∼ U(0, U(0, n_s − 1))
The individuals are sorted in decreasing order of fitness value
Rank 0 is the best individual
Rank n_s − 1 is the worst individual

Page 30: Computational Intelligence: Second Edition

Rank-Based Selection (cont)

Linear ranking:

Assumes that the best individual creates λ̂ offspring, and the worst individual λ̃

1 ≤ λ̂ ≤ 2 and λ̃ = 2 − λ̂

The selection probability of each individual is calculated as

ϕ_s(x_i(t)) = (λ̃ + (f_r(x_i(t)) / (n_s − 1))(λ̂ − λ̃)) / n_s

where f_r(x_i(t)) is the rank of x_i(t)
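The probabilities from this formula can be computed directly. Note the rank convention implied here: rank n_s − 1 receives λ̂/n_s (the best) and rank 0 receives λ̃/n_s (the worst); the default λ̂ = 1.5 is an illustrative assumption:

```python
def linear_rank_probs(ranks, lam_hat=1.5):
    """Linear ranking selection probabilities.  ranks[i] = f_r(x_i), with
    rank n_s - 1 the best individual and rank 0 the worst; 1 <= lam_hat <= 2
    and lam_tilde = 2 - lam_hat, so the probabilities sum to 1."""
    n_s = len(ranks)
    lam_tilde = 2.0 - lam_hat
    return [(lam_tilde + (r / (n_s - 1)) * (lam_hat - lam_tilde)) / n_s
            for r in ranks]

probs = linear_rank_probs([0, 1, 2, 3])   # four individuals, worst to best
```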

Page 31: Computational Intelligence: Second Edition

Rank-Based Selection (cont)

Nonlinear ranking:

ϕ_s(x_i(t)) = (1 − e^{−f_r(x_i(t))}) / β

ϕ_s(x_i) = ν(1 − ν)^{n_s−1−f_r(x_i)}

f_r(x_i) is the rank of x_i
β is a normalization constant
ν indicates the probability of selecting the next individual

Rank-based selection operators may use any sampling method to select individuals

Page 32: Computational Intelligence: Second Edition

Boltzmann Selection

Based on the thermodynamical principles of simulated annealing

Selection probability:

ϕ(x_i(t)) = 1 / (1 + e^{f(x_i(t))/T(t)})

T(t) is the temperature parameter

An initial large value ensures that all individuals have an equal probability of being selected

As T(t) becomes smaller, selection focuses more on the good individuals

Can use any sampling method

Page 33: Computational Intelligence: Second Edition

Boltzmann Selection (cont)

Boltzmann selection can be used to select between two individuals

For example, if

U(0, 1) > 1 / (1 + e^{(f(x_i(t)) − f(x′_i(t)))/T(t)})

then x′_i(t) is selected; otherwise, x_i(t) is selected
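The pairwise rule translates directly to code. Note that in this form a candidate with lower f is favored (i.e. it behaves as minimization); at large T the choice is nearly random, and at small T nearly greedy:

```python
import math
import random

def boltzmann_select(x, fx, x_prime, fx_prime, T):
    """Boltzmann selection between two individuals: x_prime is chosen
    when U(0,1) > 1 / (1 + exp((f(x) - f(x_prime)) / T)).  With this
    form, the candidate with the lower f value is favored."""
    p = 1.0 / (1.0 + math.exp((fx - fx_prime) / T))
    return x_prime if random.random() > p else x

chosen = boltzmann_select("x", 5.0, "x_prime", 1.0, T=0.5)
```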

Page 34: Computational Intelligence: Second Edition

(µ +, λ)-Selection

Deterministic rank-based selection methods used in evolution strategies

µ indicates the number of parents

λ is the number of offspring produced from each parent

(µ, λ)-selection selects the best µ offspring for the next population

(µ + λ)-selection selects the best µ individuals from both the parents and the offspring
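The two variants differ only in whether parents compete with their offspring, which a short sketch makes explicit:

```python
def comma_selection(offspring, fitness, mu):
    """(mu, lambda)-selection: next population = best mu of offspring only."""
    return sorted(offspring, key=fitness, reverse=True)[:mu]

def plus_selection(parents, offspring, fitness, mu):
    """(mu + lambda)-selection: best mu of parents and offspring combined."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]

survivors = plus_selection([3, 9], [1, 7, 8, 2], fitness=lambda x: x, mu=2)
```

Under (µ + λ)-selection a good parent can survive indefinitely, while (µ, λ)-selection forces every individual to be replaced each generation.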

Page 35: Computational Intelligence: Second Edition

Elitism

Ensures that the best individuals of the current population survive to the next generation

The best individuals are copied to the new population without being mutated

The more individuals that survive to the next generation, the less the diversity of the new population

Page 36: Computational Intelligence: Second Edition

Hall of Fame

Similar to the list of best players of an arcade game

For each generation, the best individual is selected to beinserted into the hall of fame

Contains an archive of the best individuals found since the first generation

The hall of fame can be used as a parent pool for the crossover operator

Page 37: Computational Intelligence: Second Edition

Types of Operators

Reproduction: the process of producing offspring from selected parents by applying crossover and/or mutation operators

Crossover: the process of creating one or more new individuals through the combination of genetic material randomly selected from two or more parents

Mutation:

The process of randomly changing the values of genes in a chromosome
The main objective is to introduce new genetic material into the population, thereby increasing genetic diversity
Mutation probability and step sizes should be small
Proportional to the fitness of the individual?
Start with a large mutation probability, decreased over time?
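Two common concrete operators for bitstring chromosomes, shown here as illustrative examples (the slides do not prescribe specific operators):

```python
import random

def one_point_crossover(p1, p2):
    """Combine genetic material of two parents: cut both at one random
    point and exchange the tails (one-point crossover)."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(x, p_m=0.01):
    """Randomly change gene values: flip each bit with small probability p_m."""
    return [1 - g if random.random() < p_m else g for g in x]

c1, c2 = one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
```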

Page 38: Computational Intelligence: Second Edition

When to Stop

Limit the number of generations, or fitness evaluations

Stop when population has converged:

Terminate when no improvement is observed over a number of consecutive generations
Terminate when there is no change in the population
Terminate when an acceptable solution has been found
Terminate when the objective function slope is approximately zero

Page 39: Computational Intelligence: Second Edition

EC vs CO

The search process:

CO uses deterministic rules to move from one point in the search space to the next point
EC uses probabilistic transition rules
EC applies a parallel search of the search space, while CO uses a sequential search

Search surface information:

CO uses derivative information, usually first-order or second-order, of the search space to guide the path to the optimum
EC uses no derivative information, but fitness information
