Outline
• Unconstrained Optimization
• Ackley's Function
• GA Approach for Ackley's Function
• Nonlinear Programming
• Penalty Function
• Genetic Operators
• Numerical Examples
Optimization plays a central role in operations
research/management science and engineering design
problems.
It deals with problems of minimizing or maximizing a function
with several variables usually subject to equality and/or
inequality constraints.
Optimization techniques have had an increasingly great
impact on our society.
Both the number and variety of their applications continue to
grow rapidly, and no slowdown is in sight.
Unconstrained Optimization
However, many engineering design problems are very
complex in nature and difficult to solve with conventional
optimization techniques.
In recent years, genetic algorithms have received
considerable attention regarding their potential as a novel
optimization technique.
In this lecture we will discuss the applications of genetic
algorithms to unconstrained optimization, nonlinear
programming, stochastic programming, goal programming,
and interval programming.
Unconstrained Optimization
Unconstrained optimization deals with the problem of
minimizing or maximizing a function in the absence of any
restrictions.
In general, an unconstrained optimization problem can be
mathematically represented as follows.
min f(x)
subject to x ∈ Ω
where f is a real-valued function and Ω, the feasible set, is a subset of E^n.
Unconstrained Optimization
A point x* ∈ Ω is said to be a local minimum of f over Ω if there is an ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω within a distance ε of x*.
A point x* ∈ Ω is said to be a global minimum of f over Ω if f(x) ≥ f(x*) for all x ∈ Ω.
The necessary conditions for a local minimum are based on the differential calculus of f, that is, the gradient of f, defined as follows:
∇f(x) = [∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn]^T
Unconstrained Optimization
The Hessian of f at x, denoted ∇²f(x) or F(x), is the n × n matrix of second partial derivatives, with entries [∇²f(x)]_ij = ∂²f(x)/∂xi∂xj.
Even though most practical optimization problems have side restrictions that must be satisfied, the study of techniques for unconstrained optimization provides a basis for further study.
In this lecture, we will discuss how to solve the
unconstrained optimization problem with genetic algorithms.
Unconstrained Optimization
Ackley's function is a continuous and multimodal test
function obtained by modulating an exponential function with
a cosine wave of moderate amplitude.
Its topology, as shown in Figure 1, is characterized by an
almost flat outer region and a central hole or peak where
modulations by the cosine wave become more and more
influential. Ackley's function is as follows.
Ackley's Function
Ackley's function is as follows:
f(x1, x2) = -c1·exp(-c2·sqrt((x1^2 + x2^2)/2)) - exp((cos(c3·x1) + cos(c3·x2))/2) + c1 + e
where c1 = 20, c2 = 0.2, c3 = 2π, and e ≈ 2.71828.
The known optimal solution is (x1*, x2*) = (0, 0) with f(x1*, x2*) = 0.
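For a quick check of these values, here is a direct Python implementation of the two-variable form (a sketch for illustration; the lecture's own code is Matlab):

```python
import math

def ackley(x1, x2, c1=20.0, c2=0.2, c3=2.0 * math.pi):
    """Two-variable Ackley function with the lecture's constants."""
    term1 = -c1 * math.exp(-c2 * math.sqrt((x1**2 + x2**2) / 2.0))
    term2 = -math.exp((math.cos(c3 * x1) + math.cos(c3 * x2)) / 2.0)
    return term1 + term2 + c1 + math.e

print(ackley(0.0, 0.0))  # approximately 0 at the known optimum
```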
Ackley's Function
Fig. 1. Plot of Ackley’s Function
Ackley's Function
Fig. 2. Contour plot of Ackley’s Function
As Ackley pointed out, this function causes moderate complications for the search: a strictly local optimization algorithm that performs hill climbing would surely get trapped in a local optimum, whereas a search strategy that scans a slightly bigger neighborhood would be able to cross intervening valleys.
Therefore, Ackley's function provides one of the reasonable test cases for genetic search.
Ackley's Function
To minimize Ackley's function, we simply use the
following implementation of the genetic algorithm.
1. Real number encoding
2. Arithmetic crossover
3. Nonuniform mutation
4. Top pop_size selection
Minimization of Ackley's Function
Arithmetic crossover is defined as the combination of two chromosomes v1 and v2 as follows:
v1_new = λ·v1 + (1 − λ)·v2
v2_new = λ·v2 + (1 − λ)·v1
where λ is a uniformly distributed random number between 0 and 1.
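As a sketch in Python (names are illustrative; drawing one λ per crossover and applying it to all genes is a common convention):

```python
import random

def arithmetic_crossover(v1, v2):
    """Arithmetic crossover: convex combinations of two real-coded parents."""
    lam = random.random()  # uniformly distributed on [0, 1)
    child1 = [lam * a + (1 - lam) * b for a, b in zip(v1, v2)]
    child2 = [lam * b + (1 - lam) * a for a, b in zip(v1, v2)]
    return child1, child2
```

Because each offspring gene is a convex combination of the parents' genes, the offspring always stay inside the box spanned by the two parents.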
Minimization of Ackley's Function
The nonuniform mutation is given as follows: for a given parent v, if the element xk is selected for mutation, the resulting offspring is v_new = [x1, x2, …, xk_new, …, xn], where xk_new is randomly selected from two possible choices:
xk_new = xk + Δ(t, xk_U − xk)
xk_new = xk − Δ(t, xk − xk_L)
where xk_U and xk_L are the upper and lower bounds for xk.
Minimization of Ackley's Function
The function Δ(t, y) returns a value in the range [0, y] such that the value of Δ(t, y) approaches 0 as t increases (t is the generation number), as follows:
Δ(t, y) = y × r × (1 − t/T)^b
where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.
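The mutation operator and its Δ function can be sketched in Python as follows (b = 2 is an assumed setting, and the choice between the two directions is taken as an even coin flip):

```python
import random

def delta(t, y, T, b=2.0):
    """Delta(t, y) = y * r * (1 - t/T)**b; shrinks toward 0 as t approaches T."""
    r = random.random()
    return y * r * (1.0 - t / T) ** b

def nonuniform_mutate(v, k, t, T, lower, upper, b=2.0):
    """Mutate gene k of chromosome v, keeping it inside [lower, upper]."""
    x = v[k]
    if random.random() < 0.5:
        x_new = x + delta(t, upper - x, T, b)   # move toward the upper bound
    else:
        x_new = x - delta(t, x - lower, T, b)   # move toward the lower bound
    child = list(v)
    child[k] = x_new
    return child
```

Note that at t = T the step size is exactly zero, so late-generation mutations make only fine local adjustments.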
Minimization of Ackley's Function
Top pop_size selection produces the next generation by selecting the best pop_size chromosomes from among the parents and offspring.
For this case, we can simply use the objective values as fitness values and sort the chromosomes according to these values.
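In Python, this selection step is just a sort (minimization assumed, so lower objective values are fitter; the function name is illustrative):

```python
def top_popsize_selection(parents, offspring, objective, pop_size):
    """Keep the best pop_size chromosomes among parents and offspring."""
    pool = parents + offspring
    pool.sort(key=objective)  # ascending: best (lowest objective) first
    return pool[:pop_size]
```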
Minimization of Ackley's Function
The parameters of the genetic algorithm are set
as follows:
pop_size: 10 (population size)
max_gen: 1000 (maximal generation number)
pm: 0.1 (mutation probability)
pc: 0.3 (crossover probability)
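Putting the four components together, the whole procedure can be sketched as a self-contained Python program. This is a sketch, not the lecture's original code: the random pairing of mates, the fixed seed, the bounds [-5, 5], and b = 2 are illustrative assumptions; the parameter values follow the slide.

```python
import math
import random

def ackley(x):
    """Two-variable Ackley function (c1 = 20, c2 = 0.2, c3 = 2*pi)."""
    s = math.sqrt((x[0]**2 + x[1]**2) / 2.0)
    c = (math.cos(2*math.pi*x[0]) + math.cos(2*math.pi*x[1])) / 2.0
    return -20.0*math.exp(-0.2*s) - math.exp(c) + 20.0 + math.e

def run_ga(pop_size=10, max_gen=1000, pm=0.1, pc=0.3, lo=-5.0, hi=5.0, b=2.0):
    random.seed(1)  # fixed seed so runs are reproducible (an assumption)
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for t in range(max_gen):
        # arithmetic crossover on randomly chosen mates
        mates = [v for v in pop if random.random() < pc]
        offspring = []
        for i in range(0, len(mates) - 1, 2):
            lam = random.random()
            v1, v2 = mates[i], mates[i + 1]
            offspring.append([lam*g1 + (1-lam)*g2 for g1, g2 in zip(v1, v2)])
            offspring.append([lam*g2 + (1-lam)*g1 for g1, g2 in zip(v1, v2)])
        # nonuniform mutation: step size shrinks as t approaches max_gen
        for v in offspring:
            for k in range(2):
                if random.random() < pm:
                    shrink = random.random() * (1.0 - t/max_gen)**b
                    if random.random() < 0.5:
                        v[k] += (hi - v[k]) * shrink
                    else:
                        v[k] -= (v[k] - lo) * shrink
        # top pop_size selection among parents and offspring
        pop = sorted(pop + offspring, key=ackley)[:pop_size]
    return pop[0]

best = run_ga()
print(best, ackley(best))
```

Because the selection is elitist (the best chromosomes always survive), the best objective value never worsens from one generation to the next.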
Minimization of Ackley's Function
Table 1. Initial Population of 10 Random Chromosomes
Minimization of Ackley's Function
Table 2. The corresponding fitness function values
This means that the chromosomes v2 , v6 , v8,
and v9 were selected for crossover. The
offspring were generated as follows:
Minimization of Ackley's Function
Mutation is then performed. Because there are a total of 2 × 10 = 20 genes in the whole population, we generate a sequence of random numbers rk (k = 1, …, 20) from the range [0, 1], and a gene is mutated whenever the corresponding rk is less than pm.
The corresponding gene to be mutated is:
Minimization of Ackley's Function
bit_pos: 11, chrom_num: 6, variable: x1, random_num: 0.081393
The selected gene of chromosome v6 is mutated from -4.068506 to -0.959655.
The fitness value for each offspring is as
follows:
Minimization of Ackley's Function
The best 10 chromosomes among parents and
offspring form a new population as follows:
Minimization of Ackley's Function
The corresponding fitness values of variables [x1, x2] are
as follows:
Minimization of Ackley's Function
We have now completed one iteration of the genetic procedure (one generation). At the 1000th generation, we have the following chromosomes:
The fitness value is f(x1*, x2*) = -0.005456.
Ackley's Function
Fig. 3. Contour plot of Ackley’s Function
Contour Plot of the Cost Function
Fig. 4. Variation of fitness value with generation. X_opt = (0.6196×10^-4, -0.1231×10^-4), F_min = 1.7879×10^-4.
Contour Plot of the Cost Function
Fig. 5. Scattering of the initial population.
Contour Plot of the Cost Function
Fig. 6. Scattering of the population at the 50th generation.
Contour Plot of the Cost Function
Fig. 7. Scattering of the population at the 100th generation.
Contour Plot of the Cost Function
Fig. 8. Scattering of the population at the 150th generation.
Contour Plot of the Cost Function
Fig. 9. Scattering of the population at the 200th generation.
Nonlinear Programming
Nonlinear programming (or constrained optimization) deals with the problem of optimizing an objective function in the presence of equality and/or inequality constraints.
Nonlinear programming is an extremely important tool used in almost every area of engineering, operations research, and mathematics, because many practical problems cannot be successfully modeled as linear programs.
Nonlinear Programming
The general nonlinear programming problem may be written as follows:
min f(x)
subject to gi(x) ≤ 0, i = 1, 2, …, m
hi(x) = 0, i = 1, 2, …, l
x ∈ X
where f, gi, and hi are real-valued functions defined on E^n, X is a subset of E^n, and x is an n-dimensional real vector with components x1, x2, …, xn.
Nonlinear Programming
The above problem must be solved for the values of the variables x1, x2, …, xn that satisfy the restrictions while minimizing the function f.
The function f is usually called the objective function
or criterion function.
Each of the constraints gi(x) ≤ 0 is called an inequality constraint.
Each of the constraints hi(x) = 0 is called an equality
constraint.
Nonlinear Programming
The set X might typically include lower and upper bounds on the variables; these are usually called domain constraints.
A vector x ∈ X satisfying all the constraints is called a feasible solution to the problem.
The collection of all such solutions forms the feasible region. The nonlinear programming problem then is to find a feasible point x̄ such that f(x) ≥ f(x̄) for each feasible point x.
Nonlinear Programming
Such a point is called an optimal solution. Unlike linear
programming problems, the conventional solution
methods for nonlinear programming are very complex
and not very efficient.
In the past few years, there has been a growing effort to apply genetic algorithms to the nonlinear programming problem. These lecture notes show how to solve the nonlinear programming problem with genetic algorithms in general.
Nonlinear Programming
The central problem in applying genetic algorithms to constrained optimization is how to handle constraints, because the genetic operators used to manipulate the chromosomes often yield infeasible offspring.
Recently, several techniques have been proposed to handle constraints with genetic algorithms.
Michalewicz has published a very good survey on this problem.
Nonlinear Programming
The existing techniques can be roughly classified as
follows:
Rejecting strategy
Repairing strategy
Modifying genetic operators strategy
Penalizing strategy
Each of these strategies has advantages and disadvantages.
Nonlinear Programming
Rejecting strategy discards all infeasible
chromosomes created throughout the evolutionary
process.
This is a popular option in many genetic algorithms.
The method may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space.
However, such an approach has serious limitations.
Rejecting Strategy
For example, for many constrained optimization
problems where the initial population consists of
infeasible chromosomes only, it might be
essential to improve them.
Moreover, quite often the system can reach the
optimum more easily if it is possible to "cross" an
infeasible region (especially in nonconvex
feasible search spaces).
Repairing Strategy
Repairing a chromosome involves taking an infeasible
chromosome and generating a feasible one through
some repairing procedure.
For many combinatorial optimization problems, it is
relatively easy to create a repairing procedure.
An empirical test of genetic algorithm performance on a diverse set of constrained combinatorial optimization problems has shown that the repair strategy did indeed surpass other strategies in both speed and performance.
Repairing Strategy
Repairing strategy depends on the existence of a
deterministic repair procedure to convert an infeasible
offspring into a feasible one.
The weakness of the method is in its problem
dependence.
For each particular problem, a specific repair algorithm
should be designed.
For some problems, the process of repairing infeasible
chromosomes might be as complex as solving the
original problem.
Modifying Genetic Operator Strategy
One reasonable approach for dealing with the issue of
feasibility is to develop problem-specific representation and
specialized genetic operators to maintain the feasibility of
chromosomes.
Such systems are often much more reliable than other genetic algorithms based on the penalty approach, and problem-specific representations with specialized operators have been used to build very successful genetic algorithms in many areas. However, the genetic search of this approach is confined within the feasible region.
Modifying Genetic Operator Strategy
The strategies above have the advantage that they never generate infeasible solutions, but the disadvantage that they consider no points outside the feasible regions.
For highly constrained problems, infeasible solutions may take up a relatively large portion of the population.
In such cases, feasible solutions may be difficult to find if we confine the genetic search to feasible regions.
Penalty Strategy
It has been suggested that constraint management
techniques allowing movement through infeasible
regions of the search space tend to yield more rapid
optimization and produce better final solutions than
do approaches limiting search trajectories only to
feasible regions of the search space.
The penalizing strategy is such a technique, proposed to allow infeasible solutions to be considered in the genetic search.
Penalty Strategy
The penalty technique is perhaps the most common
technique used to handle infeasible solutions in the
genetic algorithms for constrained optimization
problems.
In essence, this technique transforms the
constrained problem into an unconstrained problem
by penalizing infeasible solutions, in which a penalty
term is added to the objective function for any
violation of the constraints.
Penalty Function
The basic idea of the penalty technique is borrowed
from conventional optimization.
It is a natural question: is there any difference between using the penalty method in conventional optimization and in genetic algorithms?
In conventional optimization, the penalty technique is
used to generate a sequence of infeasible points
whose limit is an optimal solution to the original
problem.
Penalty Function
The major concern is how to choose a proper
value of penalty so as to speed convergence and
avoid premature termination.
In genetic algorithms, the penalty technique is
used to keep a certain amount of infeasible
solutions in each generation so as to enforce
genetic search towards an optimal solution from
both sides of feasible and infeasible regions.
Penalty Function
We do not simply reject the infeasible solutions in each generation, because some may provide much more useful information about the optimal solution than some feasible solutions.
The major concern is how to determine the penalty term so as to strike a balance between information preservation (keeping some infeasible solutions) and selective pressure (rejecting some infeasible solutions), and to avoid both under-penalty and over-penalty.
Penalty Function
In general, the solution space contains two parts: a feasible area and an infeasible area.
We make no assumptions about these subspaces; in particular, they need be neither convex nor connected, as shown in Figure 10.
Handling infeasible chromosomes is far from trivial.
From the figure we can see that infeasible solution b is much nearer to the optimum a than infeasible solution d and feasible solution c.
Penalty Function
Fig. 10. Solution space: feasible area and infeasible area.
We may hope to give less penalty to b than to d, even though it is a little farther from the feasible area than d.
We may also believe that b contains much more information about the optimum than c, even though it is infeasible.
However, we have no a priori knowledge about the optimum, so generally it is very hard to judge which solution is better than the others.
Penalty Function
The main issue of the penalty strategy is how to design a penalty function p(x) that can effectively guide the genetic search toward the promising area of the solution space.
The relationship between an infeasible chromosome and the feasible part of the search space plays a significant role in penalizing infeasible chromosomes: the penalty value corresponds to the "amount" of its infeasibility under some measurement.
There is no general guideline for designing penalty functions, and constructing an efficient penalty function is quite problem-dependent.
Penalty Function
Penalty techniques transform the constrained problem into an
unconstrained problem by penalizing infeasible solutions.
In general, there are two possible ways to construct the
evaluation function with penalty term.
1) One is to take the addition form expressed as follows:
𝑒𝑣𝑎𝑙(𝑥) = 𝑓(𝑥) + 𝑝(𝑥)
where 𝑥 represents a chromosome, 𝑓(𝑥) is the objective
function of problem, and 𝑝(𝑥) is the penalty term.
Evaluation Function with Penalty Term
For maximization problems, we usually require that:
p(x) = 0 if x is feasible
p(x) < 0 otherwise
Let |p(x)|_max and |f(x)|_min be the maximum of |p(x)| and the minimum of |f(x)| among the infeasible solutions in the current population, respectively. We also require that
|p(x)|_max ≤ |f(x)|_min
to avoid negative fitness values.
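For a minimization problem, the addition form can be sketched in Python as follows (the quadratic violation measure and the weight rho are common choices for illustration, not prescribed by the slides):

```python
def eval_with_penalty(f, constraints, x, rho=10.0):
    """Addition form eval(x) = f(x) + p(x) for minimization.

    constraints: list of functions g, with x feasible when g(x) <= 0.
    p(x) = 0 for feasible x and p(x) > 0 otherwise; here p is a quadratic
    measure of the total constraint violation, weighted by rho.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation
```

For example, with f(x) = x0^2 and the constraint x0 ≥ 1 written as g(x) = 1 − x0 ≤ 0, a feasible point pays no penalty while an infeasible one is penalized in proportion to its squared violation.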
Evaluation Function with Penalty Term
For minimization problems, we usually require that:
𝑝 𝑥 = 0 if 𝑥 is feasible
𝑝 𝑥 > 0 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒
The second way is to take the multiplication form
expressed as follows:
Evaluation Function with Penalty Term
𝑒𝑣𝑎𝑙(𝑥) = 𝑓(𝑥) · 𝑝(𝑥)
In this case, for maximization problems we require that:
𝑝 𝑥 = 1 if 𝑥 is feasible
0≤ 𝑝 𝑥 ≤ 1 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒
and for minimization problems we require that
Evaluation Function with Penalty Term
𝑝 𝑥 = 1 if 𝑥 is feasible
𝑝 𝑥 > 1 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒
Note that for minimization problems, the fitter chromosome has the lower value of eval(x).
For some selection methods, it is required to transform the objective values into fitness values in such a way that the fitter one has the larger fitness value.
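One simple way (among several) to make such a transformation is to measure each chromosome against the worst eval value in the current population; a Python sketch:

```python
def to_fitness(evals):
    """Map minimization eval values to nonnegative fitness values so that
    the fitter (lower-eval) chromosome receives the larger fitness."""
    worst = max(evals)
    return [worst - e for e in evals]
```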
Example
Find the optimal value of the following constrained function:
max z = 5 − (x − 2)^2 − 2(y − 1)^2
subject to x + 4y = 3
0 ≤ x, y ≤ 5
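Before turning to the GA, this small example can be solved analytically by substituting the equality constraint x = 3 − 4y into the objective: z(y) = 5 − (1 − 4y)^2 − 2(y − 1)^2, and dz/dy = 12 − 36y = 0 gives y* = 1/3, x* = 5/3, z* = 4. A quick Python check (an aside; the lecture's own code below is Matlab):

```python
def z(x, y):
    """Objective of the example: z = 5 - (x-2)^2 - 2(y-1)^2."""
    return 5 - (x - 2)**2 - 2*(y - 1)**2

# optimum from the substitution: y* = 1/3, x* = 3 - 4*(1/3) = 5/3
x_star, y_star = 5.0/3.0, 1.0/3.0
print(z(x_star, y_star))  # approximately 4.0

# brute-force scan along the constraint x + 4y = 3 with 0 <= x, y <= 5
best = max(z(3 - 4*y, y) for y in [i/1000.0 for i in range(751)])
print(best)
```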
Constrained Optimization (Matlab Code)
clear; clc;
% Step 1: Initialization
a = 0; b = 10; n = 2; rh = 0.6667;
G = 100; pm = 0.001; pc = 0.6; N = 1000;
fmin = []; fave = []; fmax = []; maxfit = 0;
x = rand(N, n);
for k = 1:n
    % convert each chromosome to a real number in the range from a to b
    x(:,k) = linmap(x(:,k), a, b);
end
Constrained Optimization (Matlab Code)
for g = 1:G
    fprintf('g:%.0f\n', g);
    % Step 2: Selection
    f = fitval3(x(:,1), x(:,2), rh);
    s = selpop(x, f);
    % Step 3: Crossover
    c = artxover(s, pc);
    % Step 4: Mutation
    x = pertmutate(c, pm, a, b);
    [maxfit x] = elit(x(:,1), x(:,2), maxfit, rh);
    f = fitval3(x(:,1), x(:,2), rh);
    fmin = [fmin maxfit];
    fave = [fave mean(f)];
    fmax = [fmax max(f)];
end  % end the generation loop
Constrained Optimization (Matlab Code)
g = 1:G;
plot(g, fmax, 'r', g, fave, 'b');
xlabel('Generation'); ylabel('Fitness Value');
axis([1 100 0 2.1])
legend('Max', 'Ave', 'location', 'best'); legend boxoff;
f = fun3(x(:,1), x(:,2));
[fmx ind] = max(f);
optx = x(ind(1),:)
yoptx = fun3(optx(:,1), optx(:,2))
Constrained Optimization (Matlab Code)
function f = fitval3(x, y, rh)
f = []; n = length(x);
z = fun3(x, y);
h = x + 4*y;   % left-hand side of the equality constraint
bh = 3;        % right-hand side of the equality constraint
for k = 1:n
    if (h(k) ~= bh)
        % penalty proportional to the amount of constraint violation
        f(k) = z(k) - rh*abs(h(k) - bh);
    else
        f(k) = z(k);
    end
end
Constrained Optimization (Matlab Code)
function z = fun3(x, y)
% objective function: z = 5 - (x-2)^2 - 2*(y-1)^2
z = 5 - (x-2).^2 - 2*(y-1).^2;
Convergence of Constrained Optimization