Constrained and Unconstrained Optimization
Carlos Hurtado
Department of Economics, University of Illinois at Urbana-Champaign
Oct 10th, 2017
On the Agenda
1 Numerical Optimization
2 Minimization of Scalar Function
3 Golden Search
4 Newton’s Method
5 Polytope Method
6 Newton’s Method Reloaded
7 Quasi-Newton Methods
8 Non-linear Least-Squares
9 Constrained Optimization
Numerical Optimization
- In some economic problems, we would like to find the value that maximizes or minimizes a function.
- We are going to focus on the minimization problems:

  min_x f(x)   or   min_x f(x) s.t. x ∈ B

- Notice that minimization and maximization are equivalent, because we can maximize f(x) by minimizing −f(x).
- We want to solve this problem in a reasonable time.
- Most often, the CPU time is dominated by the cost of evaluating f(x).
- We would like to keep the number of evaluations of f(x) as small as possible.
- There are two types of objectives:
  - Finding a global minimum: the lowest possible value of the function over the range.
  - Finding a local minimum: the smallest value within a bounded neighborhood.
Minimization of Scalar Function

Bracketing Method
- We would like to find the minimum of a scalar function f(x), such that f: R → R.
- The bracketing method is a direct method that does not use curvature or local approximation.
- We start with a bracket: (a, b, c) s.t. a < b < c and f(a) > f(b) and f(c) > f(b).
- We search for the minimum by selecting a trial point in one of the intervals:
  - If c − b > b − a, take d = (b + c)/2.
  - Else, if c − b ≤ b − a, take d = (a + b)/2.
- If f(d) > f(b), there is a new bracket (d, b, c) or (a, b, d).
- If f(d) < f(b), there is a new bracket (a, d, c).
- Continue until the distance between the extremes of the bracket is small. A minimal sketch of this loop is given below.
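A minimal sketch of the loop just described, applied to y = x(x − 2)(x + 2)² from the later slides; the tolerance and the iteration cap are assumptions:

def bracket_min(f, a, b, c, tol=1e-8, max_iter=200):
    # Bracketing method: maintain a < b < c with f(b) < f(a) and f(b) < f(c)
    for _ in range(max_iter):
        if c - a <= tol:
            break
        # Trial point: midpoint of the larger sub-interval
        d = (b + c) / 2 if (c - b) > (b - a) else (a + b) / 2
        if f(d) < f(b):
            b = d                  # new bracket (a, d, c)
        elif d > b:
            c = d                  # new bracket (a, b, d)
        else:
            a = d                  # new bracket (d, b, c)
    return b

print(bracket_min(lambda x: x*(x - 2)*(x + 2)**2, 0.0, 1.0, 2.0))   # ~1.2808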
- We selected the new point using the midpoint between the extremes, but what is the best location for the new point d?

  [diagram: points ordered a < b < d < c on a line]

- One possibility is to minimize the size of the next search interval.
- The next search interval will be either from a to d or from b to c.
- The proportion of the left interval is

  w = (b − a)/(c − a)

- The proportion of the new interval is

  z = (d − b)/(c − a)
Golden Search
- The proportion of the new segment will be

  1 − w = (c − b)/(c − a)   or   w + z = (d − a)/(c − a)

- Moreover, if d is the new candidate to minimize the function,

  z/(1 − w) = [(d − b)/(c − a)] / [(c − b)/(c − a)] = (d − b)/(c − b)

- Ideally we will have

  z = 1 − 2w   and   z/(1 − w) = w

  (the first condition equalizes the lengths of the two possible new intervals, since it amounts to w + z = 1 − w; the second makes the new point divide the next interval in the same proportion as the current one)
- The previous equations imply w² − 3w + 1 = 0, or

  w = (3 − √5)/2 ≈ 0.38197

- In mathematics, the golden ratio is φ = (1 + √5)/2.
- This goes back to Pythagoras.
- Notice that 1 − 1/φ = (3 − √5)/2.
- The Golden Search algorithm uses the golden ratio to set the new point (using a weighted average), as in the sketch below.
- This reduces the bracketing by about 40% per iteration.
- The performance is independent of the function that is being minimized.
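A minimal sketch of the golden-ratio placement rule, assuming the same bracket-update loop as in the earlier sketch; the trial point always lands a fraction w into the larger sub-interval:

W = (3 - 5**0.5) / 2    # w = (3 - sqrt(5))/2, about 0.38197

def golden_trial(a, b, c):
    # Weighted average: place d a fraction W into the larger sub-interval
    if c - b > b - a:
        return b + W * (c - b)
    return b - W * (b - a)

Replacing the midpoint rule d = (b + c)/2 in the earlier bracketing sketch with d = golden_trial(a, b, c) gives golden search.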
- Sometimes the performance can be improved substantially when a local approximation is used.
- When we use a combination of local approximation and golden search we get a method called Brent's method.
- Let us suppose that we want to minimize y = x(x − 2)(x + 2)².
  [figure: plot of y = x(x − 2)(x + 2)² for x ∈ [−2, 2]]
- We can use the minimize_scalar function from the scipy.optimize module:

>>> def f(x):
...     return (x - 2) * x * (x + 2)**2
>>> from scipy.optimize import minimize_scalar
>>> opt_res = minimize_scalar(f)
>>> print(opt_res.x)
1.28077640403
>>> opt_res = minimize_scalar(f, method='golden')
>>> print(opt_res.x)
1.28077640147
>>> opt_res = minimize_scalar(f, bounds=(-3, -1), method='bounded')
>>> print(opt_res.x)
-2.0000002026
Newton’s Method
- Let us assume that the function f(x): R → R is infinitely differentiable.
- We would like to find x* such that f(x*) ≤ f(x) for all x ∈ R.
- Idea: use a Taylor approximation of the function f(x).
- The polynomial approximation of order two around a is:

  p(x) = f(a) + f′(a)(x − a) + (1/2) f″(a)(x − a)²

- To find an optimal value for p(x) we use the FOC:

  p′(x) = f′(a) + (x − a)f″(a) = 0

- Hence, x = a − f′(a)/f″(a).
- Newton’s method starts with a given x1.
- To compute the next candidate to minimize the function we use

  x_{n+1} = x_n − f′(x_n)/f″(x_n)

- Do this until

  |x_{n+1} − x_n| < ε   and   |f′(x_{n+1})| < ε

- Newton’s method is very fast (quadratic convergence).
- Theorem: near the minimizer x*,

  |x_{n+1} − x*| ≤ (|f‴(x*)| / (2|f″(x*)|)) |x_n − x*|²

- A minimal implementation is sketched below.
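A minimal sketch of this iteration, applied to y = x(x − 2)(x + 2)² from the earlier slides; the derivatives y′ = 4x³ + 6x² − 8x − 8 and y″ = 12x² + 12x − 8 are computed by hand, and the starting point and ε are assumptions:

def newton_min(fp, fpp, x, eps=1e-10, max_iter=100):
    # Iterate x <- x - f'(x)/f''(x) until both stopping criteria hold
    for _ in range(max_iter):
        x_new = x - fp(x) / fpp(x)
        if abs(x_new - x) < eps and abs(fp(x_new)) < eps:
            return x_new
        x = x_new
    return x

print(newton_min(lambda x: 4*x**3 + 6*x**2 - 8*x - 8,
                 lambda x: 12*x**2 + 12*x - 8, x=1.0))   # ~1.2808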
- A quick detour: root finding.
- Consider the problem of finding zeros of p(x).
- Assume that you know a point a where p(a) is positive and a point b where p(b) is negative.
- If p(x) is continuous between a and b, we could approximate it as:

  p(x) ≈ p(a) + (x − a)p′(a)

- The approximate zero is then:

  x = a − p(a)/p′(a)

- The idea is the same as before. Newton’s method also works for finding roots, as in the example below.
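This root-finding iteration is implemented in scipy as scipy.optimize.newton; a small usage example (the cubic is only an illustration):

>>> from scipy.optimize import newton
>>> newton(lambda x: x**3 - 2, x0=1.0, fprime=lambda x: 3*x**2)   # 2**(1/3)
1.2599210498948732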
- There are several issues with Newton’s method:
  - The iteration can land on a stationary point that is not a minimum.
  - The starting point can enter a cycle.
  - The derivative may not exist.
  - The derivative may be discontinuous.
- Newton’s method finds a local optimum, but not necessarily a global optimum.
Polytope Method
- The Polytope (a.k.a. Nelder-Mead) method is a direct method to find the solution of min_x f(x), where f: Rⁿ → R.
- We start with the points x1, x2 and x3, such that f(x1) ≥ f(x2) ≥ f(x3).
- Using the midpoint between x2 and x3, we reflect x1 to the point y1.
- Check if f(y1) < f(x1):
  - If true, you have a new polytope.
  - If not, try x2. If not, try x3.
- If nothing works, shrink the polytope toward x3.
- Stop when the size of the polytope is smaller than ε. A minimal sketch of this loop follows.
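A minimal sketch of the reflect/shrink loop just described (illustration only: the starting simplex, the quadratic test function and the tolerances are assumptions; for real work use scipy’s implementation shown below):

import numpy as np

def polytope_min(f, vertices, eps=1e-8, max_iter=5000):
    # Simplified polytope method: reflect the worst vertex through the
    # midpoint of the other two; if that fails, shrink toward the best.
    xs = [np.asarray(v, dtype=float) for v in vertices]
    for _ in range(max_iter):
        xs.sort(key=f, reverse=True)              # f(x1) >= f(x2) >= f(x3)
        worst, best = xs[0], xs[-1]
        if max(np.linalg.norm(v - best) for v in xs) < eps:
            break                                 # the polytope is small: stop
        mid = (xs[1] + xs[2]) / 2                 # midpoint of x2 and x3
        y = mid + (mid - worst)                   # reflect x1 to y1
        if f(y) < f(worst):
            xs[0] = y                             # accept the new polytope
        else:
            xs = [best + 0.5 * (v - best) for v in xs]   # shrink toward x3
    return xs[-1]

# Illustrative quadratic with minimum at (1, -2)
print(polytope_min(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                   [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))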
- Let us consider the following function:

  f(x0, x1) = (1 − x0)² + 100(x1 − x0²)²

- The function looks like:
  [figure: surface plot of y = f(x0, x1) for x0 ∈ [−2, 2], x1 ∈ [−1, 4]]
  [figure: contour view of f(x0, x1) near its minimum]
- In Python we can do:

>>> def f2(x):
...     return (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
>>> from scipy.optimize import fmin
>>> opt = fmin(func=f2, x0=[0, 0])
>>> print(opt)
[ 1.00000439  1.00001064]
Newton’s Method Reloaded
- What can we do if we want to use Newton’s method for a function f: Rⁿ → R?
- We can use a quadratic approximation at a′ = (a1, …, an):

  p(x) = f(a) + ∇f(a)′(x − a) + (1/2)(x − a)′H(a)(x − a)

  where x′ = (x1, …, xn).
- The gradient ∇f(x) is a multi-variable generalization of the derivative:

  ∇f(x)′ = (∂f(x)/∂x1, …, ∂f(x)/∂xn)
- The Hessian matrix H(x) is a square matrix of second-order partial derivatives that describes the local curvature of a function of many variables; its (i, j) entry is

  H(x)_ij = ∂²f(x) / (∂xi ∂xj),   i, j = 1, …, n

- The FOC is:

  ∇p(x) = ∇f(a) + H(a)(x − a) = 0

- We can solve this to get:

  x = a − H(a)⁻¹∇f(a)
- Following the same logic as in the one-dimensional case:

  x_{k+1} = x_k − H(x_k)⁻¹∇f(x_k)

- How do we compute H(x_k)⁻¹∇f(x_k)?
- Rather than inverting the Hessian, we can set s = H(x_k)⁻¹∇f(x_k) and solve the linear system

  H(x_k)s = ∇f(x_k)

- The search direction, s, is the solution of a system of equations (and we know how to solve that!). A sketch follows.
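A minimal sketch of this iteration for f(x0, x1) = (1 − x0)² + 100(x1 − x0²)² from the Polytope slides, with the gradient and Hessian computed by hand; the starting point is an assumption chosen near the minimum (pure Newton steps without a line search can misbehave where the Hessian is indefinite):

import numpy as np

grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0],                    200.0]])

x = np.array([1.2, 1.2])
for _ in range(20):
    s = np.linalg.solve(hess(x), grad(x))   # solve H(x_k) s = grad f(x_k)
    x = x - s                               # Newton step: x_{k+1} = x_k - s
print(x)                                    # approaches [1, 1]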
Quasi-Newton Methods
- For Newton’s method we need the Hessian of the function.
- If the Hessian is unavailable, the "full" Newton’s method cannot be used.
- Any method that replaces the Hessian with an approximation is a quasi-Newton method.
- One advantage of quasi-Newton methods is that the Hessian matrix does not need to be inverted.
- Newton’s method requires the Hessian to be inverted, which is typically implemented by solving a system of equations.
- Quasi-Newton methods usually generate an estimate of the inverse directly.
- In the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, the Hessian matrix is approximated using updates specified by gradient evaluations (or approximate gradient evaluations).
- In Python:

>>> import numpy as np
>>> from scipy.optimize import fmin_bfgs
>>> def f(x):
...     return (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
>>> opt = fmin_bfgs(f, x0=[0.5, 0.5])

- Using the gradient we can improve the approximation:

>>> def gradient(x):
...     return np.array((-2*(1 - x[0]) - 100*4*x[0]*(x[1] - x[0]**2),
...                      200*(x[1] - x[0]**2)))
>>> opt2 = fmin_bfgs(f, x0=[10, 10], fprime=gradient)
- One of the methods that requires the fewest function calls (and is therefore very fast) is the Newton-Conjugate-Gradient (NCG) method.
- The method uses a conjugate gradient algorithm to (approximately) invert the local Hessian.
- If the Hessian is positive definite, then the local minimum of the quadratic approximation can be found by setting the gradient of the quadratic form to zero.
- In Python:

>>> from scipy.optimize import fmin_ncg
>>> opt3 = fmin_ncg(f, x0=[10, 10], fprime=gradient)
Non-linear Least-Squares
- Suppose it is desired to fit a set of data {x_i, y_i} to a model y = f(x; p), where p is a vector of parameters for the model that need to be found.
- A common method for determining which parameter vector gives the best fit to the data is to minimize the sum of squared errors. (Why?)
- The error is usually defined for each observed data point as:

  e_i(y_i, x_i; p) = ‖y_i − f(x_i; p)‖

- The sum of the squares of the errors is:

  S(p; x, y) = Σ_{i=1}^N e_i²(y_i, x_i; p)
- Suppose that we model some population data observed at several times t_i:

  y_i = f(t_i; (A, b)) = A e^{b t_i}

- The parameters A and b are unknown to the economist.
- We would like to minimize the square of the error to approximate the data; a sketch with scipy’s curve_fit follows.
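A minimal sketch using scipy’s curve_fit, which minimizes S(p) over p = (A, b); the synthetic data (true values A = 2, b = 0.25 and the noise level) are assumptions for illustration:

import numpy as np
from scipy.optimize import curve_fit

def model(t, A, b):
    return A * np.exp(b * t)

# Synthetic "population" data (assumed values, for illustration only)
t = np.linspace(0, 10, 25)
rng = np.random.default_rng(0)
y = model(t, 2.0, 0.25) * (1 + 0.05 * rng.standard_normal(t.size))

# Least-squares fit of (A, b), starting from the guess p0
(A_hat, b_hat), _ = curve_fit(model, t, y, p0=(1.0, 0.1))
print(A_hat, b_hat)   # close to the true values (2.0, 0.25)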
Constrained Optimization
- Let us find the minimum of a scalar function subject to constraints:

  min_{x∈Rⁿ} f(x) s.t. g(x) = a and h(x) ≥ b

- Here we have g: Rⁿ → Rᵐ and h: Rⁿ → Rᵏ.
- Notice that we can re-write the problem as an unconstrained version:

  min_{x∈Rⁿ} f(x) + (1/2)p [ Σ_{i=1}^m (g_i(x) − a_i)² + Σ_{j=1}^k (max{0, b_j − h_j(x)})² ]

  where the max terms penalize violations of h_j(x) ≥ b_j.

- For a "very large" value of p, the constraints need to be satisfied (penalty method). A sketch follows.
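A minimal sketch of the penalty method for an equality constraint, assuming an illustrative objective and the single constraint g(x) = x0 + x1 with a = 1 (all values here are assumptions, not from the slides); p is increased gradually and each solve is warm-started at the previous solution:

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2     # illustrative objective
g = lambda x: x[0] + x[1]                       # constraint function g(x)
a = 1.0                                         # constraint level: g(x) = a

def penalized(x, p):
    # Unconstrained penalty version of: min f(x) s.t. g(x) = a
    return f(x) + 0.5 * p * (g(x) - a)**2

x = np.array([0.0, 0.0])
for p in [1e0, 1e1, 1e2, 1e3, 1e4]:
    x = minimize(lambda z: penalized(z, p), x).x   # warm start at previous x
print(x)   # approaches the constrained minimizer (0, 1) as p grows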
- If the objective function is quadratic, the optimization problem looks like

  min_{x∈Rⁿ} q(x) = (1/2)x′Gx + x′c s.t. g(x) = a and h(x) ≥ b

- The structure of this type of problem can be efficiently exploited.
- This forms the basis for Augmented Lagrangian and Sequential Quadratic Programming methods.
- The Augmented Lagrangian methods use a mix of the Lagrangian with the penalty method.
- The Sequential Quadratic Programming Algorithms (SQPA) solve the problem by using quadratic approximations of the Lagrangian function.
- The SQPA is the analogue of Newton’s method for the case of constraints.
- How does the algorithm solve the problem? It is possible with extensions of the simplex method, which we will not cover.
- The previous extensions can be solved with the BFGS algorithm.
- Let us consider the utility maximization problem of an agent with a constant elasticity of substitution (CES) utility function:

  U(x, y) = (αx^ρ + (1 − α)y^ρ)^{1/ρ}

- Denote by p_x and p_y the prices of goods x and y respectively.
- The constrained optimization problem for the consumer is:

  max_{x,y} U(x, y; ρ, α) subject to x ≥ 0, y ≥ 0 and p_x x + p_y y = M

- A sketch of how to solve this numerically is given below.
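A minimal sketch using scipy’s SLSQP routine (an SQP implementation); the parameter values ρ, α, p_x, p_y and M are assumptions for illustration:

import numpy as np
from scipy.optimize import minimize

rho, alpha, px, py, M = 0.5, 0.3, 1.0, 2.0, 10.0   # illustrative values

def neg_utility(z):
    # Minimize -U(x, y) to maximize the CES utility
    x, y = z
    return -(alpha * x**rho + (1 - alpha) * y**rho)**(1.0 / rho)

# Budget constraint px*x + py*y = M as an equality constraint
budget = {'type': 'eq', 'fun': lambda z: M - px * z[0] - py * z[1]}

res = minimize(neg_utility, x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)],      # x >= 0, y >= 0
               constraints=[budget], method='SLSQP')
print(res.x)   # optimal bundle (x*, y*)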