
CS 450: Numerical Analysis¹
Numerical Optimization

University of Illinois at Urbana-Champaign

¹These slides have been drafted by Edgar Solomonik as lecture templates and supplementary material for the book “Scientific Computing: An Introductory Survey” by Michael T. Heath (slides).


Numerical Optimization

• Our focus will be on continuous rather than combinatorial optimization:

  min_x f(x)  subject to  g(x) = 0  and  h(x) ≤ 0

• We consider linear, quadratic, and general nonlinear optimization problems:


Local Minima and Convexity

• Without knowledge of the analytical form of the function, numerical optimization methods at best achieve convergence to a local rather than a global minimum:

• A set is convex if it contains the entire line segment between any two of its points, while a function is (strictly) convex if its (unique) local minimum is always a global minimum:


Existence of Local Minima

• Level sets are all points for which f has a given value; sublevel sets are all points for which the value of f is less than a given value:

• If there exists a closed and bounded sublevel set in the domain of feasible points, then f has a global minimum in that set:


Optimality Conditions

• If x is an interior point of the feasible domain and is a local minimum, then

  ∇f(x) = [ ∂f/∂x1(x)  · · ·  ∂f/∂xn(x) ]ᵀ = 0 :

• Critical points x satisfy ∇f(x) = 0 and can be minima, maxima, or saddle points:


Hessian Matrix

• To ascertain whether a critical point x, for which ∇f(x) = 0, is a local minimum, consider the Hessian matrix:

• If x∗ is a minimum of f, then Hf(x∗) is positive semi-definite:
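Recall that the Hessian collects second partial derivatives, (Hf(x))ij = ∂²f/∂xi∂xj(x). As a small numerical illustration (not part of the slides), the sign pattern of the Hessian's eigenvalues classifies a critical point:

```python
import numpy as np

# Illustrative example: f(x, y) = x**2 - y**2 has a critical point at the origin.
def hessian(x):
    return np.array([[2.0, 0.0],
                     [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(hessian(np.zeros(2)))   # eigenvalues of the symmetric Hessian
if np.all(eigvals > 0):
    print("positive definite Hessian: local minimum")
elif np.all(eigvals < 0):
    print("negative definite Hessian: local maximum")
else:
    print("indefinite Hessian: saddle point")        # this case is printed for the example above
```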


Optimality on Feasible Region Border

• Given an equality constraint g(x) = 0, it is no longer necessarily the case that ∇f(x∗) = 0. Instead, it may be that directions in which f decreases lead to points outside the feasible region:

  ∃λ ∈ Rᵐ,  −∇f(x∗) = J_gᵀ(x∗) λ

  (λ has one component per equality constraint).

• Such constrained minima are critical points of the Lagrangian function L(x, λ) = f(x) + λᵀg(x), so they satisfy:

  ∇L(x∗, λ) = [ ∇f(x∗) + J_gᵀ(x∗) λ ]
              [ g(x∗)               ] = 0


Sensitivity and Conditioning

• The condition number of solving a nonlinear equation is 1/f′(x∗); however, for a minimizer x∗ we have f′(x∗) = 0, so the conditioning of optimization is inherently bad:

• To analyze worst-case error, consider how far we have to move from a minimizer x∗ to perturb the function value by ε:
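A standard back-of-the-envelope estimate: Taylor expansion about the minimizer gives f(x) ≈ f(x∗) + ½ f″(x∗)(x − x∗)², so perturbing the function value by ε moves the computed minimizer by about

  |x − x∗| ≈ sqrt(2ε / f″(x∗)),

i.e., the minimizer is determined only to about the square root of the accuracy to which f can be evaluated (roughly half of the available significant digits).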


Golden Section Search

• Given a bracket [a, b] containing a unique minimum (f is unimodal on the interval), golden section search evaluates f(x1), f(x2) at points a < x1 < x2 < b and discards the subinterval [a, x1] or [x2, b]:

• Since one interior point remains in the retained interval, golden section search selects x1 and x2 so that one of them can be effectively reused in the next iteration:

Demo: Golden Section Proportions
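A minimal Python sketch of golden section search (illustrative only; function names and tolerances are our own choices, not the course demo code):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden section search."""
    tau = (math.sqrt(5) - 1) / 2          # ~0.618, the golden ratio conjugate
    x1 = a + (1 - tau) * (b - a)
    x2 = a + tau * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + tau * (b - a)
            f2 = f(x2)
        else:                              # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (1 - tau) * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: the minimum of f(x) = x**2 - 2*x is at x = 1
print(golden_section_search(lambda x: x**2 - 2*x, 0.0, 3.0))
```

Because x1 and x2 divide the interval in the golden ratio, exactly one new function evaluation is needed per iteration.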


Newton’s Method for Optimization

• At each iteration, approximate the function by a quadratic and find the minimum of that quadratic:

• The new approximate guess is given by x_{k+1} − x_k = −f′(x_k)/f″(x_k):

Demo: Newton’s Method in 1D
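A minimal sketch of the 1D iteration, assuming f′ and f″ can be evaluated (names and defaults are illustrative):

```python
def newton_1d(df, d2f, x0, tol=1e-10, maxiter=50):
    """Find a critical point of f by applying Newton's method to f'(x) = 0."""
    x = x0
    for _ in range(maxiter):
        step = df(x) / d2f(x)      # Newton correction -f'(x)/f''(x), up to sign
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x**4 - 3*x has f'(x) = 4x**3 - 3 and f''(x) = 12x**2,
# so the minimizer is (3/4)**(1/3) ~= 0.9086
print(newton_1d(lambda x: 4*x**3 - 3, lambda x: 12*x**2, 1.0))
```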


Successive Parabolic Interpolation

• Interpolate f with a quadratic function at each step and find its minimum:

• The convergence rate of the resulting method is roughly 1.324:
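An unsafeguarded sketch of the iteration (illustrative only): the next point is the vertex of the parabola interpolating the three current points, and the oldest point is discarded each step.

```python
import math

def parabolic_interpolation(f, x0, x1, x2, tol=1e-8, maxiter=100):
    """Successive parabolic interpolation (no safeguarding)."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(maxiter):
        # Vertex of the parabola through (x0,f0), (x1,f1), (x2,f2)
        num = (x1 - x0)**2 * (f1 - f2) - (x1 - x2)**2 * (f1 - f0)
        den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
        x_new = x1 - 0.5 * num / den
        if abs(x_new - x2) < tol:
            return x_new
        # Discard the oldest point, keeping the three most recent ones
        x0, f0, x1, f1 = x1, f1, x2, f2
        x2, f2 = x_new, f(x_new)
    return x2

# Example: the minimum of f(x) = exp(x) - 2*x is at x = ln 2 ~= 0.693
print(parabolic_interpolation(lambda x: math.exp(x) - 2*x, 0.0, 0.5, 1.0))
```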


Safeguarded 1D Optimization

• Safeguarding can be done by bracketing via golden section search:

• Backtracking and step-size control:
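A minimal sketch of backtracking (Armijo) step-size control as it is commonly used to safeguard a descent step (parameter values are conventional defaults, not from the slides):

```python
import numpy as np

def backtracking_line_search(f, g, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step length until the sufficient-decrease (Armijo) condition holds.

    x is the current iterate, p a descent direction, g the gradient of f at x.
    """
    fx = f(x)
    slope = np.dot(g, p)               # directional derivative along p (negative for descent)
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho                   # backtrack: halve the step until decrease is sufficient
    return alpha
```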


General Multidimensional Optimization

• Direct search methods using a simplex (Nelder-Mead):

• Steepest descent: find the minimizer along the direction of the negative gradient:
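A minimal sketch of steepest descent with a backtracking line search (illustrative only; not the course demo code):

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, maxiter=1000):
    """Steepest descent with a simple backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha, rho, c = 1.0, 0.5, 1e-4
        while f(x - alpha * g) > f(x) - c * alpha * np.dot(g, g):
            alpha *= rho               # backtrack until sufficient decrease
        x = x - alpha * g              # step along the negative gradient
    return x

# Example: quadratic f(x) = 1/2 x^T A x with A = diag(1, 10), minimum at the origin
A = np.diag([1.0, 10.0])
print(steepest_descent(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, [5.0, 5.0]))
```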

Demo: Nelder-Mead Method


Convergence of Steepest Descent

• Steepest descent converges linearly, with a constant that can be arbitrarily close to 1:

• Given the quadratic optimization problem f(x) = ½xᵀAx + cᵀx, where A is symmetric positive definite, the error e_k = x_k − x∗ satisfies:
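For reference, the standard bound for steepest descent with exact line search on this quadratic, measured in the A-norm ‖y‖_A = sqrt(yᵀAy), is

  ‖e_{k+1}‖_A ≤ ((κ(A) − 1)/(κ(A) + 1)) ‖e_k‖_A,   where κ(A) = λ_max(A)/λ_min(A),

so the convergence constant approaches 1 as A becomes ill-conditioned.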

Demo: Steepest Descent


Gradient Methods with Extrapolation

• We can improve the constant in the linear rate of convergence of steepest descent by leveraging extrapolation methods, which consider two previous iterates (they maintain momentum in the direction x_k − x_{k−1}):

• The heavy ball method, which uses constants α_k = α and β_k = β, achieves better convergence than steepest descent:
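A minimal sketch of the heavy ball iteration x_{k+1} = x_k − α∇f(x_k) + β(x_k − x_{k−1}); the constants α and β must be tuned to the problem, and the values in the example below are placeholders chosen for illustration:

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, maxiter=500, tol=1e-8):
    """Gradient descent with a constant momentum term (heavy ball method)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g + beta * (x - x_prev)   # gradient step plus momentum
        x_prev, x = x, x_new
    return x

# Example: quadratic with gradient A @ x, A = diag(1, 10)
A = np.diag([1.0, 10.0])
print(heavy_ball(lambda x: A @ x, [5.0, 5.0], alpha=0.09, beta=0.5))
```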


Conjugate Gradient Method

• The conjugate gradient method is capable of making the optimal choice of α_k and β_k at each iteration of an extrapolation method:

• The parallel tangents implementation of the method proceeds as follows:

Demo: Conjugate Gradient Method
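For the quadratic case f(x) = ½xᵀAx + cᵀx with A symmetric positive definite, minimizing f is equivalent to solving Ax = −c, and the conjugate gradient method can be sketched as follows (illustrative implementation):

```python
import numpy as np

def conjugate_gradient(A, c, x0=None, tol=1e-10, maxiter=None):
    """Minimize (1/2) x^T A x + c^T x for SPD A, i.e. solve A x = -c."""
    n = len(c)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = -c - A @ x                    # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # exact line search along direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new direction, A-conjugate to the previous ones
        rs = rs_new
    return x

# Example: compare against a direct solve of A x = -c
A = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(conjugate_gradient(A, c), np.linalg.solve(A, -c))
```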


Krylov Optimization

• Conjugate gradient finds the minimizer of f(x) = ½xᵀAx + cᵀx within the Krylov subspace of A:

Demo: Conjugate Gradient Parallel Tangents as Krylov Subspace Method


Newton’s Method

• Newton’s method in n dimensions is given by finding the minimum of the n-dimensional quadratic approximation:

Demo: Newton’s Method in n dimensions
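A minimal sketch, assuming the gradient and Hessian of f can be evaluated (names are illustrative):

```python
import numpy as np

def newton_nd(grad, hess, x0, tol=1e-10, maxiter=50):
    """Newton's method: minimize the local quadratic model at each iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(hess(x), -g)    # Newton step: solve H s = -g
        x = x + s
    return x

# Example: f(x) = exp(x1) + x1**2 + x2**2
grad = lambda x: np.array([np.exp(x[0]) + 2*x[0], 2*x[1]])
hess = lambda x: np.array([[np.exp(x[0]) + 2.0, 0.0], [0.0, 2.0]])
print(newton_nd(grad, hess, [1.0, 1.0]))
```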


Quasi-Newton Methods

• Quasi-Newton methods compute approximations to the Hessian at each step:

• The BFGS method is a secant update method, similar to Broyden’s method:
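The BFGS update of the approximate Hessian B_k uses s_k = x_{k+1} − x_k and y_k = ∇f(x_{k+1}) − ∇f(x_k). A sketch of just the update (the surrounding quasi-Newton loop, which solves B_k s_k = −∇f(x_k) for each step, is omitted):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS secant update of the approximate Hessian B.

    s = x_{k+1} - x_k,  y = grad f(x_{k+1}) - grad f(x_k).
    """
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
```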


Nonlinear Least Squares

• An important special case of multidimensional optimization is nonlinear least squares: the problem of fitting a nonlinear function f_x(t), parameterized by x, so that f_x(t_i) ≈ y_i:

• We can cast nonlinear least squares as an optimization problem and solve it by Newton’s method:


Gauss-Newton Method

• The Hessian for nonlinear least squares problems has the form:

• The Gauss-Newton method is Newton iteration with an approximate Hessian:

• The Levenberg-Marquardt method incorporates Tikhonov regularization into the linear least squares problems within the Gauss-Newton method.
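A minimal sketch of the Gauss-Newton iteration, assuming a residual function r(x) with components f_x(t_i) − y_i and its Jacobian J(x) are available (names are illustrative):

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, maxiter=100):
    """Gauss-Newton for min (1/2)||r(x)||^2, with J(x) the Jacobian of r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        rx, Jx = r(x), J(x)
        # Each step solves the linear least squares problem min_s ||Jx s + rx||
        # (Levenberg-Marquardt would instead solve a damped/regularized version of it)
        s, *_ = np.linalg.lstsq(Jx, -rx, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x
```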


Constrained Optimization Problems

• We now return to the general case of constrained optimization problems:

• Generally, we will seek to reduce constrained optimization problems to a series of unconstrained optimization problems:

  • sequential quadratic programming:
  • penalty-based methods:
  • active set methods:


Sequential Quadratic Programming

• Sequential quadratic programming (SQP) corresponds to using Newton’s method to solve the equality-constrained optimality conditions, by finding critical points of the Lagrangian function L(x, λ) = f(x) + λᵀg(x).

• At each iteration, SQP computes

  (x_{k+1}, λ_{k+1}) = (x_k, λ_k) + (s_k, δ_k)

  by solving
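the Newton (KKT) system for the Lagrangian optimality conditions, which in its standard form, with B(x, λ) = ∇²_{xx} L(x, λ) denoting the Hessian of the Lagrangian with respect to x, reads

  [ B(x_k, λ_k)   J_gᵀ(x_k) ] [ s_k ]     [ ∇f(x_k) + J_gᵀ(x_k) λ_k ]
  [ J_g(x_k)      0         ] [ δ_k ]  = −[ g(x_k)                  ]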

Demo: Sequential Quadratic Programming


Inequality Constrained Optimality Conditions

• The Karush-Kuhn-Tucker (KKT) conditions hold for local minima of a problem with equality and inequality constraints; the key conditions are:

  • First, any minimum x∗ must be a feasible point, so g(x∗) = 0 and h(x∗) ≤ 0.
  • We say the ith inequality constraint is active at a minimum x∗ if h_i(x∗) = 0.
  • The collection of equality constraints and active inequality constraints, q∗, satisfies q∗(x∗) = 0.
  • The negative gradient of the objective function at the minimum must be in the row span of the Jacobian of this collection of constraints:

    −∇f(x∗) = J_{q∗}ᵀ(x∗) λ∗,  where λ∗ are the Lagrange multipliers of the constraints in q∗.

• To use SQP for an inequality-constrained optimization problem, consider at each iteration an active set of constraints:


Penalty Functions

• Alternatively, we can reduce constrained optimization problems to unconstrained ones by modifying the objective function. Penalty functions are effective for equality constraints g(x) = 0:

• The augmented Lagrangian function provides a more numerically robust approach:
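A sketch of the quadratic penalty approach, assuming SciPy’s general-purpose scipy.optimize.minimize is available to solve each unconstrained subproblem (the helper name and parameter choices are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def penalty_method(f, g, x0, rho0=1.0, growth=10.0, outer_iters=8):
    """Quadratic penalty method for min f(x) subject to g(x) = 0 (sketch)."""
    x = np.asarray(x0, dtype=float)
    rho = rho0
    for _ in range(outer_iters):
        def phi(z, rho=rho):
            # Penalized objective: constraint violations are charged quadratically
            return f(z) + 0.5 * rho * np.sum(np.asarray(g(z)) ** 2)
        x = minimize(phi, x).x        # unconstrained subproblem, warm-started at x
        rho *= growth                 # tighten the penalty between subproblems
    return x
```

As the penalty parameter grows, the subproblems become increasingly ill-conditioned, which is the practical motivation for the augmented Lagrangian variant mentioned above.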


Barrier Functions

• Barrier functions (interior point methods) provide an effective way of working with inequality constraints h(x) ≤ 0:
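A rough sketch of a logarithmic barrier method, again assuming SciPy’s scipy.optimize.minimize for the unconstrained subproblems; the starting point must be strictly feasible, and a production implementation would need a solver that is guaranteed to stay in the interior:

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, h, x0, mu0=1.0, shrink=0.1, outer_iters=8):
    """Logarithmic barrier method for min f(x) subject to h(x) <= 0 (sketch).

    x0 must satisfy h(x0) < 0 componentwise (strictly interior).
    """
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(outer_iters):
        def phi(z, mu=mu):
            hz = np.asarray(h(z))
            if np.any(hz >= 0):
                return np.inf         # crude safeguard: reject points outside the interior
            return f(z) - mu * np.sum(np.log(-hz))   # barrier term blows up at the boundary
        x = minimize(phi, x).x        # unconstrained subproblem, warm-started at x
        mu *= shrink                  # reduce the barrier weight to approach the boundary
    return x
```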