Scientific Computing: An Introductory Survey
Chapter 1 – Scientific Computing

Prof. Michael T. Heath
Department of Computer Science, University of Illinois at Urbana-Champaign

Copyright © 2002. Reproduction permitted for noncommercial, educational use only.
Outline
1 Scientific Computing
2 Approximations
3 Computer Arithmetic
Scientific Computing
What is scientific computing?
Design and analysis of algorithms for numerically solving mathematical problems in science and engineering
Traditionally called numerical analysis
Distinguishing features of scientific computing
Deals with continuous quantities
Considers effects of approximations
Why scientific computing?
Simulation of natural phenomena
Virtual prototyping of engineering designs
Well-Posed Problems
Problem is well-posed if solution exists, is unique, and depends continuously on problem data
Otherwise, problem is ill-posed
Even if problem is well posed, solution may still be sensitive to input data
Computational algorithm should not make sensitivity worse
General Strategy
Replace difficult problem by easier one having same or closely related solution

infinite → finite
differential → algebraic
nonlinear → linear
complicated → simple

Solution obtained may only approximate that of original problem
Sources of Approximation
Before computation: modeling, empirical measurements, previous computations
During computation: truncation or discretization, rounding
Accuracy of final result reflects all these
Uncertainty in input may be amplified by problem
Perturbations during computation may be amplified by algorithm
Example: Approximations
Computing surface area of Earth using formula A = 4πr^2 involves several approximations
Earth is modeled as sphere, idealizing its true shape
Value for radius is based on empirical measurements andprevious computations
Value for π requires truncating infinite process
Values for input data and results of arithmetic operations are rounded in computer
Absolute Error and Relative Error
Absolute error: approximate value − true value
Relative error: (absolute error) / (true value)
Equivalently, approx value = (true value) × (1 + rel error)
True value usually unknown, so we estimate or bound error rather than compute it exactly
Relative error often taken relative to approximate value, rather than (unknown) true value
Data Error and Computational Error
Typical problem: compute value of function f: R → R for given argument
x = true value of input
f(x) = desired result
x̂ = approximate (inexact) input
f̂ = approximate function actually computed

Total error: f̂(x̂) − f(x) = [f̂(x̂) − f(x̂)] + [f(x̂) − f(x)] = computational error + propagated data error
Algorithm has no effect on propagated data error
Truncation Error and Rounding Error
Truncation error: difference between true result (for actual input) and result produced by given algorithm using exact arithmetic
Due to approximations such as truncating infinite series or terminating iterative sequence before convergence
Rounding error: difference between result produced by given algorithm using exact arithmetic and result produced by same algorithm using limited precision arithmetic
Due to inexact representation of real numbers and arithmetic operations upon them
Computational error is sum of truncation error and rounding error, but one of these usually dominates
< interactive example >
Example: Finite Difference Approximation
Error in finite difference approximation
f′(x) ≈ [f(x + h) − f(x)] / h
exhibits tradeoff between rounding error and truncation error
Truncation error bounded by Mh/2, where M bounds |f″(t)| for t near x
Rounding error bounded by 2ε/h, where error in function values bounded by ε
Total error minimized when h ≈ 2√(ε/M)
Error increases for smaller h because of rounding error and increases for larger h because of truncation error
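A minimal Python sketch of this tradeoff, using f(x) = sin(x) at x = 1 as an illustrative test function (not from the slides):

import math

x, exact = 1.0, math.cos(1.0)          # true derivative of sin at x = 1
for k in range(1, 17):
    h = 10.0 ** (-k)
    approx = (math.sin(x + h) - math.sin(x)) / h   # forward difference
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.2e}")
# error shrinks until h is near sqrt(eps) ~ 1e-8, then grows again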
Example: Finite Difference Approximation
!"!!#
!"!!$
!"!!%
!"!!"
!"!&
!"!#
!"!$
!"!%
!""
!"!!&
!"!!#
!"!!$
!"!!%
!"!!"
!"!&
!"!#
!"!$
!"!%
!""
!"%
'()*+',-)
)../.
(.0123(,/1+)../. ./014,15+)../.
(/(36+)../.
Forward and Backward Error
Suppose we want to compute y = f(x), where f: R → R, but obtain approximate value ŷ
Forward error: Δy = ŷ − y
Backward error: Δx = x̂ − x, where f(x̂) = ŷ
Example: Forward and Backward Error
As approximation to y = √2, ŷ = 1.4 has absolute forward error
|Δy| = |ŷ − y| = |1.4 − 1.41421...| ≈ 0.0142
or relative forward error of about 1 percent
Since √1.96 = 1.4, absolute backward error is
|Δx| = |x̂ − x| = |1.96 − 2| = 0.04
or relative backward error of 2 percent
Backward Error Analysis
Idea: approximate solution is exact solution to modified problem
How much must original problem change to give result actually obtained?
How much data error in input would explain all error in computed result?
Approximate solution is good if it is exact solution to nearby problem
Backward error is often easier to estimate than forward error
Example: Backward Error Analysis
Approximating cosine function f(x) = cos(x) by truncating Taylor series after two terms gives
ŷ = f̂(x) = 1 − x^2/2
Forward error is given by
Δy = ŷ − y = f̂(x) − f(x) = 1 − x^2/2 − cos(x)
To determine backward error, need value x̂ such that f(x̂) = f̂(x)
For cosine function, x̂ = arccos(f̂(x)) = arccos(ŷ)
Example, continued
For x = 1,
y = f(1) = cos(1) ≈ 0.5403
ŷ = f̂(1) = 1 − 1^2/2 = 0.5
x̂ = arccos(ŷ) = arccos(0.5) ≈ 1.0472
Forward error: Δy = ŷ − y ≈ 0.5 − 0.5403 = −0.0403
Backward error: Δx = x̂ − x ≈ 1.0472 − 1 = 0.0472
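A quick Python check of these numbers (a sketch, not part of the original slides):

import math

y     = math.cos(1.0)           # true value
y_hat = 1.0 - 1.0**2 / 2.0      # truncated Taylor series value
print(y_hat - y)                # forward error  ~ -0.0403
print(math.acos(y_hat) - 1.0)   # backward error ~  0.0472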
Sensitivity and Conditioning
Problem is insensitive, or well-conditioned, if relative change in input causes similar relative change in solution
Problem is sensitive, or ill-conditioned, if relative change in solution can be much larger than that in input data
Condition number:
cond = |relative change in solution| / |relative change in input data|
     = |[f(x̂) − f(x)]/f(x)| / |(x̂ − x)/x| = |Δy/y| / |Δx/x|
Problem is sensitive, or ill-conditioned, if cond ≫ 1
Condition Number
Condition number is amplification factor relating relative forward error to relative backward error:
|relative forward error| = cond × |relative backward error|
Condition number usually is not known exactly and may vary with input, so rough estimate or upper bound is used for cond, yielding
|relative forward error| ≲ cond × |relative backward error|
Example: Evaluating Function
Evaluating function f for approximate input x̂ = x + Δx instead of true input x gives
Absolute forward error: f(x + Δx) − f(x) ≈ f′(x)Δx
Relative forward error: [f(x + Δx) − f(x)] / f(x) ≈ f′(x)Δx / f(x)
Condition number: cond ≈ |f′(x)Δx/f(x)| / |Δx/x| = |x f′(x)/f(x)|
Relative error in function value can be much larger or smaller than that in input, depending on particular f and x
Michael T. Heath Scientific Computing 20 / 46
Scientific ComputingApproximations
Computer Arithmetic
Sources of ApproximationError AnalysisSensitivity and Conditioning
Example: Sensitivity
Tangent function is sensitive for arguments near π/2
tan(1.57079) ≈ 1.58058 × 10^5
tan(1.57078) ≈ 6.12490 × 10^4
Relative change in output is quarter million times greater than relative change in input
For x = 1.57079, cond ≈ 2.48275 × 10^5
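A one-line Python check of this condition number, using cond ≈ |x f′(x)/f(x)| with f = tan and f′ = sec^2 (a sketch, not from the slides):

import math

x = 1.57079
cond = abs(x * (1.0 + math.tan(x)**2) / math.tan(x))   # |x f'(x)/f(x)|
print(cond)   # ~2.48e5, matching the estimate above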
Stability
Algorithm is stable if result produced is relatively insensitive to perturbations during computation
Stability of algorithms is analogous to conditioning of problems
From point of view of backward error analysis, algorithm is stable if result produced is exact solution to nearby problem
For stable algorithm, effect of computational error is no worse than effect of small data error in input
Accuracy
Accuracy: closeness of computed solution to true solution of problem
Stability alone does not guarantee accurate results
Accuracy depends on conditioning of problem as well as stability of algorithm
Inaccuracy can result from applying stable algorithm to ill-conditioned problem or unstable algorithm to well-conditioned problem
Applying stable algorithm to well-conditioned problem yields accurate solution
Floating-Point Numbers
Floating-point number system is characterized by four integers:
β    base or radix
p    precision
[L, U]    exponent range

Number x is represented as
x = ±(d_0 + d_1/β + d_2/β^2 + ··· + d_{p−1}/β^{p−1}) β^E
where 0 ≤ d_i ≤ β − 1, i = 0, ..., p − 1, and L ≤ E ≤ U
Floating-Point Numbers, continued
Portions of floating-point number designated as follows:
exponent: E
mantissa: d_0 d_1 ··· d_{p−1}
fraction: d_1 d_2 ··· d_{p−1}
Sign, exponent, and mantissa are stored in separate fixed-width fields of each floating-point word
Typical Floating-Point Systems
Parameters for typical floating-point systems:

system          β    p    L       U
IEEE SP         2    24   −126    127
IEEE DP         2    53   −1022   1023
Cray            2    48   −16383  16384
HP calculator   10   12   −499    499
IBM mainframe   16   6    −64     63

Most modern computers use binary (β = 2) arithmetic
IEEE floating-point systems are now almost universal in digital computers
Normalization
Floating-point system is normalized if leading digit d_0 is always nonzero unless number represented is zero
In normalized systems, mantissa m of nonzero floating-point number always satisfies 1 ≤ m < β
Reasons for normalization:
representation of each number is unique
no digits wasted on leading zeros
leading bit need not be stored (in binary system)
Properties of Floating-Point Systems
Floating-point number system is finite and discrete
Total number of normalized floating-point numbers is
2(β − 1)β^{p−1}(U − L + 1) + 1
Smallest positive normalized number: UFL = β^L
Largest floating-point number: OFL = β^{U+1}(1 − β^{−p})
Floating-point numbers equally spaced only between successive powers of β
Not all real numbers exactly representable; those that are are called machine numbers
Example: Floating-Point System
Tick marks indicate all 25 numbers in floating-point system having β = 2, p = 3, L = −1, and U = 1
OFL = (1.11)_2 × 2^1 = (3.5)_10
UFL = (1.00)_2 × 2^−1 = (0.5)_10
At sufficiently high magnification, all normalized floating-point systems look grainy and unequally spaced
< interactive example >
Rounding Rules
If real number x is not exactly representable, then it is approximated by “nearby” floating-point number fl(x)
This process is called rounding, and error introduced is called rounding error
Two commonly used rounding rules:
chop: truncate base-β expansion of x after (p − 1)st digit; also called round toward zero
round to nearest: fl(x) is nearest floating-point number to x, using floating-point number whose last stored digit is even in case of tie; also called round to even
Round to nearest is most accurate, and is default rounding rule in IEEE systems
< interactive example >
Machine Precision
Accuracy of floating-point system characterized by unit roundoff (or machine precision or machine epsilon), denoted by ε_mach
With rounding by chopping, ε_mach = β^{1−p}
With rounding to nearest, ε_mach = (1/2) β^{1−p}
Alternative definition is smallest number ε such that fl(1 + ε) > 1
Maximum relative error in representing real number x within range of floating-point system is given by
|fl(x) − x| / |x| ≤ ε_mach
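A minimal Python sketch probing precision near 1 by repeated halving; note that numpy reports the spacing between 1 and the next double, 2^−52, which is twice the unit roundoff 2^−53 quoted above for round-to-nearest:

import numpy as np

eps = 1.0
while 1.0 + eps / 2.0 > 1.0:   # halve until 1 + eps/2 rounds back to 1
    eps /= 2.0
print(eps)                     # 2.220446049250313e-16 == 2**-52
print(np.finfo(float).eps)     # same value reported by numpy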
Machine Precision, continued
For toy system illustrated earlier
ε_mach = (0.01)_2 = (0.25)_10 with rounding by chopping
ε_mach = (0.001)_2 = (0.125)_10 with rounding to nearest
For IEEE floating-point systems
ε_mach = 2^−24 ≈ 10^−7 in single precision
ε_mach = 2^−53 ≈ 10^−16 in double precision
So IEEE single and double precision systems have about 7 and 16 decimal digits of precision, respectively
Machine Precision, continued
Though both are “small,” unit roundoff ε_mach should not be confused with underflow level UFL
Unit roundoff ε_mach is determined by number of digits in mantissa of floating-point system, whereas underflow level UFL is determined by number of digits in exponent field
In all practical floating-point systems,
0 < UFL < ε_mach < OFL
Subnormals and Gradual Underflow
Normalization causes gap around zero in floating-point system
If leading digits are allowed to be zero, but only when exponent is at its minimum value, then gap is “filled in” by additional subnormal or denormalized floating-point numbers
Subnormals extend range of magnitudes representable, but have less precision than normalized numbers, and unit roundoff is no smaller
Augmented system exhibits gradual underflow
Exceptional Values
IEEE floating-point standard provides special values to indicate two exceptional situations
Inf, which stands for “infinity,” results from dividing a finite number by zero, such as 1/0
NaN, which stands for “not a number,” results from undefined or indeterminate operations such as 0/0, 0 ∗ Inf, or Inf/Inf
Inf and NaN are implemented in IEEE arithmetic through special reserved values of exponent field
Floating-Point Arithmetic
Addition or subtraction: Shifting of mantissa to make exponents match may cause loss of some digits of smaller number, possibly all of them
Multiplication: Product of two p-digit mantissas contains up to 2p digits, so result may not be representable
Division: Quotient of two p-digit mantissas may contain more than p digits, such as nonterminating binary expansion of 1/10
Result of floating-point arithmetic operation may differ from result of corresponding real arithmetic operation on same operands
Example: Floating-Point Arithmetic
Assume β = 10, p = 6
Let x = 1.92403 × 10^2, y = 6.35782 × 10^−1
Floating-point addition gives x + y = 1.93039 × 10^2, assuming rounding to nearest
Last two digits of y do not affect result, and with even smaller exponent, y could have had no effect on result
Floating-point multiplication gives x ∗ y = 1.22326 × 10^2, which discards half of digits of true product
Floating-Point Arithmetic, continued
Real result may also fail to be representable because its exponent is beyond available range
Overflow is usually more serious than underflow because there is no good approximation to arbitrarily large magnitudes in floating-point system, whereas zero is often reasonable approximation for arbitrarily small magnitudes
On many computer systems overflow is fatal, but anunderflow may be silently set to zero
Example: Summing Series
Infinite series
Σ_{n=1}^{∞} 1/n
has finite sum in floating-point arithmetic even though real series is divergent
Possible explanations:
Partial sum eventually overflows
1/n eventually underflows
Partial sum ceases to change once 1/n becomes negligible relative to partial sum, i.e., once
1/n < ε_mach Σ_{k=1}^{n−1} 1/k
< interactive example >
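A minimal Python sketch of the third explanation, summing in single precision until the partial sum stops changing (details are illustrative; the loop runs for roughly two million terms):

import numpy as np

s, n = np.float32(0.0), 0
while True:
    n += 1
    s_new = s + np.float32(1.0) / np.float32(n)
    if s_new == s:      # 1/n is now negligible relative to the partial sum
        break
    s = s_new
print(n, s)             # a finite "sum" for a divergent series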
Floating-Point Arithmetic, continued
Ideally, x flop y = fl(x op y), i.e., floating-point arithmetic operations produce correctly rounded results
Computers satisfying IEEE floating-point standard achieve this ideal as long as x op y is within range of floating-point system
But some familiar laws of real arithmetic are not necessarily valid in floating-point system
Floating-point addition and multiplication are commutative but not associative
Example: if ε is positive floating-point number slightly smaller than ε_mach, then (1 + ε) + ε = 1, but 1 + (ε + ε) > 1
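A quick Python illustration of this non-associativity in IEEE double precision (the particular ε is an illustrative choice just below the unit roundoff 2^−53):

eps = 0.6 * 2.0**-53
print((1.0 + eps) + eps == 1.0)    # True:  each addition rounds back to 1
print(1.0 + (eps + eps) == 1.0)    # False: eps + eps is large enough to register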
Cancellation
Subtraction between two p-digit numbers having same sign and similar magnitudes yields result with fewer than p digits, so it is usually exactly representable
Reason is that leading digits of two numbers cancel (i.e., their difference is zero)
For example,
1.92403 × 10^2 − 1.92275 × 10^2 = 1.28000 × 10^−1
which is correct, and exactly representable, but has only three significant digits
Cancellation, continued
Despite exactness of result, cancellation often implies serious loss of information
Operands are often uncertain due to rounding or other previous errors, so relative uncertainty in difference may be large
Example: if ε is positive floating-point number slightly smaller than ε_mach, then (1 + ε) − (1 − ε) = 1 − 1 = 0 in floating-point arithmetic, which is correct for actual operands of final subtraction, but true result of overall computation, 2ε, has been completely lost
Subtraction itself is not at fault: it merely signals loss of information that had already occurred
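A minimal Python sketch of this loss of information (ε is an illustrative choice well below the double-precision unit roundoff):

eps = 2.0**-60
print((1.0 + eps) - (1.0 - eps))   # 0.0: the true result 2*eps is completely lost
print(2.0 * eps)                   # what exact arithmetic would have given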
Cancellation, continued
Digits lost to cancellation are most significant, leading digits, whereas digits lost in rounding are least significant, trailing digits
Because of this effect, it is generally bad idea to compute any small quantity as difference of large quantities, since rounding error is likely to dominate result
For example, summing alternating series, such as
e^x = 1 + x + x^2/2! + x^3/3! + ···
for x < 0, may give disastrous results due to catastrophic cancellation
Example: Cancellation
Total energy of helium atom is sum of kinetic and potential energies, which are computed separately and have opposite signs, so suffer cancellation

Year    Kinetic    Potential    Total
1971    13.0       −14.0        −1.0
1977    12.76      −14.02       −1.26
1980    12.22      −14.35       −2.13
1985    12.28      −14.65       −2.37
1988    12.40      −14.84       −2.44

Although computed values for kinetic and potential energies changed by only 6% or less, resulting estimate for total energy changed by 144%
Example: Quadratic Formula
Two solutions of quadratic equation ax^2 + bx + c = 0 are given by
x = (−b ± √(b^2 − 4ac)) / (2a)
Naive use of formula can suffer overflow, or underflow, or severe cancellation
Rescaling coefficients avoids overflow or harmful underflow
Cancellation between −b and square root can be avoided by computing one root using alternative formula
x = 2c / (−b ∓ √(b^2 − 4ac))
Cancellation inside square root cannot be easily avoided without using higher precision
< interactive example >
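A minimal Python sketch of a cancellation-aware quadratic solver along these lines; it assumes real roots and a ≠ 0, and the test coefficients are illustrative:

import math

def quadroots(a, b, c):
    d = math.sqrt(b*b - 4.0*a*c)
    q = -(b + math.copysign(d, b)) / 2.0   # avoid subtracting nearly equal numbers
    return q / a, c / q                    # standard root and alternative-formula root

print(quadroots(1.0, -1e8, 1.0))   # roots ~1e8 and ~1e-8, both computed accurately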
Example: Standard Deviation
Mean and standard deviation of sequence x_i, i = 1, ..., n, are given by
x̄ = (1/n) Σ_{i=1}^{n} x_i    and    σ = [ (1/(n−1)) Σ_{i=1}^{n} (x_i − x̄)^2 ]^{1/2}
Mathematically equivalent formula
σ = [ (1/(n−1)) ( Σ_{i=1}^{n} x_i^2 − n x̄^2 ) ]^{1/2}
avoids making two passes through data
Single cancellation at end of one-pass formula is more damaging numerically than all cancellations in two-pass formula combined
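A minimal numpy sketch comparing the two formulas on data with large mean and small spread (sample values are illustrative; the one-pass result typically loses several digits to cancellation):

import numpy as np

x = 1.0e7 + np.random.default_rng(0).normal(0.0, 1.0, 1000)
n, mean = x.size, x.mean()
two_pass = np.sqrt(((x - mean)**2).sum() / (n - 1))
one_pass = np.sqrt(((x**2).sum() - n*mean**2) / (n - 1))
print(two_pass, one_pass)   # both near 1, but the one-pass value is less accurate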
Scientific Computing: An Introductory Survey
Chapter 5 – Nonlinear Equations

Prof. Michael T. Heath
Department of Computer Science, University of Illinois at Urbana-Champaign

Copyright © 2002. Reproduction permitted for noncommercial, educational use only.
Outline
1 Nonlinear Equations
2 Numerical Methods in One Dimension
3 Methods for Systems of Nonlinear Equations
Nonlinear Equations
Given function f , we seek value x for which
f(x) = 0
Solution x is root of equation, or zero of function f
So problem is known as root finding or zero finding
Nonlinear Equations
Two important cases
Single nonlinear equation in one unknown, where
f : R → R
Solution is scalar x for which f(x) = 0
System of n coupled nonlinear equations in n unknowns, where
f: R^n → R^n
Solution is vector x for which all components of f are zero simultaneously, f(x) = 0
Examples: Nonlinear Equations
Example of nonlinear equation in one dimension
x^2 − 4 sin(x) = 0
for which x = 1.9 is one approximate solution
Example of system of nonlinear equations in two dimensions
x_1^2 − x_2 + 0.25 = 0
−x_1 + x_2^2 + 0.25 = 0
for which x = [0.5, 0.5]^T is solution vector
Existence and Uniqueness
Existence and uniqueness of solutions are more complicated for nonlinear equations than for linear equations
For function f: R → R, bracket is interval [a, b] for which sign of f differs at endpoints
If f is continuous and sign(f(a)) ≠ sign(f(b)), then Intermediate Value Theorem implies there is x* ∈ [a, b] such that f(x*) = 0
There is no simple analog for n dimensions
Examples: One Dimension
Nonlinear equations can have any number of solutions
exp(x) + 1 = 0 has no solution
exp(−x)− x = 0 has one solution
x^2 − 4 sin(x) = 0 has two solutions
x^3 − 6x^2 + 11x − 6 = 0 has three solutions
sin(x) = 0 has infinitely many solutions
Example: Systems in Two Dimensions

x_1^2 − x_2 + γ = 0
−x_1 + x_2^2 + γ = 0
Multiplicity
If f(x*) = f′(x*) = f″(x*) = ··· = f^(m−1)(x*) = 0 but f^(m)(x*) ≠ 0 (i.e., mth derivative is lowest derivative of f that does not vanish at x*), then root x* has multiplicity m
If m = 1 (f(x*) = 0 and f′(x*) ≠ 0), then x* is simple root
Sensitivity and Conditioning
Conditioning of root finding problem is opposite to that for evaluating function
Absolute condition number of root finding problem for root x* of f: R → R is 1/|f′(x*)|
Root is ill-conditioned if tangent line is nearly horizontal
In particular, multiple root (m > 1) is ill-conditioned
Absolute condition number of root finding problem for root x* of f: R^n → R^n is ‖J_f^{−1}(x*)‖, where J_f is Jacobian matrix of f,
{J_f(x)}_{ij} = ∂f_i(x)/∂x_j
Root is ill-conditioned if Jacobian matrix is nearly singular
Sensitivity and Conditioning
What do we mean by approximate solution x̂ to nonlinear system:
‖f(x̂)‖ ≈ 0   or   ‖x̂ − x*‖ ≈ 0 ?
First corresponds to “small residual,” second measures closeness to (usually unknown) true solution x*
Solution criteria are not necessarily “small” simultaneously
Small residual implies accurate solution only if problem is well-conditioned
Convergence Rate
For general iterative methods, define error at iteration k by
e_k = x_k − x*
where x_k is approximate solution and x* is true solution
For methods that maintain interval known to contain solution, rather than specific approximate value for solution, take error to be length of interval containing solution
Sequence converges with rate r if
lim_{k→∞} ‖e_{k+1}‖ / ‖e_k‖^r = C
for some finite nonzero constant C
Convergence Rate, continued
Some particular cases of interest
r = 1: linear (C < 1)
r > 1: superlinear
r = 2: quadratic
Convergence rate    Digits gained per iteration
linear              constant
superlinear         increasing
quadratic           double
Interval Bisection Method
Bisection method begins with initial bracket and repeatedly halves its length until solution has been isolated as accurately as desired

while ((b − a) > tol) do
    m = a + (b − a)/2
    if sign(f(a)) = sign(f(m)) then
        a = m
    else
        b = m
    end
end
< interactive example >
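A runnable Python version of the bisection pseudocode above, applied to the example on the next slide (function, interval, and tolerance are illustrative choices):

import math

def bisect(f, a, b, tol=1e-6):
    fa = f(a)
    while (b - a) > tol:
        m = a + (b - a) / 2.0
        fm = f(m)
        if (fa > 0) == (fm > 0):   # same sign: root lies in [m, b]
            a, fa = m, fm
        else:
            b = m
    return a + (b - a) / 2.0

print(bisect(lambda x: x**2 - 4.0*math.sin(x), 1.0, 3.0))   # ~1.933754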
Example: Bisection Method
f(x) = x^2 − 4 sin(x) = 0

a           f(a)         b           f(b)
1.000000    −2.365884    3.000000    8.435520
1.000000    −2.365884    2.000000    0.362810
1.500000    −1.739980    2.000000    0.362810
1.750000    −0.873444    2.000000    0.362810
1.875000    −0.300718    2.000000    0.362810
1.875000    −0.300718    1.937500    0.019849
1.906250    −0.143255    1.937500    0.019849
1.921875    −0.062406    1.937500    0.019849
1.929688    −0.021454    1.937500    0.019849
1.933594    −0.000846    1.937500    0.019849
1.933594    −0.000846    1.935547    0.009491
1.933594    −0.000846    1.934570    0.004320
1.933594    −0.000846    1.934082    0.001736
1.933594    −0.000846    1.933838    0.000445
Bisection Method, continued

Bisection method makes no use of magnitudes of function values, only their signs
Bisection is certain to converge, but does so slowly
At each iteration, length of interval containing solution is reduced by half, so convergence rate is linear, with r = 1 and C = 0.5
One bit of accuracy is gained in approximate solution for each iteration of bisection
Given starting interval [a, b], length of interval after k iterations is (b − a)/2^k, so achieving error tolerance of tol requires ⌈log_2((b − a)/tol)⌉ iterations, regardless of function f involved
Fixed-Point Problems
Fixed point of given function g : R → R is value x such that
x = g(x)
Many iterative methods for solving nonlinear equations use fixed-point iteration scheme of form
x_{k+1} = g(x_k)
where fixed points for g are solutions for f(x) = 0
Also called functional iteration, since function g is applied repeatedly to initial starting value x_0
For given equation f(x) = 0, there may be many equivalent fixed-point problems x = g(x) with different choices for g
Example: Fixed-Point Problems
If f(x) = x^2 − x − 2, then fixed points of each of functions
g(x) = x^2 − 2
g(x) = √(x + 2)
g(x) = 1 + 2/x
g(x) = (x^2 + 2)/(2x − 1)
are solutions to equation f(x) = 0
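A minimal Python sketch iterating one of these choices, g(x) = √(x + 2), whose fixed point x = 2 is a root of f (starting value and iteration count are illustrative):

import math

x = 1.0
for _ in range(20):
    x = math.sqrt(x + 2.0)   # x_{k+1} = g(x_k)
print(x)                     # converges to 2, since |g'(2)| = 0.25 < 1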
[Figure slides: Example: Fixed-Point Problems; Example: Fixed-Point Iteration]
Convergence of Fixed-Point Iteration
If x* = g(x*) and |g′(x*)| < 1, then there is interval containing x* such that iteration
x_{k+1} = g(x_k)
converges to x* if started within that interval
If |g′(x*)| > 1, then iterative scheme diverges
Asymptotic convergence rate of fixed-point iteration is usually linear, with constant C = |g′(x*)|
But if g′(x*) = 0, then convergence rate is at least quadratic
< interactive example >
Newton’s Method
Truncated Taylor series
f(x + h) ≈ f(x) + f′(x)h
is linear function of h approximating f near x
Replace nonlinear function f by this linear function, whose zero is h = −f(x)/f′(x)
Zeros of original function and linear approximation are not identical, so repeat process, giving Newton's method
x_{k+1} = x_k − f(x_k)/f′(x_k)
Newton’s Method, continued
Newton's method approximates nonlinear function f near x_k by tangent line at f(x_k)
Example: Newton's Method

Use Newton's method to find root of
f(x) = x^2 − 4 sin(x) = 0
Derivative is
f′(x) = 2x − 4 cos(x)
so iteration scheme is
x_{k+1} = x_k − (x_k^2 − 4 sin(x_k)) / (2x_k − 4 cos(x_k))
Taking x_0 = 3 as starting value, we obtain

x           f(x)        f′(x)       h
3.000000    8.435520    9.959970    −0.846942
2.153058    1.294772    6.505771    −0.199019
1.954039    0.108438    5.403795    −0.020067
1.933972    0.001152    5.288919    −0.000218
1.933754    0.000000    5.287670    0.000000
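A minimal Python sketch reproducing this iteration (tolerance and iteration cap are illustrative):

import math

def newton(f, fprime, x, tol=1e-10, maxit=50):
    for _ in range(maxit):
        h = -f(x) / fprime(x)   # Newton step
        x += h
        if abs(h) < tol:
            break
    return x

f  = lambda x: x**2 - 4.0*math.sin(x)
fp = lambda x: 2.0*x - 4.0*math.cos(x)
print(newton(f, fp, 3.0))   # ~1.933754, as in the table above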
Convergence of Newton’s Method
Newton's method transforms nonlinear equation f(x) = 0 into fixed-point problem x = g(x), where
g(x) = x − f(x)/f′(x)
and hence
g′(x) = f(x) f″(x) / (f′(x))^2
If x* is simple root (i.e., f(x*) = 0 and f′(x*) ≠ 0), then g′(x*) = 0
Convergence rate of Newton's method for simple root is therefore quadratic (r = 2)
But iterations must start close enough to root to converge
< interactive example >
Newton’s Method, continued
For multiple root, convergence rate of Newton's method is only linear, with constant C = 1 − (1/m), where m is multiplicity

k    f(x) = x^2 − 1    f(x) = x^2 − 2x + 1
0    2.0               2.0
1    1.25              1.5
2    1.025             1.25
3    1.0003            1.125
4    1.00000005        1.0625
5    1.0               1.03125
Secant Method
For each iteration, Newton's method requires evaluation of both function and its derivative, which may be inconvenient or expensive
In secant method, derivative is approximated by finite difference using two successive iterates, so iteration becomes
x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1}))
Convergence rate of secant method is normally superlinear, with r ≈ 1.618
Secant Method, continued
Secant method approximates nonlinear function f by secant line through previous two iterates
< interactive example >
Example: Secant Method
Use secant method to find root of
f(x) = x^2 − 4 sin(x) = 0
Taking x_0 = 1 and x_1 = 3 as starting guesses, we obtain

x           f(x)        h
1.000000    −2.365884
3.000000    8.435520    −1.561930
1.438070    −1.896774   0.286735
1.724805    −0.977706   0.305029
2.029833    0.534305    −0.107789
1.922044    −0.061523   0.011130
1.933174    −0.003064   0.000583
1.933757    0.000019    −0.000004
1.933754    0.000000    0.000000
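A minimal Python sketch of the secant iteration for the same example (tolerance and iteration cap are illustrative):

import math

def secant(f, x0, x1, tol=1e-10, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        h = -f1 * (x1 - x0) / (f1 - f0)   # finite-difference Newton step
        x0, f0 = x1, f1
        x1, f1 = x1 + h, f(x1 + h)
        if abs(h) < tol:
            break
    return x1

print(secant(lambda x: x**2 - 4.0*math.sin(x), 1.0, 3.0))   # ~1.933754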
Higher-Degree Interpolation
Secant method uses linear interpolation to approximate function whose zero is sought
Higher convergence rate can be obtained by using higher-degree polynomial interpolation
For example, quadratic interpolation (Muller's method) has superlinear convergence rate with r ≈ 1.839
Unfortunately, using higher degree polynomial also has disadvantages:
interpolating polynomial may not have real roots
roots may not be easy to compute
choice of root to use as next iterate may not be obvious
Inverse Interpolation
Good alternative is inverse interpolation, where x_k are interpolated as function of y_k = f(x_k) by polynomial p(y), so next approximate solution is p(0)
Most commonly used for root finding is inverse quadratic interpolation
Inverse Quadratic Interpolation
Given approximate solution values a, b, c, with function values f_a, f_b, f_c, next approximate solution found by fitting quadratic polynomial to a, b, c as function of f_a, f_b, f_c, then evaluating polynomial at 0
Based on nontrivial derivation using Lagrange interpolation, we compute
u = f_b/f_c,  v = f_b/f_a,  w = f_a/f_c
p = v(w(u − w)(c − b) − (1 − u)(b − a))
q = (w − 1)(u − 1)(v − 1)
then new approximate solution is b + p/q
Convergence rate is normally r ≈ 1.839
< interactive example >
Example: Inverse Quadratic Interpolation
Use inverse quadratic interpolation to find root of
f(x) = x^2 − 4 sin(x) = 0
Taking x = 1, 2, and 3 as starting values, we obtain

x           f(x)        h
1.000000    −2.365884
2.000000    0.362810
3.000000    8.435520
1.886318    −0.244343   −0.113682
1.939558    0.030786    0.053240
1.933742    −0.000060   −0.005815
1.933754    0.000000    0.000011
1.933754    0.000000    0.000000
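One step of these formulas in Python, reproducing the first new iterate in the table above (a sketch; the loop over further steps is omitted):

import math

f = lambda x: x**2 - 4.0*math.sin(x)
a, b, c = 1.0, 2.0, 3.0
fa, fb, fc = f(a), f(b), f(c)
u, v, w = fb/fc, fb/fa, fa/fc
p = v*(w*(u - w)*(c - b) - (1.0 - u)*(b - a))
q = (w - 1.0)*(u - 1.0)*(v - 1.0)
print(b + p/q)   # ~1.886318, the first new approximate solution above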
Linear Fractional Interpolation
Interpolation using rational fraction of form
φ(x) = (x − u) / (vx − w)
is especially useful for finding zeros of functions having horizontal or vertical asymptotes
φ has zero at x = u, vertical asymptote at x = w/v, and horizontal asymptote at y = 1/v
Given approximate solution values a, b, c, with function values f_a, f_b, f_c, next approximate solution is c + h, where
h = (a − c)(b − c)(f_a − f_b) f_c / [ (a − c)(f_c − f_b) f_a − (b − c)(f_c − f_a) f_b ]
Convergence rate is normally r ≈ 1.839, same as for quadratic interpolation (inverse or regular)
Example: Linear Fractional Interpolation
Use linear fractional interpolation to find root of
f(x) = x^2 − 4 sin(x) = 0
Taking x = 1, 2, and 3 as starting values, we obtain

x           f(x)        h
1.000000    −2.365884
2.000000    0.362810
3.000000    8.435520
1.906953    −0.139647   −1.093047
1.933351    −0.002131   0.026398
1.933756    0.000013    −0.000406
1.933754    0.000000    −0.000003
< interactive example >
Safeguarded Methods
Rapidly convergent methods for solving nonlinear equations may not converge unless started close to solution, but safe methods are slow
Hybrid methods combine features of both types of methods to achieve both speed and reliability
Use rapidly convergent method, but maintain bracket around solution
If next approximate solution given by fast method falls outside bracketing interval, perform one iteration of safe method, such as bisection
Safeguarded Methods, continued
Fast method can then be tried again on smaller interval with greater chance of success
Ultimately, convergence rate of fast method should prevail
Hybrid approach seldom does worse than safe method, and usually does much better
Popular combination is bisection and inverse quadratic interpolation, for which no derivatives are required
Zeros of Polynomials
For polynomial p(x) of degree n, one may want to find all n of its zeros, which may be complex even if coefficients are real
Several approaches are available:
Use root-finding method such as Newton's or Muller's method to find one root, deflate it out, and repeat
Form companion matrix of polynomial and use eigenvalue routine to compute all its eigenvalues
Use method designed specifically for finding all roots of polynomial, such as Jenkins-Traub
Systems of Nonlinear Equations
Solving systems of nonlinear equations is much more difficult than scalar case because
Wider variety of behavior is possible, so determining existence and number of solutions or good starting guess is much more complex
There is no simple way, in general, to guarantee convergence to desired solution or to bracket solution to produce absolutely safe method
Computational overhead increases rapidly with dimension of problem
Fixed-Point Iteration
Fixed-point problem for g: R^n → R^n is to find vector x such that
x = g(x)
Corresponding fixed-point iteration is
x_{k+1} = g(x_k)
If ρ(G(x*)) < 1, where ρ is spectral radius and G(x) is Jacobian matrix of g evaluated at x, then fixed-point iteration converges if started close enough to solution
Convergence rate is normally linear, with constant C given by spectral radius ρ(G(x*))
If G(x*) = O, then convergence rate is at least quadratic
Newton’s Method
In n dimensions, Newton's method has form
x_{k+1} = x_k − J(x_k)^{−1} f(x_k)
where J(x) is Jacobian matrix of f,
{J(x)}_{ij} = ∂f_i(x)/∂x_j
In practice, we do not explicitly invert J(x_k), but instead solve linear system
J(x_k) s_k = −f(x_k)
for Newton step s_k, then take as next iterate
x_{k+1} = x_k + s_k
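A minimal numpy sketch of this iteration for the system used in the example that follows (starting point and iteration count are illustrative):

import numpy as np

def f(x):
    return np.array([x[0] + 2.0*x[1] - 2.0,
                     x[0]**2 + 4.0*x[1]**2 - 4.0])

def J(x):   # Jacobian matrix of f
    return np.array([[1.0,      2.0],
                     [2.0*x[0], 8.0*x[1]]])

x = np.array([1.0, 2.0])
for _ in range(10):
    s = np.linalg.solve(J(x), -f(x))   # solve J(x_k) s_k = -f(x_k)
    x = x + s
print(x)   # converges to [0, 1]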
Example: Newton’s Method
Use Newton's method to solve nonlinear system

f(x) = [ x_1 + 2x_2 − 2,  x_1^2 + 4x_2^2 − 4 ]^T = 0

Jacobian matrix is J_f(x) = [ 1, 2; 2x_1, 8x_2 ]

If we take x_0 = [1, 2]^T, then

f(x_0) = [3, 13]^T,   J_f(x_0) = [ 1, 2; 2, 16 ]

Solving system [ 1, 2; 2, 16 ] s_0 = [−3, −13]^T gives s_0 = [−1.83, −0.58]^T, so

x_1 = x_0 + s_0 = [−0.83, 1.42]^T
Example, continued

Evaluating at new point,

f(x_1) = [0, 4.72]^T,   J_f(x_1) = [ 1, 2; −1.67, 11.3 ]

Solving system [ 1, 2; −1.67, 11.3 ] s_1 = [0, −4.72]^T gives s_1 = [0.64, −0.32]^T, so x_2 = x_1 + s_1 = [−0.19, 1.10]^T

Evaluating at new point,

f(x_2) = [0, 0.83]^T,   J_f(x_2) = [ 1, 2; −0.38, 8.76 ]

Iterations eventually converge to solution x* = [0, 1]^T
< interactive example >
Convergence of Newton’s Method
Differentiating corresponding fixed-point operator
g(x) = x − J(x)^{−1} f(x)
and evaluating at solution x* gives
G(x*) = I − ( J(x*)^{−1} J(x*) + Σ_{i=1}^{n} f_i(x*) H_i(x*) ) = O
where H_i(x) is component matrix of derivative of J(x)^{−1}
Convergence rate of Newton's method for nonlinear systems is normally quadratic, provided Jacobian matrix J(x*) is nonsingular
But it must be started close enough to solution to converge
Cost of Newton’s Method
Cost per iteration of Newton's method for dense problem in n dimensions is substantial:
Computing Jacobian matrix costs n^2 scalar function evaluations
Solving linear system costs O(n^3) operations
Secant Updating Methods
Secant updating methods reduce cost by
Using function values at successive iterates to build approximate Jacobian and avoiding explicit evaluation of derivatives
Updating factorization of approximate Jacobian rather than refactoring it each iteration
Most secant updating methods have superlinear but not quadratic convergence rate
Secant updating methods often cost less overall than Newton's method because of lower cost per iteration
Broyden’s Method
Broyden’s method is typical secant updating method
Beginning with initial guess x_0 for solution and initial approximate Jacobian B_0, following steps are repeated until convergence:

x_0 = initial guess
B_0 = initial Jacobian approximation
for k = 0, 1, 2, ...
    Solve B_k s_k = −f(x_k) for s_k
    x_{k+1} = x_k + s_k
    y_k = f(x_{k+1}) − f(x_k)
    B_{k+1} = B_k + ((y_k − B_k s_k) s_k^T) / (s_k^T s_k)
end
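A runnable numpy sketch of these steps for the same system as the Newton example, with B_0 taken as the exact Jacobian at x_0 (iteration count is illustrative):

import numpy as np

def f(x):
    return np.array([x[0] + 2.0*x[1] - 2.0,
                     x[0]**2 + 4.0*x[1]**2 - 4.0])

x = np.array([1.0, 2.0])
B = np.array([[1.0, 2.0], [2.0, 16.0]])        # B_0 = J_f(x_0)
for _ in range(15):
    s = np.linalg.solve(B, -f(x))              # solve B_k s_k = -f(x_k)
    x_new = x + s
    y = f(x_new) - f(x)
    B = B + np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
    x = x_new
print(x)   # converges to [0, 1]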
Broyden’s Method, continued
Motivation for formula for B_{k+1} is to make least change to B_k subject to satisfying secant equation
B_{k+1}(x_{k+1} − x_k) = f(x_{k+1}) − f(x_k)
In practice, factorization of B_k is updated instead of updating B_k directly, so total cost per iteration is only O(n^2)
Example: Broyden’s Method
Use Broyden's method to solve nonlinear system

f(x) = [ x_1 + 2x_2 − 2,  x_1^2 + 4x_2^2 − 4 ]^T = 0

If x_0 = [1, 2]^T, then f(x_0) = [3, 13]^T, and we choose

B_0 = J_f(x_0) = [ 1, 2; 2, 16 ]

Solving system [ 1, 2; 2, 16 ] s_0 = [−3, −13]^T gives s_0 = [−1.83, −0.58]^T, so x_1 = x_0 + s_0 = [−0.83, 1.42]^T
Example, continued
Evaluating at new point x_1 gives f(x_1) = [0, 4.72]^T, so y_0 = f(x_1) − f(x_0) = [−3, −8.28]^T

From updating formula, we obtain

B_1 = [ 1, 2; 2, 16 ] + [ 0, 0; −2.34, −0.74 ] = [ 1, 2; −0.34, 15.3 ]

Solving system [ 1, 2; −0.34, 15.3 ] s_1 = [0, −4.72]^T gives s_1 = [0.59, −0.30]^T, so x_2 = x_1 + s_1 = [−0.24, 1.12]^T
Example, continued
Evaluating at new point x_2 gives f(x_2) = [0, 1.08]^T, so y_1 = f(x_2) − f(x_1) = [0, −3.64]^T

From updating formula, we obtain

B_2 = [ 1, 2; −0.34, 15.3 ] + [ 0, 0; 1.46, −0.73 ] = [ 1, 2; 1.12, 14.5 ]

Iterations continue until convergence to solution x* = [0, 1]^T

< interactive example >
Robust Newton-Like Methods
Newton's method and its variants may fail to converge when started far from solution
Safeguards can enlarge region of convergence of Newton-like methods
Simplest precaution is damped Newton method, in which new iterate is
x_{k+1} = x_k + α_k s_k
where s_k is Newton (or Newton-like) step and α_k is scalar parameter chosen to ensure progress toward solution
Parameter α_k reduces Newton step when it is too large, but α_k = 1 suffices near solution and still yields fast asymptotic convergence rate
Trust-Region Methods
Another approach is to maintain estimate of trust region where Taylor series approximation, upon which Newton's method is based, is sufficiently accurate for resulting computed step to be reliable
Adjusting size of trust region to constrain step size when necessary usually enables progress toward solution even starting far away, yet still permits rapid convergence once near solution
Unlike damped Newton method, trust region method may modify direction as well as length of Newton step
More details on this approach will be given in Chapter 6
Optimization
Given function f: R^n → R, and set S ⊆ R^n, find x* ∈ S such that f(x*) ≤ f(x) for all x ∈ S
x* is called minimizer or minimum of f
It suffices to consider only minimization, since maximum of f is minimum of −f
Objective function f is usually differentiable, and may be linear or nonlinear
Constraint set S is defined by system of equations and inequalities, which may be linear or nonlinear
Points x ∈ S are called feasible points
If S = R^n, problem is unconstrained
Optimization Problems
General continuous optimization problem:
min f(x) subject to g(x) = 0 and h(x) ≤ 0
where f: R^n → R, g: R^n → R^m, h: R^n → R^p
Linear programming: f, g, and h are all linear
Nonlinear programming: at least one of f, g, and h is nonlinear
Examples: Optimization Problems
Minimize weight of structure subject to constraint on its strength, or maximize its strength subject to constraint on its weight
Minimize cost of diet subject to nutritional constraints
Minimize surface area of cylinder subject to constraint on its volume:
min_{x_1, x_2} f(x_1, x_2) = 2π x_1 (x_1 + x_2)
subject to g(x_1, x_2) = π x_1^2 x_2 − V = 0
where x_1 and x_2 are radius and height of cylinder, and V is required volume
Local vs Global Optimization
x* ∈ S is global minimum if f(x*) ≤ f(x) for all x ∈ S
x* ∈ S is local minimum if f(x*) ≤ f(x) for all feasible x in some neighborhood of x*
Global Optimization
Finding, or even verifying, global minimum is difficult, in general
Most optimization methods are designed to find local minimum, which may or may not be global minimum
If global minimum is desired, one can try several widely separated starting points and see if all produce same result
For some problems, such as linear programming, global optimization is more tractable
Existence of Minimum
If f is continuous on closed and bounded set S ⊆ R^n, then f has global minimum on S
If S is not closed or is unbounded, then f may have no local or global minimum on S
Continuous function f on unbounded set S ⊆ R^n is coercive if
lim_{‖x‖→∞} f(x) = +∞
i.e., f(x) must be large whenever ‖x‖ is large
If f is coercive on closed, unbounded set S ⊆ R^n, then f has global minimum on S
Level Sets
Level set for function f : S ⊆ Rn → R is set of all points inS for which f has some given constant value
For given γ ∈ R, sublevel set is
Lγ = {x ∈ S : f(x) ≤ γ}
If continuous function f on S ⊆ Rn has nonempty sublevelset that is closed and bounded, then f has global minimumon S
If S is unbounded, then f is coercive on S if, and only if, allof its sublevel sets are bounded
Uniqueness of Minimum
Set S ⊆ Rn is convex if it contains line segment betweenany two of its points
Function f : S ⊆ Rn → R is convex on convex set S if itsgraph along any line segment in S lies on or below chordconnecting function values at endpoints of segment
Any local minimum of convex function f on convex setS ⊆ Rn is global minimum of f on S
Any local minimum of strictly convex function f on convexset S ⊆ Rn is unique global minimum of f on S
First-Order Optimality Condition
For function of one variable, one can find extremum bydifferentiating function and setting derivative to zero
Generalization to function of n variables is to find criticalpoint, i.e., solution of nonlinear system
∇f(x) = 0
where ∇f(x) is gradient vector of f , whose ith componentis ∂f(x)/∂xi
For continuously differentiable f : S ⊆ Rn → R, any interiorpoint x∗ of S at which f has local minimum must be criticalpoint of f
But not all critical points are minima: they can also bemaxima or saddle points
Second-Order Optimality Condition
For twice continuously differentiable f : S ⊆ Rn → R, wecan distinguish among critical points by consideringHessian matrix Hf (x) defined by
{Hf(x)}_{ij} = ∂²f(x) / ∂xi ∂xj
which is symmetric
At critical point x∗, if Hf (x∗) is
positive definite, then x∗ is minimum of f
negative definite, then x∗ is maximum of f
indefinite, then x∗ is saddle point of f
singular, then various pathological situations are possible
Constrained Optimality

If problem is constrained, only feasible directions are relevant
For equality-constrained problem
min f(x) subject to g(x) = 0
where f : Rn → R and g : Rn → Rm, with m ≤ n, necessarycondition for feasible point x∗ to be solution is that negativegradient of f lie in space spanned by constraint normals,
−∇f(x∗) = Jg^T(x∗) λ
where Jg is Jacobian matrix of g, and λ is vector ofLagrange multipliers
This condition says we cannot reduce objective functionwithout violating constraints
Constrained Optimality, continued
Lagrangian function L : Rn+m → R, is defined by
L(x,λ) = f(x) + λT g(x)
Its gradient is given by
∇L(x, λ) = [ ∇f(x) + Jg^T(x) λ
             g(x) ]
Its Hessian is given by
HL(x, λ) = [ B(x, λ)   Jg^T(x)
             Jg(x)     O ]
where
B(x, λ) = Hf(x) + Σ_{i=1}^m λi Hgi(x)
Constrained Optimality, continued
Together, necessary condition and feasibility imply criticalpoint of Lagrangian function,
∇L(x, λ) = [ ∇f(x) + Jg^T(x) λ
             g(x) ] = 0
Hessian of Lagrangian is symmetric, but not positivedefinite, so critical point of L is saddle point rather thanminimum or maximum
Critical point (x∗,λ∗) of L is constrained minimum of f ifB(x∗,λ∗) is positive definite on null space of Jg(x∗)
If columns of Z form basis for null space, then testprojected Hessian ZT BZ for positive definiteness
Constrained Optimality, continued
If inequalities are present, then KKT optimality conditionsalso require nonnegativity of Lagrange multiplierscorresponding to inequalities, and complementaritycondition
Sensitivity and Conditioning
Function minimization and equation solving are closelyrelated problems, but their sensitivities differ
In one dimension, absolute condition number of root x∗ ofequation f(x) = 0 is 1/|f ′(x∗)|, so if |f(x)| ≤ ε, then|x− x∗| may be as large as ε/|f ′(x∗)|
For minimizing f , Taylor series expansion
f(x) = f(x∗ + h) = f(x∗) + f′(x∗)h + ½ f″(x∗)h² + O(h³)
shows that, since f′(x∗) = 0, if |f(x) − f(x∗)| ≤ ε, then |x − x∗| may be as large as √(2ε/|f″(x∗)|)
Thus, based on function values alone, minima can becomputed to only about half precision
Unimodality
For minimizing function of one variable, we need “bracket”for solution analogous to sign change for nonlinearequation
Real-valued function f is unimodal on interval [a, b] if thereis unique x∗ ∈ [a, b] such that f(x∗) is minimum of f on[a, b], and f is strictly decreasing for x ≤ x∗, strictlyincreasing for x∗ ≤ x
Unimodality enables discarding portions of interval basedon sample function values, analogous to interval bisection
Golden Section Search
Suppose f is unimodal on [a, b], and let x1 and x2 be twopoints within [a, b], with x1 < x2
Evaluating and comparing f(x1) and f(x2), we can discardeither (x2, b] or [a, x1), with minimum known to lie inremaining subinterval
To repeat process, we need compute only one newfunction evaluation
To reduce length of interval by fixed fraction at eachiteration, each new pair of points must have samerelationship with respect to new interval that previous pairhad with respect to previous interval
Golden Section Search, continued
To accomplish this, we choose relative positions of two points as τ and 1 − τ, where τ² = 1 − τ, so τ = (√5 − 1)/2 ≈ 0.618 and 1 − τ ≈ 0.382
Whichever subinterval is retained, its length will be τrelative to previous interval, and interior point retained willbe at position either τ or 1− τ relative to new interval
To continue iteration, we need to compute only one newfunction value, at complementary point
This choice of sample points is called golden sectionsearch
Golden section search is safe but convergence rate is onlylinear, with constant C ≈ 0.618
Golden Section Search, continued

τ = (√5 − 1)/2
x1 = a + (1 − τ)(b − a); f1 = f(x1)
x2 = a + τ(b − a); f2 = f(x2)
while ((b − a) > tol) do
    if (f1 > f2) then
        a = x1
        x1 = x2
        f1 = f2
        x2 = a + τ(b − a)
        f2 = f(x2)
    else
        b = x2
        x2 = x1
        f2 = f1
        x1 = a + (1 − τ)(b − a)
        f1 = f(x1)
    end
end
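As an added illustration (not part of the original slides), the pseudocode above translates almost line for line into Python; the interval [0, 2] and the test function f(x) = 0.5 − x exp(−x²) are chosen to match the example that follows.

import math

def golden_section(f, a, b, tol=1e-3):
    # Minimize a unimodal f on [a, b] by golden section search
    tau = (math.sqrt(5.0) - 1.0) / 2.0          # about 0.618
    x1 = a + (1 - tau) * (b - a); f1 = f(x1)
    x2 = a + tau * (b - a);       f2 = f(x2)
    while (b - a) > tol:
        if f1 > f2:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + tau * (b - a); f2 = f(x2)
        else:                                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (1 - tau) * (b - a); f1 = f(x1)
    return 0.5 * (a + b)

f = lambda x: 0.5 - x * math.exp(-x * x)
print(golden_section(f, 0.0, 2.0))               # about 0.707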
Example: Golden Section Search
Use golden section search to minimize
f(x) = 0.5− x exp(−x2)
Example, continued
x1      f1      x2      f2
0.764   0.074   1.236   0.232
0.472   0.122   0.764   0.074
0.764   0.074   0.944   0.113
0.652   0.074   0.764   0.074
0.584   0.085   0.652   0.074
0.652   0.074   0.695   0.071
0.695   0.071   0.721   0.071
0.679   0.072   0.695   0.071
0.695   0.071   0.705   0.071
0.705   0.071   0.711   0.071
< interactive example >
Successive Parabolic Interpolation

Fit quadratic polynomial to three function values
Take minimum of quadratic to be new approximation to minimum of function
New point replaces oldest of three previous points and process is repeated until convergence
Convergence rate of successive parabolic interpolation is superlinear, with r ≈ 1.324
Example: Successive Parabolic Interpolation
Use successive parabolic interpolation to minimize
f(x) = 0.5− x exp(−x2)
Example, continued
xk      f(xk)
0.000   0.500
0.600   0.081
1.200   0.216
0.754   0.073
0.721   0.071
0.692   0.071
0.707   0.071
< interactive example >
Newton’s Method

Another local quadratic approximation is truncated Taylor series
f(x + h) ≈ f(x) + f′(x)h + (f″(x)/2) h²
By differentiation, minimum of this quadratic function of h isgiven by h = −f ′(x)/f ′′(x)
Suggests iteration scheme
xk+1 = xk − f ′(xk)/f ′′(xk)
which is Newton’s method for solving nonlinear equationf ′(x) = 0
Newton’s method for finding minimum normally hasquadratic convergence rate, but must be started closeenough to solution to converge < interactive example >
Example: Newton’s Method

Use Newton’s method to minimize f(x) = 0.5 − x exp(−x²)
First and second derivatives of f are given by
f′(x) = (2x² − 1) exp(−x²)
and
f″(x) = 2x(3 − 2x²) exp(−x²)
Newton iteration for zero of f′ is given by
xk+1 = xk − (2xk² − 1) / (2xk(3 − 2xk²))
Using starting guess x0 = 1, we obtain

xk      f(xk)
1.000   0.132
0.500   0.111
0.700   0.071
0.707   0.071
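The iteration in this table is easy to reproduce; the short Python sketch below (added here, not from the slides) applies the Newton step for f′(x) = 0 starting from x0 = 1.

import math

def f(x):   return 0.5 - x * math.exp(-x * x)
def fp(x):  return (2 * x * x - 1) * math.exp(-x * x)          # f'
def fpp(x): return 2 * x * (3 - 2 * x * x) * math.exp(-x * x)  # f''

x = 1.0
for k in range(4):
    print(round(x, 3), round(f(x), 3))   # reproduces the table above
    x = x - fp(x) / fpp(x)               # Newton step for f'(x) = 0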
Safeguarded Methods
As with nonlinear equations in one dimension,slow-but-sure and fast-but-risky optimization methods canbe combined to provide both safety and efficiency
Most library routines for one-dimensional optimization arebased on this hybrid approach
Popular combination is golden section search andsuccessive parabolic interpolation, for which no derivativesare required
Direct Search Methods
Direct search methods for multidimensional optimizationmake no use of function values other than comparing them
For minimizing function f of n variables, Nelder-Meadmethod begins with n + 1 starting points, forming simplexin Rn
Then move to new point along straight line from currentpoint having highest function value through centroid ofother points
New point replaces worst point, and process is repeated
Direct search methods are useful for nonsmooth functionsor for small n, but expensive for larger n
< interactive example >
Steepest Descent Method
Let f : Rn → R be real-valued function of n real variables
At any point x where gradient vector is nonzero, negativegradient, −∇f(x), points downhill toward lower values of f
In fact, −∇f(x) is locally direction of steepest descent: fdecreases more rapidly along direction of negativegradient than along any other
Steepest descent method: starting from initial guess x0,successive approximate solutions given by
xk+1 = xk − αk∇f(xk)
where αk is line search parameter that determines how farto go in given direction
Steepest Descent, continued
Given descent direction, such as negative gradient,determining appropriate value for αk at each iteration isone-dimensional minimization problem
minαk
f(xk − αk∇f(xk))
that can be solved by methods already discussed
Steepest descent method is very reliable: it can alwaysmake progress provided gradient is nonzero
But method is myopic in its view of function’s behavior, andresulting iterates can zigzag back and forth, making veryslow progress toward solution
In general, convergence rate of steepest descent is onlylinear, with constant factor that can be arbitrarily close to 1
Example: Steepest Descent
Use steepest descent method to minimize
f(x) = 0.5 x1² + 2.5 x2²
Gradient is given by ∇f(x) = [x1, 5x2]^T

Taking x0 = [5, 1]^T, we have ∇f(x0) = [5, 5]^T

Performing line search along negative gradient direction,
min_{α0} f(x0 − α0 ∇f(x0))
exact minimum along line is given by α0 = 1/3, so next approximation is
x1 = [3.333, −0.667]^T
Example, continued
xk               f(xk)     ∇f(xk)
5.000   1.000    15.000    5.000    5.000
3.333  −0.667     6.667    3.333   −3.333
2.222   0.444     2.963    2.222    2.222
1.481  −0.296     1.317    1.481   −1.481
0.988   0.198     0.585    0.988    0.988
0.658  −0.132     0.260    0.658   −0.658
0.439   0.088     0.116    0.439    0.439
0.293  −0.059     0.051    0.293   −0.293
0.195   0.039     0.023    0.195    0.195
0.130  −0.026     0.010    0.130   −0.130
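The zigzagging in this table can be reproduced with a few lines of NumPy; this added sketch uses the closed-form exact line search α = gᵀg / gᵀAg, which is available only because the objective is a quadratic with Hessian A = diag(1, 5).

import numpy as np

A = np.diag([1.0, 5.0])                   # Hessian of f(x) = 0.5*x1^2 + 2.5*x2^2
f = lambda x: 0.5 * x[0]**2 + 2.5 * x[1]**2
grad = lambda x: A @ x

x = np.array([5.0, 1.0])
for k in range(10):
    g = grad(x)
    alpha = (g @ g) / (g @ A @ g)         # exact line search for this quadratic
    print(x, f(x))
    x = x - alpha * g                     # steepest descent step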
Example, continued
< interactive example >
Newton’s Method
Broader view can be obtained by local quadraticapproximation, which is equivalent to Newton’s method
In multidimensional optimization, we seek zero of gradient,so Newton iteration has form
xk+1 = xk − Hf^{-1}(xk) ∇f(xk)
where Hf(x) is Hessian matrix of second partial derivatives of f,
{Hf(x)}_{ij} = ∂²f(x) / ∂xi ∂xj
Newton’s Method, continued
Do not explicitly invert Hessian matrix, but instead solvelinear system
Hf (xk)sk = −∇f(xk)
for Newton step sk, then take as next iterate
xk+1 = xk + sk
Convergence rate of Newton’s method for minimization isnormally quadratic
As usual, Newton’s method is unreliable unless startedclose enough to solution to converge
< interactive example >
Example: Newton’s Method
Use Newton’s method to minimize
f(x) = 0.5 x1² + 2.5 x2²

Gradient and Hessian are given by
∇f(x) = [x1, 5x2]^T   and   Hf(x) = [1  0
                                     0  5]

Taking x0 = [5, 1]^T, we have ∇f(x0) = [5, 5]^T

Linear system for Newton step is
[1  0
 0  5] s0 = [−5, −5]^T
so x1 = x0 + s0 = [5, 1]^T + [−5, −1]^T = [0, 0]^T, which is exact solution for this problem, as expected for quadratic function
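A minimal Python sketch of the Newton iteration with the linear solve (added for illustration); applied to the same quadratic it terminates in one step, as shown above.

import numpy as np

grad = lambda x: np.array([x[0], 5.0 * x[1]])   # gradient of 0.5*x1^2 + 2.5*x2^2
hess = lambda x: np.diag([1.0, 5.0])            # constant Hessian

x = np.array([5.0, 1.0])
for k in range(3):
    s = np.linalg.solve(hess(x), -grad(x))      # solve H s = -grad f; never invert H
    x = x + s
    print(x)                                    # [0, 0] after the first step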
Newton’s Method, continued
In principle, line search parameter is unnecessary withNewton’s method, since quadratic model determineslength, as well as direction, of step to next approximatesolution
When started far from solution, however, it may still beadvisable to perform line search along direction of Newtonstep sk to make method more robust (damped Newton)
Once iterates are near solution, then αk = 1 should sufficefor subsequent iterations
Newton’s Method, continued
If objective function f has continuous second partialderivatives, then Hessian matrix Hf is symmetric, andnear minimum it is positive definite
Thus, linear system for step to next iterate can be solved inonly about half of work required for LU factorization
Far from minimum, Hf (xk) may not be positive definite, soNewton step sk may not be descent direction for function,i.e., we may not have
∇f(xk)T sk < 0
In this case, alternative descent direction can becomputed, such as negative gradient or direction ofnegative curvature, and then perform line search
Trust Region Methods
Alternative to line search is trust region method, in whichapproximate solution is constrained to lie within regionwhere quadratic model is sufficiently accurate
If current trust radius is binding, minimizing quadraticmodel function subject to this constraint may modifydirection as well as length of Newton step
Accuracy of quadratic model is assessed by comparingactual decrease in objective function with that predicted byquadratic model, and trust radius is increased ordecreased accordingly
Quasi-Newton Methods
Newton’s method costs O(n3) arithmetic and O(n2) scalarfunction evaluations per iteration for dense problem
Many variants of Newton’s method improve reliability andreduce overhead
Quasi-Newton methods have form
xk+1 = xk − αk Bk^{-1} ∇f(xk)
where αk is line search parameter and Bk is approximationto Hessian matrix
Many quasi-Newton methods are more robust thanNewton’s method, are superlinearly convergent, and havelower overhead per iteration, which often more than offsetstheir slower convergence rate
Secant Updating Methods
Could use Broyden’s method to seek zero of gradient, butthis would not preserve symmetry of Hessian matrix
Several secant updating formulas have been developed forminimization that not only preserve symmetry inapproximate Hessian matrix, but also preserve positivedefiniteness
Symmetry reduces amount of work required by about half,while positive definiteness guarantees that quasi-Newtonstep will be descent direction
BFGS Method
One of most effective secant updating methods for minimizationis BFGS
x0 = initial guess
B0 = initial Hessian approximation
for k = 0, 1, 2, . . .
    Solve Bk sk = −∇f(xk) for sk
    xk+1 = xk + sk
    yk = ∇f(xk+1) − ∇f(xk)
    Bk+1 = Bk + (yk yk^T)/(yk^T sk) − (Bk sk sk^T Bk)/(sk^T Bk sk)
end
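The update formula translates directly into NumPy; the sketch below (an added illustration, not library code) runs the bare algorithm without line search on f(x) = 0.5x1² + 2.5x2² and reproduces the iterates tabulated in the example a few slides below.

import numpy as np

grad = lambda x: np.array([x[0], 5.0 * x[1]])   # gradient of 0.5*x1^2 + 2.5*x2^2

x = np.array([5.0, 1.0])
B = np.eye(2)                                   # initial Hessian approximation
for k in range(6):
    s = np.linalg.solve(B, -grad(x))            # quasi-Newton step: B s = -grad f
    x_new = x + s
    y = grad(x_new) - grad(x)
    B = B + np.outer(y, y) / (y @ s) \
          - (B @ np.outer(s, s) @ B) / (s @ B @ s)   # BFGS update
    x = x_new
    print(x)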
BFGS Method, continued
In practice, factorization of Bk is updated rather than Bk
itself, so linear system for sk can be solved at cost of O(n2)rather than O(n3) work
Unlike Newton’s method for minimization, no secondderivatives are required
Can start with B0 = I, so initial step is along negativegradient, and then second derivative information isgradually built up in approximate Hessian matrix oversuccessive iterations
BFGS normally has superlinear convergence rate, eventhough approximate Hessian does not necessarilyconverge to true Hessian
Line search can be used to enhance effectiveness
Example: BFGS Method
Use BFGS to minimize f(x) = 0.5 x1² + 2.5 x2²

Gradient is given by ∇f(x) = [x1, 5x2]^T

Taking x0 = [5, 1]^T and B0 = I, initial step is negative gradient, so
x1 = x0 + s0 = [5, 1]^T + [−5, −5]^T = [0, −4]^T

Updating approximate Hessian using BFGS formula, we obtain
B1 = [0.667  0.333
      0.333  4.667]
Then new step is computed and process is repeated
Example: BFGS Method
xk                f(xk)     ∇f(xk)
 5.000   1.000    15.000     5.000    5.000
 0.000  −4.000    40.000     0.000  −20.000
−2.222   0.444     2.963    −2.222    2.222
 0.816   0.082     0.350     0.816    0.408
−0.009  −0.015     0.001    −0.009   −0.077
−0.001   0.001     0.000    −0.001    0.005
Increase in function value can be avoided by using linesearch, which generally enhances convergence
For quadratic objective function, BFGS with exact linesearch finds exact solution in at most n iterations, where nis dimension of problem < interactive example >
Conjugate Gradient Method
Another method that does not require explicit secondderivatives, and does not even store approximation toHessian matrix, is conjugate gradient (CG) method
CG generates sequence of conjugate search directions,implicitly accumulating information about Hessian matrix
For quadratic objective function, CG is theoretically exactafter at most n iterations, where n is dimension of problem
CG is effective for general unconstrained minimization aswell
Conjugate Gradient Method, continued
x0 = initial guess
g0 = ∇f(x0)
s0 = −g0
for k = 0, 1, 2, . . .
    Choose αk to minimize f(xk + αk sk)
    xk+1 = xk + αk sk
    gk+1 = ∇f(xk+1)
    βk+1 = (gk+1^T gk+1)/(gk^T gk)
    sk+1 = −gk+1 + βk+1 sk
end

Alternative formula for βk+1 is
βk+1 = ((gk+1 − gk)^T gk+1)/(gk^T gk)
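A compact Python sketch of this nonlinear CG recurrence (added for illustration); the exact line search below uses the closed form available for the quadratic test problem with Hessian A = diag(1, 5), which is an assumption of this sketch rather than part of the general method.

import numpy as np

A = np.diag([1.0, 5.0])                 # Hessian of the quadratic test problem
grad = lambda x: A @ x

x = np.array([5.0, 1.0])
g = grad(x)
s = -g
for k in range(2):                      # exact in n = 2 steps for a quadratic
    alpha = -(g @ s) / (s @ A @ s)      # exact line search along s
    x = x + alpha * s
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves formula
    s = -g_new + beta * s
    g = g_new
    print(x)                            # [3.333, -0.667], then [0, 0]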
Example: Conjugate Gradient Method
Use CG method to minimize f(x) = 0.5 x1² + 2.5 x2²

Gradient is given by ∇f(x) = [x1, 5x2]^T

Taking x0 = [5, 1]^T, initial search direction is negative gradient,
s0 = −g0 = −∇f(x0) = [−5, −5]^T

Exact minimum along line is given by α0 = 1/3, so next approximation is x1 = [3.333, −0.667]^T, and we compute new gradient,
g1 = ∇f(x1) = [3.333, −3.333]^T
Example, continued
So far there is no difference from steepest descent method
At this point, however, rather than search along newnegative gradient, we compute instead
β1 = (g1^T g1)/(g0^T g0) = 0.444
which gives as next search direction
s1 = −g1 + β1 s0 = [−3.333, 3.333]^T + 0.444 [−5, −5]^T = [−5.556, 1.111]^T

Minimum along this direction is given by α1 = 0.6, which gives exact solution at origin, as expected for quadratic function
< interactive example >
Truncated Newton Methods
Another way to reduce work in Newton-like methods is tosolve linear system for Newton step by iterative method
Small number of iterations may suffice to produce step asuseful as true Newton step, especially far from overallsolution, where true Newton step may be unreliableanyway
Good choice for linear iterative solver is CG method, whichgives step intermediate between steepest descent andNewton-like step
Since only matrix-vector products are required, explicitformation of Hessian matrix can be avoided by using finitedifference of gradient along given vector
Nonlinear Least Squares
Given data (ti, yi), find vector x of parameters that gives“best fit” in least squares sense to model function f(t, x),where f is nonlinear function of x
Define components of residual function
ri(x) = yi − f(ti,x), i = 1, . . . ,m
so we want to minimize φ(x) = ½ r^T(x) r(x)

Gradient vector is ∇φ(x) = J^T(x) r(x) and Hessian matrix is
Hφ(x) = J^T(x) J(x) + Σ_{i=1}^m ri(x) Hi(x)
where J(x) is Jacobian of r(x), and Hi(x) is Hessian of ri(x)
Nonlinear Least Squares, continued
Linear system for Newton step is
( J^T(xk) J(xk) + Σ_{i=1}^m ri(xk) Hi(xk) ) sk = −J^T(xk) r(xk)
m Hessian matrices Hi are usually inconvenient andexpensive to compute
Moreover, in Hφ each Hi is multiplied by residualcomponent ri, which is small at solution if fit of modelfunction to data is good
Gauss-Newton Method
This motivates Gauss-Newton method for nonlinear leastsquares, in which second-order term is dropped and linearsystem
JT (xk)J(xk)sk = −JT (xk)r(xk)
is solved for approximate Newton step sk at each iteration
This is system of normal equations for linear least squaresproblem
J(xk)sk∼= −r(xk)
which can be solved better by QR factorization
Next approximate solution is then given by
xk+1 = xk + sk
and process is repeated until convergence
Example: Gauss-Newton Method
Use Gauss-Newton method to fit nonlinear model function
f(t, x) = x1 exp(x2t)
to data
t   0.0   1.0   2.0   3.0
y   2.0   0.7   0.3   0.1

For this model function, entries of Jacobian matrix of residual function r are given by
{J(x)}_{i,1} = ∂ri(x)/∂x1 = −exp(x2 ti)
{J(x)}_{i,2} = ∂ri(x)/∂x2 = −x1 ti exp(x2 ti)
Example, continued
If we take x0 = [1, 0]^T, then Gauss-Newton step s0 is given by linear least squares problem
[−1   0
 −1  −1
 −1  −2
 −1  −3] s0 ≅ [−1, 0.3, 0.7, 0.9]^T
whose solution is s0 = [0.69, −0.61]^T

Then next approximate solution is given by x1 = x0 + s0, and process is repeated until convergence
Example, continued
xk                ‖r(xk)‖₂²
1.000    0.000    2.390
1.690   −0.610    0.212
1.975   −0.930    0.007
1.994   −1.004    0.002
1.995   −1.009    0.002
1.995   −1.010    0.002
< interactive example >
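For concreteness, here is a Python sketch of the Gauss-Newton loop for this exponential fit (an added illustration); each step solves the linear least squares subproblem J s ≅ −r with numpy.linalg.lstsq.

import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 0.7, 0.3, 0.1])

def residual(x):                          # r_i(x) = y_i - x1*exp(x2*t_i)
    return y - x[0] * np.exp(x[1] * t)

def jacobian(x):                          # Jacobian of the residual function
    return np.column_stack([-np.exp(x[1] * t),
                            -x[0] * t * np.exp(x[1] * t)])

x = np.array([1.0, 0.0])
for k in range(6):
    r = residual(x)
    print(x, r @ r)                       # parameters and ||r||_2^2, as in the table
    s, *_ = np.linalg.lstsq(jacobian(x), -r, rcond=None)
    x = x + s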
Gauss-Newton Method, continued
Gauss-Newton method replaces nonlinear least squaresproblem by sequence of linear least squares problemswhose solutions converge to solution of original nonlinearproblem
If residual at solution is large, then second-order termomitted from Hessian is not negligible, and Gauss-Newtonmethod may converge slowly or fail to converge
In such “large-residual” cases, it may be best to usegeneral nonlinear minimization method that takes intoaccount true full Hessian matrix
Levenberg-Marquardt Method
Levenberg-Marquardt method is another useful alternativewhen Gauss-Newton approximation is inadequate or yieldsrank deficient linear least squares subproblem
In this method, linear system at each iteration is of form
(JT (xk)J(xk) + µkI)sk = −JT (xk)r(xk)
where µk is scalar parameter chosen by some strategy
Corresponding linear least squares problem is
[ J(xk)
  √µk I ] sk ≅ [ −r(xk)
                 0 ]
With suitable strategy for choosing µk, this method can be very robust in practice, and it forms basis for several effective software packages < interactive example >
Equality-Constrained Optimization
For equality-constrained minimization problem
min f(x) subject to g(x) = 0
where f : Rn → R and g : Rn → Rm, with m ≤ n, we seekcritical point of Lagrangian L(x,λ) = f(x) + λT g(x)
Applying Newton’s method to nonlinear system
∇L(x, λ) = [ ∇f(x) + Jg^T(x) λ
             g(x) ] = 0
we obtain linear system
[ B(x, λ)   Jg^T(x)     [ s          [ ∇f(x) + Jg^T(x) λ
  Jg(x)     O       ]     δ ]  = −     g(x) ]
for Newton step (s, δ) in (x, λ) at each iteration
Sequential Quadratic Programming
Foregoing block 2× 2 linear system is equivalent toquadratic programming problem, so this approach isknown as sequential quadratic programming
Types of solution methods include
Direct solution methods, in which entire block 2 × 2 system is solved directly
Range space methods, based on block elimination in block 2 × 2 linear system
Null space methods, based on orthogonal factorization of matrix of constraint normals, Jg^T(x)
< interactive example >
Merit Function
Once Newton step (s, δ) determined, we need meritfunction to measure progress toward overall solution foruse in line search or trust region
Popular choices include penalty function
φρ(x) = f(x) + ½ ρ g(x)^T g(x)
and augmented Lagrangian function
Lρ(x, λ) = f(x) + λ^T g(x) + ½ ρ g(x)^T g(x)
where parameter ρ > 0 determines relative weighting of optimality vs feasibility

Given starting guess x0, good starting guess for λ0 can be obtained from least squares problem
Jg^T(x0) λ0 ≅ −∇f(x0)
Inequality-Constrained Optimization
Methods just outlined for equality constraints can beextended to handle inequality constraints by using activeset strategy
Inequality constraints are provisionally divided into thosethat are satisfied already (and can therefore be temporarilydisregarded) and those that are violated (and are thereforetemporarily treated as equality constraints)
This division of constraints is revised as iterations proceeduntil eventually correct constraints are identified that arebinding at solution
Penalty Methods
Merit function can also be used to convertequality-constrained problem into sequence ofunconstrained problems
If x∗ρ is solution to
min_x φρ(x) = f(x) + ½ ρ g(x)^T g(x)
then under appropriate conditions
lim_{ρ→∞} x∗ρ = x∗
This enables use of unconstrained optimization methods,but problem becomes ill-conditioned for large ρ, so wesolve sequence of problems with gradually increasingvalues of ρ, with minimum for each problem used asstarting point for next problem < interactive example >
Barrier Methods

For inequality-constrained problems, another alternative is barrier function, such as
φµ(x) = f(x) − µ Σ_{i=1}^p 1/hi(x)
or
φµ(x) = f(x) − µ Σ_{i=1}^p log(−hi(x))
which increasingly penalize feasible points as they approach boundary of feasible region

Again, solutions of unconstrained problem approach x∗ as µ → 0, but problems are increasingly ill-conditioned, so solve sequence of problems with decreasing values of µ

Barrier functions are basis for interior point methods for linear programming
Example: Constrained Optimization
Consider quadratic programming problem
min_x f(x) = 0.5 x1² + 2.5 x2²
subject to
g(x) = x1 − x2 − 1 = 0

Lagrangian function is given by
L(x, λ) = f(x) + λ g(x) = 0.5 x1² + 2.5 x2² + λ(x1 − x2 − 1)

Since
∇f(x) = [x1, 5x2]^T   and   Jg(x) = [1  −1]
we have
∇x L(x, λ) = ∇f(x) + Jg^T(x) λ = [x1, 5x2]^T + λ [1, −1]^T
Example, continued
So system to be solved for critical point of Lagrangian is
x1 + λ = 0
5x2 − λ = 0
x1 − x2 = 1
which in this case is linear system
[ 1   0   1     [ x1        [ 0
  0   5  −1       x2    =     0
  1  −1   0 ]     λ  ]        1 ]
Solving this system, we obtain solution
x1 = 0.833, x2 = −0.167, λ = −0.833
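This 3 × 3 KKT system can be checked numerically in a few lines; the sketch below is an added illustration.

import numpy as np

K = np.array([[1.0,  0.0,  1.0],      # KKT matrix of the constrained QP above
              [0.0,  5.0, -1.0],
              [1.0, -1.0,  0.0]])
rhs = np.array([0.0, 0.0, 1.0])

x1, x2, lam = np.linalg.solve(K, rhs)
print(x1, x2, lam)                    # about 0.833, -0.167, -0.833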
Linear Programming
One of most important and common constrainedoptimization problems is linear programming
One standard form for such problems is
min f(x) = cT x subject to Ax = b and x ≥ 0
where m < n, A ∈ Rm×n, b ∈ Rm, and c,x ∈ Rn
Feasible region is convex polyhedron in Rn, and minimummust occur at one of its vertices
Simplex method moves systematically from vertex tovertex until minimum point is found
Linear Programming, continued
Simplex method is reliable and normally efficient, able tosolve problems with thousands of variables, but canrequire time exponential in size of problem in worst case
Interior point methods for linear programming developed inrecent years have polynomial worst case solution time
These methods move through interior of feasible region,not restricting themselves to investigating only its vertices
Although interior point methods have significant practicalimpact, simplex method is still predominant method instandard packages for linear programming, and itseffectiveness in practice is excellent
Example: Linear Programming
To illustrate linear programming, consider
min_x c^T x = −8x1 − 11x2
subject to linear inequality constraints
5x1 + 4x2 ≤ 40, −x1 + 3x2 ≤ 12, x1 ≥ 0, x2 ≥ 0
Minimum value must occur at vertex of feasible region, in this case at x1 = 3.79, x2 = 5.26, where objective function has value −88.2
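A problem of this size can also be checked directly with a library LP solver; for example, the sketch below uses scipy.optimize.linprog (availability of SciPy and its default solver are assumptions of this added illustration, not part of the slides).

from scipy.optimize import linprog

c = [-8.0, -11.0]                    # minimize -8*x1 - 11*x2
A_ub = [[5.0, 4.0],                  # 5*x1 + 4*x2 <= 40
        [-1.0, 3.0]]                 # -x1 + 3*x2 <= 12
b_ub = [40.0, 12.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                # about [3.79, 5.26] and -88.2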
AMS527: Numerical Analysis IISupplementary Material onNumerical Optimization
Xiangmin Jiao
SUNY Stony Brook
Outline
1 BFGS Method
2 Conjugate Gradient Methods
3 Constrained Optimization
BFGS Method
BFGS Method is one of most effective secant updating methods for minimization
Named after Broyden, Fletcher, Goldfarb, and Shanno
Unlike Broyden’s method, BFGS preserves the symmetry of approximate Hessian matrix
In addition, BFGS preserves the positive definiteness of the approximate Hessian matrix
Reference: J. Nocedal, S. J. Wright, Numerical Optimization, 2nd edition, Springer, 2006. Section 6.1.
Algorithm
x0 = initial guess
B0 = initial Hessian approximation
for k = 0, 1, 2, . . .
    Solve Bk sk = −∇f(xk) for sk
    xk+1 = xk + sk
    yk = ∇f(xk+1) − ∇f(xk)
    Bk+1 = Bk + (yk yk^T)/(yk^T sk) − (Bk sk sk^T Bk)/(sk^T Bk sk)
Motivation of BFGS

Let sk = xk+1 − xk and yk = ∇f(xk+1) − ∇f(xk)
Matrix Bk+1 should satisfy secant equation
Bk+1 sk = yk
In addition, Bk+1 should be positive definite, which requires sk^T yk > 0
There are infinitely many Bk+1 that satisfy secant equation
Davidon (1950s) proposed to choose Bk+1 to be closest to Bk, i.e.,
min_B ‖B − Bk‖  subject to  B = B^T, B sk = yk
BFGS instead chooses Bk+1 so that Bk+1^{-1} is closest to Bk^{-1}, i.e.,
min_B ‖B^{-1} − Bk^{-1}‖  subject to  B = B^T, B sk = yk
Properties of BFGS
BFGS normally has superlinear convergence rate, even thoughapproximate Hessian does not necessarily converge to true HessianApproximate Hessian preserves positive definiteness
Key idea of proof: Let Hk denote Bk^{-1}. For any vector z ≠ 0, let w = z − ρk yk (sk^T z), where ρk > 0. Then it can be shown that
z^T Hk+1 z = w^T Hk w + ρk (sk^T z)² ≥ 0
If sk^T z = 0, then w = z ≠ 0, so z^T Hk+1 z > 0
Line search can be used to enhance effectiveness of BFGS. If exact linesearch is performed at each iteration, BFGS terminates at exactsolution in at most n iterations for a quadratic objective function
Motivation of Conjugate Gradients
Conjugate gradient can be used to solve a linear system Ax = b, where A is symmetric positive definite (SPD)
If A is m × m SPD, then quadratic function
ϕ(x) = ½ x^T A x − x^T b
has unique minimum
Negative gradient of this function is residual vector
−∇ϕ(x) = b − Ax = r
so minimum is obtained precisely when Ax = b
Search Direction in Conjugate Gradients
Optimization methods have form
xn+1 = xn + αnpn
where pn is search direction and αn is step length chosen to minimize ϕ(xn + αn pn)
Line search parameter can be determined analytically as αn = rn^T pn / (pn^T A pn)
In CG, pn is chosen to be A-conjugate (or A-orthogonal) to previous search directions, i.e., pn^T A pj = 0 for j < n
Optimality of Step Length

Select step length αn over vector pn−1 to minimize ϕ(x) = ½ x^T A x − x^T b
Let xn = xn−1 + αn pn−1, then
ϕ(xn) = ½ (xn−1 + αn pn−1)^T A (xn−1 + αn pn−1) − (xn−1 + αn pn−1)^T b
      = ½ αn² pn−1^T A pn−1 + αn pn−1^T A xn−1 − αn pn−1^T b + constant
      = ½ αn² pn−1^T A pn−1 − αn pn−1^T rn−1 + constant
Therefore,
dϕ/dαn = 0  ⇒  αn pn−1^T A pn−1 − pn−1^T rn−1 = 0  ⇒  αn = (pn−1^T rn−1)/(pn−1^T A pn−1)
In addition, pn−1^T rn−1 = rn−1^T rn−1 because pn−1 = rn−1 + βn pn−2 and rn−1^T pn−2 = 0 due to the following theorem
Conjugate Gradient Method
Algorithm: Conjugate Gradient Method
x0 = 0, r0 = b, p0 = r0
for n = 1, 2, 3, . . .
    αn = (rn−1^T rn−1)/(pn−1^T A pn−1)     {step length}
    xn = xn−1 + αn pn−1                    {approximate solution}
    rn = rn−1 − αn A pn−1                  {residual}
    βn = (rn^T rn)/(rn−1^T rn−1)           {improvement this step}
    pn = rn + βn pn−1                      {search direction}

Only one matrix-vector product A pn−1 per iteration
Apart from matrix-vector product, #operations per iteration is O(m)
CG can be viewed as minimization of quadratic function ϕ(x) = ½ x^T A x − x^T b by modifying steepest descent
First proposed by Hestenes and Stiefel in 1950s
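A direct transcription of this algorithm into Python (an added sketch; the small SPD test system at the bottom is made up purely to exercise the code).

import numpy as np

def cg(A, b, tol=1e-12):
    # Conjugate gradient for symmetric positive definite A, following the algorithm above
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for n in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # step length
        x = x + alpha * p                # approximate solution
        r = r - alpha * Ap               # residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # new search direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # illustrative SPD matrix
b = np.array([1.0, 2.0])
print(cg(A, b), np.linalg.solve(A, b))   # the two results should agree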
An Alternative Interpretation of CG
Algorithm: CG
x0 = 0, r0 = b, p0 = r0
for n = 1, 2, 3, . . .
    αn = rn−1^T rn−1/(pn−1^T A pn−1)
    xn = xn−1 + αn pn−1
    rn = rn−1 − αn A pn−1
    βn = rn^T rn/(rn−1^T rn−1)
    pn = rn + βn pn−1

Algorithm: A non-standard CG
x0 = 0, r0 = b, p0 = r0
for n = 1, 2, 3, . . .
    αn = rn−1^T pn−1/(pn−1^T A pn−1)
    xn = xn−1 + αn pn−1
    rn = b − A xn
    βn = −rn^T A pn−1/(pn−1^T A pn−1)
    pn = rn + βn pn−1

The non-standard one is less efficient but easier to understand
It is easy to see rn = rn−1 − αn A pn−1 = b − A xn
Comparison of Linear and Nonlinear CG
Algorithm: Linear CG
x0 = 0, r0 = b, p0 = r0
for n = 1, 2, 3, . . .
    αn = rn−1^T rn−1/(pn−1^T A pn−1)
    xn = xn−1 + αn pn−1
    rn = rn−1 − αn A pn−1
    βn = rn^T rn/(rn−1^T rn−1)
    pn = rn + βn pn−1

Algorithm: Nonlinear CG
x0 = initial guess, g0 = ∇f(x0), s0 = −g0
for k = 0, 1, 2, . . .
    Choose αk to minimize f(xk + αk sk)
    xk+1 = xk + αk sk
    gk+1 = ∇f(xk+1)
    βk+1 = (gk+1^T gk+1)/(gk^T gk)
    sk+1 = −gk+1 + βk+1 sk

βk+1 = (gk+1^T gk+1)/(gk^T gk) was due to Fletcher and Reeves (1964)
An alternative formula βk+1 = ((gk+1 − gk)^T gk+1)/(gk^T gk) was due to Polak and Ribière (1969)
Properties of Conjugate Gradients

Krylov subspace for Ax = b is Kn = {b, Ab, . . . , A^{n−1} b}

Theorem: If rn−1 ≠ 0, spaces spanned by approximate solutions xn, search directions pn, and residuals rn are all equal to Krylov subspaces
Kn = ⟨x1, x2, . . . , xn⟩ = ⟨p0, p1, . . . , pn−1⟩ = ⟨r0, r1, . . . , rn−1⟩ = ⟨b, Ab, . . . , A^{n−1} b⟩
The residuals are orthogonal (i.e., rn^T rj = 0 for j < n) and search directions are A-conjugate (i.e., pn^T A pj = 0 for j < n)

Theorem: If rn−1 ≠ 0, then error en = x∗ − xn is minimized in A-norm over Kn

Because Kn grows monotonically, error decreases monotonically
Rate of Convergence
Some important convergence results
    If A has n distinct eigenvalues, CG converges in at most n steps
    If A has 2-norm condition number κ, the errors satisfy
    ‖en‖_A / ‖e0‖_A ≤ 2 ((√κ − 1)/(√κ + 1))^n
    which is ≈ 2 (1 − 2/√κ)^n as κ → ∞, so convergence is expected in O(√κ) iterations
In general, CG performs well with clustered eigenvalues
Equality-Constrained Minimization

Equality-constrained problem has form
min_{x ∈ Rn} f(x)  subject to  g(x) = 0
where objective function f : Rn → R and constraints g : Rn → Rm, with m ≤ n
Necessary condition for feasible point x∗ to be solution is that negative gradient of f lie in space spanned by constraint normals, i.e.,
−∇f(x∗) = Jg^T(x∗) λ
where Jg is Jacobian matrix of g, and λ is vector of Lagrange multipliers
Therefore, constrained local minimum must be critical point of Lagrangian function
L(x, λ) = f(x) + λ^T g(x)
First-Order and Second-Order Optimality Conditions

Equality-constrained minimization can be reduced to solving
∇L(x, λ) = [ ∇f(x) + Jg^T(x) λ
             g(x) ] = 0
which is known as Karush-Kuhn-Tucker (or KKT) condition for constrained local minimum
Hessian of Lagrangian function is
HL(x, λ) = [ B(x, λ)   Jg^T(x)
             Jg(x)     0 ]
where B(x, λ) = Hf(x) + Σ_{i=1}^m λi Hgi(x). HL is sometimes called KKT (Karush-Kuhn-Tucker) matrix. HL is symmetric, but not in general positive definite
Critical point (x∗, λ∗) of L is constrained minimum if B(x∗, λ∗) is positive definite on null space of Jg(x∗)
Let Z form basis of null(Jg(x∗)); then projected Hessian Z^T B Z should be positive definite
Sequential Quadratic Programming

∇L(x, λ) = 0 can be solved using Newton’s method. kth Newton iteration is
[ B(xk, λk)   Jg^T(xk)     [ sk           [ ∇f(xk) + Jg^T(xk) λk
  Jg(xk)      0        ]     δk ]  =  −     g(xk) ]
and then xk+1 = xk + sk and λk+1 = λk + δk

Above system of equations is first-order optimality condition for constrained optimization problem
min_s  ½ s^T B(xk, λk) s + s^T (∇f(xk) + Jg^T(xk) λk)
subject to  Jg(xk) s + g(xk) = 0
This problem is quadratic programming problem, so approach using Newton’s method is known as sequential quadratic programming
Solving KKT System
KKT system
[ B   J^T     [ s          [ w
  J   0   ]     δ ]  =  −    g ]
can be solved in several ways

Direct solution
    Solve system using method for symmetric indefinite factorization, such as LDL^T with pivoting, or
    Use iterative method such as GMRES or MINRES

Range-space method
    Use block elimination and obtain symmetric system
    (J B^{-1} J^T) δ = g − J B^{-1} w
    and then
    B s = −w − J^T δ
    First equation finds δ in range space of J
    It is attractive when number of constraints m is relatively small, because J B^{-1} J^T is m × m
    However, it requires B to be nonsingular and J to have full rank. Also, condition number of J B^{-1} J^T may be large
Solving KKT System, continued

Null space method
    Let Z be basis of null space of J, which can be obtained from QR factorization of J^T. Then JZ = 0
    Let JY = R^T, and write s = Yu + Zv. Second block row yields
    Js = J(Yu + Zv) = R^T u = −g
    and premultiplying first block row by Z^T yields
    (Z^T B Z) v = −Z^T (w + B Y u)
    Finally,
    Y^T J^T δ = R δ = −Y^T (w + B s)
    This method is advantageous when n − m is small
    It is more stable than range-space method. Also, B does not need to be nonsingular
AMS527: Numerical Analysis IILinear Programming
Xiangmin Jiao
SUNY Stony Brook
Linear Programming
Linear programming has linear objective function and linear equality and inequality constraints
Example: Maximize profit of combination of wheat and barley, but with limited budget of land, fertilizer, and insecticide. Let x1 and x2 be areas planted for wheat and barley; we have linear programming problem
maximize  c1 x1 + c2 x2          {maximize revenue}
          0 ≤ x1 + x2 ≤ L        {limit on area}
          F1 x1 + F2 x2 ≤ F      {limit on fertilizer}
          P1 x1 + P2 x2 ≤ P      {limit on insecticide}
          x1 ≥ 0, x2 ≥ 0         {nonnegative land}
Linear programming is typically solved by simplex methods or interiorpoint methods
Standard Form of Linear Programming
Linear programming has many forms. A standard form (called slack form) is
min c^T x  subject to  Ax = b and x ≥ 0
Simplex method and interior-point methods require slack form
Previous example can be converted into standard form
minimize  (−c1) x1 + (−c2) x2          {maximize revenue}
          x1 + x2 + x3 = L             {limit on area}
          F1 x1 + F2 x2 + x4 = F       {limit on fertilizer}
          P1 x1 + P2 x2 + x5 = P       {limit on insecticide}
          x1, x2, x3, x4, x5 ≥ 0       {nonnegativity}
Here, x3, x4, and x5 are called slack variables
Duality
m equations Ax = b have m corresponding Lagrange multipliers in y
Primal problem
    Minimize c^T x subject to Ax = b and x ≥ 0
Dual problem
    Maximize b^T y subject to A^T y ≤ c
Weak duality: b^T y ≤ c^T x for any feasible x and y
    because b^T y = (Ax)^T y = x^T (A^T y) ≤ x^T c = c^T x
Strong duality: If both feasible sets of primal and dual problems are nonempty, then c^T x∗ = b^T y∗ at optimal x∗ and y∗
Simplex Methods
Developed by George Dantzig in 1947
Key observation: Feasible region is convex polytope in Rn, and minimum must occur at one of its vertices
Basic idea: Construct a feasible solution at a vertex of the polytope, walk along a path on the edges of the polytope to vertices with non-decreasing values of the objective function, until an optimum is reached
Simplex method in the worst case can be slow, because number of corners is exponential in m and n
However, its average-case complexity is polynomial time, and in practice, best corner is often found in 2m steps
Interior Point Methods
First proposed by Narendra Karmarkar in 1984
In contrast to simplex methods, interior point methods move through the interior of the feasible region
Barrier problem
minimize  c^T x − θ(log x1 + · · · + log xn)  with  Ax = b
When any xi touches zero, extra cost −θ log xi blows up
Barrier problem gives approximate problem for each θ. Its Lagrangian is
L(x, y, θ) = c^T x − θ (Σ log xi) − y^T (Ax − b)
The derivatives ∂L/∂xj = cj − θ/xj − (A^T y)j = 0, or xj sj = θ, where
s = c − A^T y
Newton Step
n optimality equations xj sj = θ are nonlinear, and are solved iteratively using Newton’s method
To determine increments ∆x, ∆y, and ∆s, we need to solve (xi + ∆xi)(si + ∆si) = θ. It is typical to ignore second-order term ∆xi ∆si. Then linear equations become
A ∆x = 0
A^T ∆y + ∆s = 0
sj ∆xj + xj ∆sj = θ − xj sj
The iteration has quadratic convergence for each θ, and θ approaches zero
Reference: Gilbert Strang, Computational Science and Engineering, Wellesley-Cambridge Press, 2007. Section 8.6, Linear Programming and Duality.
Example

Minimize c^T x = 5x1 + 3x2 + 8x3 with xi ≥ 0 and Ax = x1 + x2 + 2x3 = 4

Barrier Lagrangian is
L = (5x1 + 3x2 + 8x3) − θ(log x1 + log x2 + log x3) − y(x1 + x2 + 2x3 − 4)
Optimality equations give us:
s = c − A^T y:   s1 = 5 − y,  s2 = 3 − y,  s3 = 8 − 2y
∂L/∂xi = 0:      x1 s1 = x2 s2 = x3 s3 = θ
∂L/∂y = 0:       x1 + x2 + 2x3 = 4

Start from interior point x1 = x2 = x3 = 1, y = 2, so s = (3, 1, 4)
From A ∆x = 0 and sj ∆xj + xj ∆sj = θ − xj sj, we obtain equations
3∆x1 − 1∆y = θ − 3
1∆x2 − 1∆y = θ − 1
4∆x3 − 2∆y = θ − 4
∆x1 + ∆x2 + 2∆x3 = 0
Given θ = 4/3, we then obtain xnew = (2/3, 2, 2/3) and ynew = 8/3, whereas x∗ = (0, 4, 0) and y∗ = 3
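The four linear equations for (∆x1, ∆x2, ∆x3, ∆y) are easily solved numerically; the added sketch below reproduces the step quoted above for θ = 4/3.

import numpy as np

theta = 4.0 / 3.0
# unknowns ordered as (dx1, dx2, dx3, dy)
M = np.array([[3.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 4.0, -2.0],
              [1.0, 1.0, 2.0,  0.0]])
rhs = np.array([theta - 3.0, theta - 1.0, theta - 4.0, 0.0])

dx1, dx2, dx3, dy = np.linalg.solve(M, rhs)
print(1.0 + dx1, 1.0 + dx2, 1.0 + dx3, 2.0 + dy)   # about 0.667, 2.0, 0.667 and 2.667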
Scientific Computing: An Introductory SurveyChapter 7 – Interpolation
Prof. Michael T. Heath
Department of Computer ScienceUniversity of Illinois at Urbana-Champaign
Copyright c© 2002. Reproduction permittedfor noncommercial, educational use only.
Outline
1 Interpolation
2 Polynomial Interpolation
3 Piecewise Polynomial Interpolation
Interpolation
Basic interpolation problem: for given data
(t1, y1), (t2, y2), . . . (tm, ym) with t1 < t2 < · · · < tm
determine function f : R → R such that
f(ti) = yi, i = 1, . . . ,m
f is interpolating function, or interpolant, for given data
Additional data might be prescribed, such as slope ofinterpolant at given points
Additional constraints might be imposed, such assmoothness, monotonicity, or convexity of interpolant
f could be function of more than one variable, but we willconsider only one-dimensional case
Purposes for Interpolation
Plotting smooth curve through discrete data points
Reading between lines of table
Differentiating or integrating tabular data
Quick and easy evaluation of mathematical function
Replacing complicated function by simple one
Interpolation vs Approximation
By definition, interpolating function fits given data pointsexactly
Interpolation is inappropriate if data points subject tosignificant errors
It is usually preferable to smooth noisy data, for exampleby least squares approximation
Approximation is also more appropriate for special functionlibraries
Issues in Interpolation
Arbitrarily many functions interpolate given set of data points
What form should interpolating function have?
How should interpolant behave between data points?
Should interpolant inherit properties of data, such asmonotonicity, convexity, or periodicity?
Are parameters that define interpolating functionmeaningful?
If function and data are plotted, should results be visuallypleasing?
Choosing Interpolant
Choice of function for interpolation based on
How easy interpolating function is to work with
determining its parametersevaluating interpolantdifferentiating or integrating interpolant
How well properties of interpolant match properties of datato be fit (smoothness, monotonicity, convexity, periodicity,etc.)
Functions for Interpolation
Families of functions commonly used for interpolationinclude
PolynomialsPiecewise polynomialsTrigonometric functionsExponential functionsRational functions
For now we will focus on interpolation by polynomials andpiecewise polynomials
We will consider trigonometric interpolation (DFT) later
Basis Functions
Family of functions for interpolating given data points is spanned by set of basis functions \phi_1(t), \ldots, \phi_n(t)
Interpolating function f is chosen as linear combination of basis functions,
f(t) = \sum_{j=1}^{n} x_j \phi_j(t)
Requiring f to interpolate data (t_i, y_i) means
f(t_i) = \sum_{j=1}^{n} x_j \phi_j(t_i) = y_i, \quad i = 1, \ldots, m
which is system of linear equations Ax = y for n-vector x of parameters x_j, where entries of m × n matrix A are given by a_{ij} = \phi_j(t_i)
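As an illustration (not part of the original slides), here is a minimal Python/NumPy sketch of forming a_ij = φ_j(t_i) for an arbitrary list of basis functions and solving the square m = n case; the function and variable names are made up for this example.

```python
import numpy as np

def basis_matrix(basis_funcs, t):
    """Form A with entries a_ij = phi_j(t_i) for the conditions f(t_i) = y_i."""
    return np.array([[phi(ti) for phi in basis_funcs] for ti in t])

# Square (m = n) example with the monomial basis 1, t, t^2
phis = [lambda t: 1.0, lambda t: t, lambda t: t**2]
t = [-2.0, 0.0, 1.0]
y = [-27.0, -1.0, 0.0]
x = np.linalg.solve(basis_matrix(phis, t), np.array(y))
print(x)   # coefficients of interpolant in this basis: [-1, 5, -4]
```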
Existence, Uniqueness, and Conditioning
Existence and uniqueness of interpolant depend on number of data points m and number of basis functions n
If m > n, interpolant usually doesn’t exist
If m < n, interpolant is not unique
If m = n, then basis matrix A is nonsingular provided data points t_i are distinct, so data can be fit exactly
Sensitivity of parameters x to perturbations in data depends on cond(A), which depends in turn on choice of basis functions
Polynomial Interpolation
Simplest and most common type of interpolation uses polynomials
Unique polynomial of degree at most n − 1 passes through n data points (t_i, y_i), i = 1, \ldots, n, where t_i are distinct
There are many ways to represent or compute interpolating polynomial, but in theory all must give same result
Monomial Basis
Monomial basis functions
\phi_j(t) = t^{j-1}, \quad j = 1, \ldots, n
give interpolating polynomial of form
p_{n-1}(t) = x_1 + x_2 t + \cdots + x_n t^{n-1}
with coefficients x given by n × n linear system
Ax = \begin{bmatrix} 1 & t_1 & \cdots & t_1^{n-1} \\ 1 & t_2 & \cdots & t_2^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & t_n & \cdots & t_n^{n-1} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = y
Matrix of this form is called Vandermonde matrix
Example: Monomial Basis
Determine polynomial of degree two interpolating three data points (−2, −27), (0, −1), (1, 0)
Using monomial basis, linear system is
Ax = \begin{bmatrix} 1 & t_1 & t_1^2 \\ 1 & t_2 & t_2^2 \\ 1 & t_3 & t_3^2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = y
For these particular data, system is
\begin{bmatrix} 1 & -2 & 4 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -27 \\ -1 \\ 0 \end{bmatrix}
whose solution is x = \begin{bmatrix} -1 & 5 & -4 \end{bmatrix}^T, so interpolating polynomial is
p_2(t) = -1 + 5t - 4t^2
Monomial Basis, continued
Solving system Ax = y using standard linear equation solver to determine coefficients x of interpolating polynomial requires O(n^3) work
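For the three-point example above, a short NumPy sketch (illustrative, not from the slides) that forms the Vandermonde matrix with numpy.vander and performs the dense O(n^3) solve:

```python
import numpy as np

# Data points from the example above: (-2, -27), (0, -1), (1, 0)
t = np.array([-2.0, 0.0, 1.0])
y = np.array([-27.0, -1.0, 0.0])

# Vandermonde matrix with a_ij = t_i^(j-1); dense solve costs O(n^3)
A = np.vander(t, increasing=True)
x = np.linalg.solve(A, y)
print(x)   # approximately [-1, 5, -4], i.e. p2(t) = -1 + 5t - 4t^2
```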
Monomial Basis, continued
For monomial basis, matrix A is increasingly ill-conditioned as degree increases
Ill-conditioning does not prevent fitting data points well, since residual for linear system solution will be small
But it does mean that values of coefficients are poorly determined
Both conditioning of linear system and amount of computational work required to solve it can be improved by using different basis
Change of basis still gives same interpolating polynomial for given data, but representation of polynomial will be different
Monomial Basis, continued
Conditioning with monomial basis can be improved by shifting and scaling independent variable t
\phi_j(t) = \left( \frac{t - c}{d} \right)^{j-1}
where c = (t_1 + t_n)/2 is midpoint and d = (t_n − t_1)/2 is half of range of data
New independent variable lies in interval [−1, 1], which also helps avoid overflow or harmful underflow
Even with optimal shifting and scaling, monomial basis usually is still poorly conditioned, and we must seek better alternatives
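A small illustrative sketch (assuming NumPy; the sample points below are arbitrary choices, not from the slides) comparing cond(A) for the raw and the shifted/scaled monomial basis:

```python
import numpy as np

n = 10
t = np.linspace(0.0, 10.0, n)               # equally spaced points on [0, 10]
c, d = (t[0] + t[-1]) / 2, (t[-1] - t[0]) / 2

A_raw = np.vander(t, increasing=True)                # basis t^(j-1)
A_shifted = np.vander((t - c) / d, increasing=True)  # basis ((t-c)/d)^(j-1)

print(np.linalg.cond(A_raw), np.linalg.cond(A_shifted))
# conditioning improves after shifting and scaling, but still grows with n
```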
Evaluating Polynomials
When represented in monomial basis, polynomial
p_{n-1}(t) = x_1 + x_2 t + \cdots + x_n t^{n-1}
can be evaluated efficiently using Horner’s nested evaluation scheme
p_{n-1}(t) = x_1 + t(x_2 + t(x_3 + t(\cdots(x_{n-1} + t x_n)\cdots)))
which requires only n − 1 additions and n − 1 multiplications
For example,
1 - 4t + 5t^2 - 2t^3 + 3t^4 = 1 + t(-4 + t(5 + t(-2 + 3t)))
Other manipulations of interpolating polynomial, such as differentiation or integration, are also relatively easy with monomial basis representation
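A plain-Python sketch of Horner’s scheme (illustrative, not from the slides); coefficients x[0], ..., x[n-1] correspond to x_1, ..., x_n above:

```python
def horner(x, t):
    """Evaluate p(t) = x[0] + x[1]*t + ... + x[n-1]*t**(n-1) by nesting."""
    p = x[-1]
    for coeff in reversed(x[:-1]):
        p = coeff + t * p      # one addition and one multiplication per step
    return p

# Example from the slide: 1 - 4t + 5t^2 - 2t^3 + 3t^4 evaluated at t = 2
print(horner([1, -4, 5, -2, 3], 2))   # 45
```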
Lagrange Interpolation
For given set of data points (t_i, y_i), i = 1, \ldots, n, Lagrange basis functions are defined by
\ell_j(t) = \prod_{k=1, k \neq j}^{n} (t - t_k) \Big/ \prod_{k=1, k \neq j}^{n} (t_j - t_k), \quad j = 1, \ldots, n
For Lagrange basis,
\ell_j(t_i) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}, \quad i, j = 1, \ldots, n
so matrix of linear system Ax = y is identity matrix
Thus, Lagrange polynomial interpolating data points (t_i, y_i) is given by
p_{n-1}(t) = y_1 \ell_1(t) + y_2 \ell_2(t) + \cdots + y_n \ell_n(t)
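A direct transcription of the Lagrange form into Python (an illustrative sketch; each evaluation costs O(n^2) operations):

```python
def lagrange_eval(t_nodes, y, t):
    """Evaluate p_{n-1}(t) = sum_j y_j * l_j(t) directly from the Lagrange form."""
    p = 0.0
    n = len(t_nodes)
    for j in range(n):
        # l_j(t) = prod_{k != j} (t - t_k) / (t_j - t_k)
        l_j = 1.0
        for k in range(n):
            if k != j:
                l_j *= (t - t_nodes[k]) / (t_nodes[j] - t_nodes[k])
        p += y[j] * l_j
    return p

# Three-point example used in these slides; reproduces p2(t) = -1 + 5t - 4t^2
print(lagrange_eval([-2.0, 0.0, 1.0], [-27.0, -1.0, 0.0], 0.5))   # 0.5
```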
Lagrange Basis Functions
Lagrange interpolant is easy to determine but more expensive to evaluate for given argument, compared with monomial basis representation
Lagrangian form is also more difficult to differentiate, integrate, etc.
Example: Lagrange Interpolation
Use Lagrange interpolation to determine interpolating polynomial for three data points (−2, −27), (0, −1), (1, 0)
Lagrange polynomial of degree two interpolating three points (t_1, y_1), (t_2, y_2), (t_3, y_3) is given by
p_2(t) = y_1 \frac{(t - t_2)(t - t_3)}{(t_1 - t_2)(t_1 - t_3)} + y_2 \frac{(t - t_1)(t - t_3)}{(t_2 - t_1)(t_2 - t_3)} + y_3 \frac{(t - t_1)(t - t_2)}{(t_3 - t_1)(t_3 - t_2)}
For these particular data, this becomes
p_2(t) = -27 \, \frac{t(t - 1)}{(-2)(-2 - 1)} + (-1) \, \frac{(t + 2)(t - 1)}{(2)(-1)}
Newton Interpolation
For given set of data points (t_i, y_i), i = 1, \ldots, n, Newton basis functions are defined by
\pi_j(t) = \prod_{k=1}^{j-1} (t - t_k), \quad j = 1, \ldots, n
where value of product is taken to be 1 when limits make it vacuous
Newton interpolating polynomial has form
p_{n-1}(t) = x_1 + x_2(t - t_1) + x_3(t - t_1)(t - t_2) + \cdots + x_n(t - t_1)(t - t_2) \cdots (t - t_{n-1})
For i < j, \pi_j(t_i) = 0, so basis matrix A is lower triangular, where a_{ij} = \pi_j(t_i)
Newton Basis Functions
Newton Interpolation, continued
Solution x to system Ax = y can be computed by forward-substitution in O(n^2) arithmetic operations
Moreover, resulting interpolant can be evaluated efficiently for any argument by nested evaluation scheme similar to Horner’s method
Newton interpolation has better balance between cost of computing interpolant and cost of evaluating it
Example: Newton Interpolation
Use Newton interpolation to determine interpolating polynomial for three data points (−2, −27), (0, −1), (1, 0)
Using Newton basis, linear system is
\begin{bmatrix} 1 & 0 & 0 \\ 1 & t_2 - t_1 & 0 \\ 1 & t_3 - t_1 & (t_3 - t_1)(t_3 - t_2) \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}
For these particular data, system is
\begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 3 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -27 \\ -1 \\ 0 \end{bmatrix}
whose solution by forward substitution is x = \begin{bmatrix} -27 & 13 & -4 \end{bmatrix}^T, so interpolating polynomial is
p_2(t) = -27 + 13(t + 2) - 4(t + 2)t
Newton Interpolation, continued
If p_j(t) is polynomial of degree j − 1 interpolating j given points, then for any constant x_{j+1},
p_{j+1}(t) = p_j(t) + x_{j+1} \pi_{j+1}(t)
is polynomial of degree j that also interpolates same j points
Free parameter x_{j+1} can then be chosen so that p_{j+1}(t) interpolates y_{j+1},
x_{j+1} = \frac{y_{j+1} - p_j(t_{j+1})}{\pi_{j+1}(t_{j+1})}
Newton interpolation begins with constant polynomial p_1(t) = y_1 interpolating first data point and then successively incorporates each remaining data point into interpolant
Divided Differences
Given data points (t_i, y_i), i = 1, \ldots, n, divided differences, denoted by f[\,], are defined recursively by
f[t_1, t_2, \ldots, t_k] = \frac{f[t_2, t_3, \ldots, t_k] - f[t_1, t_2, \ldots, t_{k-1}]}{t_k - t_1}
where recursion begins with f[t_k] = y_k, k = 1, \ldots, n
Coefficient of jth basis function in Newton interpolant is given by
x_j = f[t_1, t_2, \ldots, t_j]
Recursion requires O(n^2) arithmetic operations to compute coefficients of Newton interpolant, but is less prone to overflow or underflow than direct formation of triangular Newton basis matrix
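An illustrative NumPy sketch (not from the slides) that computes the Newton coefficients by divided differences and evaluates the Newton form by nested multiplication:

```python
import numpy as np

def newton_coeffs(t, y):
    """Divided-difference coefficients x_j = f[t_1, ..., t_j] in O(n^2) operations."""
    t = np.asarray(t, dtype=float)
    x = np.array(y, dtype=float)
    for k in range(1, len(t)):
        # After stage k, x[j] holds the divided difference over k+1 consecutive points
        x[k:] = (x[k:] - x[k-1:-1]) / (t[k:] - t[:-k])
    return x

def newton_eval(t, x, s):
    """Nested (Horner-like) evaluation of the Newton form at s."""
    p = x[-1]
    for j in range(len(x) - 2, -1, -1):
        p = x[j] + (s - t[j]) * p
    return p

t, y = [-2.0, 0.0, 1.0], [-27.0, -1.0, 0.0]
x = newton_coeffs(t, y)
print(x)                        # [-27, 13, -4], as in the example above
print(newton_eval(t, x, 1.0))   # 0.0, reproducing the data point (1, 0)
```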
Orthogonal Polynomials
Inner product can be defined on space of polynomials on interval [a, b] by taking
\langle p, q \rangle = \int_a^b p(t)\, q(t)\, w(t)\, dt
where w(t) is nonnegative weight function
Two polynomials p and q are orthogonal if \langle p, q \rangle = 0
Set of polynomials \{p_i\} is orthonormal if
\langle p_i, p_j \rangle = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}
Given set of polynomials, Gram-Schmidt orthogonalization can be used to generate orthonormal set spanning same space
Orthogonal Polynomials, continued
For example, with inner product given by weight function w(t) ≡ 1 on interval [−1, 1], applying Gram-Schmidt process to set of monomials 1, t, t^2, t^3, \ldots yields Legendre polynomials
1, \quad t, \quad (3t^2 - 1)/2, \quad (5t^3 - 3t)/2, \quad (35t^4 - 30t^2 + 3)/8, \quad (63t^5 - 70t^3 + 15t)/8, \ldots
first n of which form an orthogonal basis for space of polynomials of degree at most n − 1
Other choices of weight functions and intervals yield other orthogonal polynomials, such as Chebyshev, Jacobi, Laguerre, and Hermite
Orthogonal Polynomials, continued
Orthogonal polynomials have many useful properties
They satisfy three-term recurrence relation of form
p_{k+1}(t) = (\alpha_k t + \beta_k)\, p_k(t) - \gamma_k\, p_{k-1}(t)
which makes them very efficient to generate and evaluate
Orthogonality makes them very natural for least squares approximation, and they are also useful for generating Gaussian quadrature rules, which we will see later
Chebyshev Polynomials
Chebyshev polynomials of first kind, defined on interval [−1, 1] by
T_k(t) = \cos(k \arccos t)
are orthogonal with respect to weight function (1 − t^2)^{-1/2}
First few Chebyshev polynomials are given by
1, \quad t, \quad 2t^2 - 1, \quad 4t^3 - 3t, \quad 8t^4 - 8t^2 + 1, \quad 16t^5 - 20t^3 + 5t, \ldots
Equi-oscillation property: successive extrema of T_k are equal in magnitude and alternate in sign, which distributes error uniformly when approximating arbitrary continuous function
Chebyshev Basis Functions
Chebyshev Points
Chebyshev points are zeros of T_k, given by
t_i = \cos\!\left( \frac{(2i - 1)\pi}{2k} \right), \quad i = 1, \ldots, k
or extrema of T_k, given by
t_i = \cos\!\left( \frac{i\pi}{k} \right), \quad i = 0, 1, \ldots, k
Chebyshev points are abscissas of points equally spaced around unit circle in R^2
Chebyshev points have attractive properties for interpolation and other problems
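A short NumPy sketch (illustrative) computing both families of Chebyshev points on [−1, 1]:

```python
import numpy as np

k = 5

# Zeros of T_k:   t_i = cos((2i - 1) * pi / (2k)),  i = 1, ..., k
zeros = np.cos((2 * np.arange(1, k + 1) - 1) * np.pi / (2 * k))

# Extrema of T_k: t_i = cos(i * pi / k),  i = 0, 1, ..., k
extrema = np.cos(np.arange(0, k + 1) * np.pi / k)

print(np.sort(zeros))     # k points clustered toward the ends of [-1, 1]
print(np.sort(extrema))   # k + 1 points, including the endpoints -1 and 1
```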
Interpolating Continuous Functions
If data points are discrete sample of continuous function, how well does interpolant approximate that function between sample points?
If f is smooth function, and p_{n-1} is polynomial of degree at most n − 1 interpolating f at n points t_1, \ldots, t_n, then
f(t) - p_{n-1}(t) = \frac{f^{(n)}(\theta)}{n!}\, (t - t_1)(t - t_2) \cdots (t - t_n)
where \theta is some (unknown) point in interval [t_1, t_n]
Since point \theta is unknown, this result is not particularly useful unless bound on appropriate derivative of f is known
Interpolating Continuous Functions, continued
If |f^{(n)}(t)| \leq M for all t \in [t_1, t_n], and h = \max\{t_{i+1} - t_i : i = 1, \ldots, n - 1\}, then
\max_{t \in [t_1, t_n]} |f(t) - p_{n-1}(t)| \leq \frac{M h^n}{4n}
Error diminishes with increasing n and decreasing h, but only if |f^{(n)}(t)| does not grow too rapidly with n
High-Degree Polynomial Interpolation
Interpolating polynomials of high degree are expensive to determine and evaluate
In some bases, coefficients of polynomial may be poorly determined due to ill-conditioning of linear system to be solved
High-degree polynomial necessarily has lots of “wiggles,” which may bear no relation to data to be fit
Polynomial passes through required data points, but it may oscillate wildly between data points
Convergence
Polynomial interpolating continuous function may not converge to function as number of data points and polynomial degree increases
Equally spaced interpolation points often yield unsatisfactory results near ends of interval
If points are bunched near ends of interval, more satisfactory results are likely to be obtained with polynomial interpolation
Use of Chebyshev points distributes error evenly and yields convergence throughout interval for any sufficiently smooth function
Example: Runge’s Function
Polynomial interpolants of Runge’s function at equally spaced points do not converge
Polynomial interpolants of Runge’s function at Chebyshev points do converge
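An illustrative sketch of this comparison, assuming the standard Runge example f(t) = 1/(1 + 25t^2) on [−1, 1] (the slides do not restate the formula here); the interpolant is evaluated via the Lagrange form:

```python
import numpy as np

def runge(t):
    return 1.0 / (1.0 + 25.0 * t**2)

def interp_eval(nodes, vals, ts):
    """Evaluate the interpolating polynomial at points ts via the Lagrange form."""
    ts = np.asarray(ts, dtype=float)
    p = np.zeros_like(ts)
    for j, tj in enumerate(nodes):
        l = np.ones_like(ts)
        for k, tk in enumerate(nodes):
            if k != j:
                l *= (ts - tk) / (tj - tk)
        p += vals[j] * l
    return p

n = 11
ts = np.linspace(-1, 1, 1001)
equi = np.linspace(-1, 1, n)                                    # equally spaced
cheb = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Chebyshev zeros

for nodes in (equi, cheb):
    err = np.max(np.abs(runge(ts) - interp_eval(nodes, runge(nodes), ts)))
    print(err)   # maximum error is far larger for the equally spaced nodes
```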
Taylor Polynomial
Another useful form of polynomial interpolation for smooth function f is polynomial given by truncated Taylor series
p_n(t) = f(a) + f'(a)(t - a) + \frac{f''(a)}{2}(t - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(t - a)^n
Polynomial interpolates f in that values of p_n and its first n derivatives match those of f and its first n derivatives evaluated at t = a, so p_n(t) is good approximation to f(t) for t near a
We have already seen examples in Newton’s method for nonlinear equations and optimization
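A tiny illustrative sketch (not from the slides) of a truncated Taylor series for exp about a = 0, where every derivative equals exp(a):

```python
import math

def taylor_exp(t, a=0.0, n=5):
    """Degree-n truncated Taylor series of exp about a (every derivative is exp(a))."""
    return sum(math.exp(a) / math.factorial(k) * (t - a)**k for k in range(n + 1))

print(taylor_exp(0.1), math.exp(0.1))   # nearly equal for t near a = 0
```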
Piecewise Polynomial Interpolation
Fitting single polynomial to large number of data points is likely to yield unsatisfactory oscillating behavior in interpolant
Piecewise polynomials provide alternative to practical and theoretical difficulties with high-degree polynomial interpolation
Main advantage of piecewise polynomial interpolation is that large number of data points can be fit with low-degree polynomials
In piecewise interpolation of given data points (t_i, y_i), different function is used in each subinterval [t_i, t_{i+1}]
Abscissas t_i are called knots or breakpoints, at which interpolant changes from one function to another
Piecewise Interpolation, continued
Simplest example is piecewise linear interpolation, in which successive pairs of data points are connected by straight lines
Although piecewise interpolation eliminates excessive oscillation and nonconvergence, it appears to sacrifice smoothness of interpolating function
We have many degrees of freedom in choosing piecewise polynomial interpolant, however, which can be exploited to obtain smooth interpolating function despite its piecewise nature
Hermite Interpolation
In Hermite interpolation, derivatives as well as values of interpolating function are taken into account
Including derivative values adds more equations to linear system that determines parameters of interpolating function
To have unique solution, number of equations must equal number of parameters to be determined
Piecewise cubic polynomials are typical choice for Hermite interpolation, providing flexibility, simplicity, and efficiency
Hermite Cubic Interpolation
Hermite cubic interpolant is piecewise cubic polynomial interpolant with continuous first derivative
Piecewise cubic polynomial with n knots has 4(n − 1) parameters to be determined
Requiring that it interpolate given data gives 2(n − 1) equations
Requiring that it have one continuous derivative gives n − 2 additional equations, or total of 3n − 4, which still leaves n free parameters
Thus, Hermite cubic interpolant is not unique, and remaining free parameters can be chosen so that result satisfies additional constraints
Cubic Spline Interpolation
Spline is piecewise polynomial of degree k that is k − 1 times continuously differentiable
For example, linear spline is of degree 1 and has 0 continuous derivatives, i.e., it is continuous, but not smooth, and could be described as “broken line”
Cubic spline is piecewise cubic polynomial that is twice continuously differentiable
As with Hermite cubic, interpolating given data and requiring one continuous derivative imposes 3n − 4 constraints on cubic spline
Requiring continuous second derivative imposes n − 2 additional constraints, leaving 2 remaining free parameters
Cubic Splines, continued
Final two parameters can be fixed in various ways
Specify first derivative at endpoints t1 and tn
Force second derivative to be zero at endpoints, which gives natural spline
Enforce “not-a-knot” condition, which forces two consecutive cubic pieces to be same
Force first derivatives, as well as second derivatives, to match at endpoints t_1 and t_n (if spline is to be periodic)
Example: Cubic Spline Interpolation
Determine natural cubic spline interpolating three data points (t_i, y_i), i = 1, 2, 3
Required interpolant is piecewise cubic function defined by separate cubic polynomials in each of two intervals [t_1, t_2] and [t_2, t_3]
Denote these two polynomials by
p_1(t) = \alpha_1 + \alpha_2 t + \alpha_3 t^2 + \alpha_4 t^3
p_2(t) = \beta_1 + \beta_2 t + \beta_3 t^2 + \beta_4 t^3
Eight parameters are to be determined, so we need eight equations
Example, continued
Requiring first cubic to interpolate data at end points of first interval [t_1, t_2] gives two equations
\alpha_1 + \alpha_2 t_1 + \alpha_3 t_1^2 + \alpha_4 t_1^3 = y_1
\alpha_1 + \alpha_2 t_2 + \alpha_3 t_2^2 + \alpha_4 t_2^3 = y_2
Requiring second cubic to interpolate data at end points of second interval [t_2, t_3] gives two equations
\beta_1 + \beta_2 t_2 + \beta_3 t_2^2 + \beta_4 t_2^3 = y_2
\beta_1 + \beta_2 t_3 + \beta_3 t_3^2 + \beta_4 t_3^3 = y_3
Requiring first derivative of interpolant to be continuous at t_2 gives equation
\alpha_2 + 2\alpha_3 t_2 + 3\alpha_4 t_2^2 = \beta_2 + 2\beta_3 t_2 + 3\beta_4 t_2^2
Example, continued
Requiring second derivative of interpolant function to be continuous at t_2 gives equation
2\alpha_3 + 6\alpha_4 t_2 = 2\beta_3 + 6\beta_4 t_2
Finally, by definition natural spline has second derivative equal to zero at endpoints, which gives two equations
2\alpha_3 + 6\alpha_4 t_1 = 0
2\beta_3 + 6\beta_4 t_3 = 0
When particular data values are substituted for t_i and y_i, system of eight linear equations can be solved for eight unknown parameters \alpha_i and \beta_i
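An illustrative NumPy sketch that assembles and solves this 8 × 8 system; the data values used here are made up for illustration and do not come from the slides:

```python
import numpy as np

# Illustrative data (not from the slides): (0, 1), (1, 3), (2, 2)
t = [0.0, 1.0, 2.0]
y = [1.0, 3.0, 2.0]

def cubic_row(s):                       # [1, s, s^2, s^3]
    return [1.0, s, s**2, s**3]

A = np.zeros((8, 8))
b = np.zeros(8)
# Interpolation conditions: p1 at t1, t2 and p2 at t2, t3
A[0, :4] = cubic_row(t[0]); b[0] = y[0]
A[1, :4] = cubic_row(t[1]); b[1] = y[1]
A[2, 4:] = cubic_row(t[1]); b[2] = y[1]
A[3, 4:] = cubic_row(t[2]); b[3] = y[2]
# Continuity of first and second derivatives at t2
A[4, :4] = [0, 1, 2*t[1], 3*t[1]**2]; A[4, 4:] = [0, -1, -2*t[1], -3*t[1]**2]
A[5, :4] = [0, 0, 2, 6*t[1]];         A[5, 4:] = [0, 0, -2, -6*t[1]]
# Natural end conditions: second derivative zero at t1 and t3
A[6, :4] = [0, 0, 2, 6*t[0]]
A[7, 4:] = [0, 0, 2, 6*t[2]]

coeffs = np.linalg.solve(A, b)
print(coeffs[:4])   # alpha_1, ..., alpha_4 for p1
print(coeffs[4:])   # beta_1, ..., beta_4 for p2
```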
Hermite Cubic vs Spline Interpolation
Choice between Hermite cubic and spline interpolation depends on data to be fit and on purpose for doing interpolation
If smoothness is of paramount importance, then spline interpolation may be most appropriate
But Hermite cubic interpolant may have more pleasing visual appearance and allows flexibility to preserve monotonicity if original data are monotonic
In any case, it is advisable to plot interpolant and data to help assess how well interpolating function captures behavior of original data
B-splines
B-splines form basis for family of spline functions of given degree
B-splines can be defined in various ways, including recursion (which we will use), convolution, and divided differences
Although in practice we use only finite set of knots t_1, \ldots, t_n, for notational convenience we will assume infinite set of knots
\cdots < t_{-2} < t_{-1} < t_0 < t_1 < t_2 < \cdots
Additional knots can be taken as arbitrarily defined points outside interval [t_1, t_n]
We will also use linear functions
v_i^k(t) = (t - t_i)/(t_{i+k} - t_i)
B-splines, continued
To start recursion, define B-splines of degree 0 by
B_i^0(t) = \begin{cases} 1 & \text{if } t_i \leq t < t_{i+1} \\ 0 & \text{otherwise} \end{cases}
and then for k > 0 define B-splines of degree k by
B_i^k(t) = v_i^k(t)\, B_i^{k-1}(t) + \left(1 - v_{i+1}^k(t)\right) B_{i+1}^{k-1}(t)
Since B_i^0 is piecewise constant and v_i^k is linear, B_i^1 is piecewise linear
Similarly, B_i^2 is in turn piecewise quadratic, and in general, B_i^k is piecewise polynomial of degree k
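A direct sketch of this recursion in Python (illustrative; the knots are taken as the integers, and the strictly increasing knot assumption above keeps all denominators nonzero):

```python
def v(knots, i, k, t):
    """Linear weight v_i^k(t) = (t - t_i) / (t_{i+k} - t_i)."""
    return (t - knots[i]) / (knots[i + k] - knots[i])

def bspline(knots, i, k, t):
    """B_i^k(t) by the recursion above; knots assumed strictly increasing."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    return (v(knots, i, k, t) * bspline(knots, i, k - 1, t)
            + (1.0 - v(knots, i + 1, k, t)) * bspline(knots, i + 1, k - 1, t))

knots = list(range(10))             # illustrative integer knots t_i = i
print(bspline(knots, 2, 3, 4.5))    # one value of a cubic B-spline
print(sum(bspline(knots, i, 3, 4.5) for i in range(1, 5)))   # sums to 1.0
```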
B-splines, continued
Important properties of B-spline functions B_i^k
1. For t < t_i or t > t_{i+k+1}, B_i^k(t) = 0
2. For t_i < t < t_{i+k+1}, B_i^k(t) > 0
3. For all t, \sum_{i=-\infty}^{\infty} B_i^k(t) = 1
4. For k \geq 1, B_i^k has k − 1 continuous derivatives
5. Set of functions \{B_{1-k}^k, \ldots, B_{n-1}^k\} is linearly independent on interval [t_1, t_n] and spans space of all splines of degree k having knots t_i
B-splines, continued
Properties 1 and 2 together say that B-spline functions have local support
Property 3 gives normalization
Property 4 says that they are indeed splines
Property 5 says that for given k, these functions form basis for set of all splines of degree k
B-splines, continued
If we use B-spline basis, linear system to be solved for spline coefficients will be nonsingular and banded
Use of B-spline basis yields efficient and stable methods for determining and evaluating spline interpolants, and many library routines for spline interpolation are based on this approach
B-splines are also useful in many other contexts, such as numerical solution of differential equations, as we will see later
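For instance, SciPy’s spline routines follow this approach; a brief sketch (assuming scipy.interpolate.make_interp_spline is available, with its default boundary condition) that builds a cubic interpolating spline in a B-spline basis:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 4.0, 0.0])

spline = make_interp_spline(t, y, k=3)   # cubic interpolating spline, B-spline basis
print(spline(2.5))                       # evaluate interpolant between knots
print(spline.t)                          # knot vector of the B-spline representation
print(spline.c)                          # B-spline coefficients
```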
AMS527: Numerical Analysis II
Review for Test 1
Xiangmin Jiao
SUNY Stony Brook
Approximations in Scientific Computations
Concepts
Absolute error, relative error
Computational error, propagated data error
Truncation error, rounding error
Forward error, backward error
Condition number, stability
Cancellation
Solutions of Nonlinear Equations
Concepts
Multiplicity
Sensitivity
Convergence rate
Basic methods
Interval bisection method
Fixed-point iteration
Newton’s method
Secant method, Broyden’s method
Other Newton-like methods
Numerical Optimization
Concepts
Unconstrained optimization, constrained optimization (linear vs. nonlinear programming)
Global vs. local minimum
First- and second-order optimality conditions
Coercive, convex, unimodality
Methods for unconstrained optimization
Golden section search
Newton’s method, quasi-Newton methods (basic ideas)
Steepest descent, conjugate gradient (basic ideas)
Methods for constrained optimization (especially equality-constrained optimization)
Lagrange multipliers for constrained optimization
Lagrange function and its solution
Linear programming
Polynomial interpolation
Concepts
Existence and uniqueness
Interpolation vs. approximation
Accuracy; Runge’s phenomenon
Methods
Monomial basis
Lagrange interpolant
Newton interpolation and divided differences
Orthogonal polynomials