EAD 115
Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science
Open Methods of Root Finding
• Bracketing methods begin with an interval that is known to contain at least one root.
• These methods are guaranteed to find a root eventually, though they may be slow.
• Open methods begin only with a point guess.
• They are not guaranteed to converge, but may be much faster.
Fig 6.1
Fixed Point Iteration
Solve f(x) = x - e^(-x) = 0
Same as solving x = e^(-x)
Iterate x_{i+1} = e^(-x_i)
Starting from x_0 = 0:

x_1 = e^(-0)      = 1
x_2 = e^(-1)      = 0.3679
x_3 = e^(-0.3679) = 0.6922
x_4 = e^(-0.6922) = 0.5005
x_5 = e^(-0.5005) = 0.6062
x_6 = e^(-0.6062) = 0.5454
...
x_r = 0.567143
Iteration  % Error
1          76.322
2          35.135
3          22.050
4          11.755
5          6.894
6          3.835
7          2.199
8          1.239
9          0.705
10         0.399
11         0.227
12         0.128
13         0.073
14         0.041
15         0.023
16         0.013
Convergence
• Each percent error is about 55% of the previous one
• ε_{i+1} ≈ 0.55 ε_i
• If ε_{i+1} ≈ c ε_i with c < 1, we say that the method is linearly convergent.
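This fixed-point iteration is easy to check numerically. A minimal Python sketch (my own, not from the slides; 0.56714329 is the known root, used here only to track the error):

```python
import math

# Fixed-point iteration x_{i+1} = e^(-x_i) for f(x) = x - e^(-x) = 0.
true_root = 0.56714329
x = 0.0
errors = []
for i in range(30):
    x = math.exp(-x)
    errors.append(abs(x - true_root))

# Linear convergence: each error is a roughly constant fraction (~0.57)
# of the previous one, consistent with the ~55% seen in the table above.
ratio = errors[-1] / errors[-2]
```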
Solve f(x) = x - e^(-x) = 0
Same as solving x = e^(-x), i.e., x = -ln(x)
Iterate x_{i+1} = -ln(x_i)

x_0 = 1
x_1 = -ln(1) = 0
x_2 = -ln(0) = ?

Oops! This does not lead to a solution.
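A short Python sketch (my own, not from the slides) confirms the breakdown: the first step gives x_1 = -ln(1) = 0, and the next step would need ln(0), which is undefined.

```python
import math

# Fixed-point iteration x_{i+1} = -ln(x_i), starting from x0 = 1.
x = 1.0
failed = False
for i in range(5):
    try:
        # x1 = -ln(1) = 0; the next step needs ln(0), which is undefined.
        x = -math.log(x)
    except ValueError:
        failed = True
        break
```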
Fig 6.2
Fig 6.3
When does this converge?
x_{i+1} = g(x_i)
x_r = g(x_r)
x_r - x_{i+1} = g(x_r) - g(x_i)
By the Derivative Mean Value Theorem, for some ξ between x_i and x_r:

(g(x_r) - g(x_i)) / (x_r - x_i) = g'(ξ)
g(x_r) - g(x_i) = g'(ξ)(x_r - x_i)
x_r - x_{i+1} = g'(ξ)(x_r - x_i)
E_{t,i+1} = g'(ξ) E_{t,i}
Thus the error at iteration i+1 is smaller than the error at iteration i so long as the derivative is less than 1 in absolute value in a neighborhood of the root.
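A quick numerical check of this condition for the two iteration functions above (calling them g1 and g2 is my notation):

```python
import math

# Check the convergence condition |g'(x)| < 1 near the root x_r ~= 0.56714329.
# g1(x) = e^(-x)  (the iteration that converged):  g1'(x) = -e^(-x)
# g2(x) = -ln(x)  (the iteration that failed):     g2'(x) = -1/x
x_r = 0.56714329
g1_prime = -math.exp(-x_r)   # |g1'(x_r)| ~= 0.567 < 1 -> converges
g2_prime = -1.0 / x_r        # |g2'(x_r)| ~= 1.763 > 1 -> diverges
```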
Derivative Mean Value Thm
Fig 6.4
Newton-Raphson
• Beginning with a point, its function value and its derivative, derive a new guess
• Project the tangent line until it intersects the x-axis
• Approximate the function by a first order Taylor series, and solve the approximate problem exactly
Fig 6.5
f(x_i) + f'(x_i)(x - x_i) = 0
f'(x_i)(x - x_i) = -f(x_i)
x - x_i = -f(x_i) / f'(x_i)
x_{i+1} = x_i - f(x_i) / f'(x_i)
f(x) = e^(-x) - x
f'(x) = -e^(-x) - 1
x_{i+1} = x_i - (e^(-x_i) - x_i) / (-e^(-x_i) - 1)
step  x         rel error
0     0         1.00
1     0.5       0.1184
2     0.566311  0.0015
3     0.567143  2×10^-7
4     0.567143  5×10^-15
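The table above can be reproduced with a minimal Python sketch of the Newton-Raphson iteration (the function names f and fp are mine):

```python
import math

# Newton-Raphson for f(x) = e^(-x) - x, starting from x0 = 0.
def f(x):
    return math.exp(-x) - x

def fp(x):
    return -math.exp(-x) - 1.0

x = 0.0
for step in range(4):
    x = x - f(x) / fp(x)   # x goes 0 -> 0.5 -> 0.566311 -> 0.567143 -> ...
```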
Error Analysis
• Newton-Raphson appears to converge quadratically, so that
E_{i+1} = c E_i^2
• This convergence is extremely rapid, and occurs under modest conditions once the iterate is close to the solution.
The exact Taylor series expansion of f(x):

0 = f(x_r) = f(x_i) + f'(x_i)(x_r - x_i) + 0.5 f''(ξ)(x_r - x_i)^2

The Newton-Raphson iteration is based on the truncated series

f(x) ≈ f(x_i) + f'(x_i)(x - x_i)

leading to the iteration

x_{i+1} = x_i - f(x_i) / f'(x_i), or 0 = f(x_i) + f'(x_i)(x_{i+1} - x_i)

Subtracting the truncated series from the exact expansion:

0 = f'(x_i)(x_r - x_{i+1}) + 0.5 f''(ξ)(x_r - x_i)^2
0 = f'(x_i) E_{t,i+1} + 0.5 f''(ξ) E_{t,i}^2
E_{t,i+1} = - (f''(ξ) / 2 f'(x_i)) E_{t,i}^2
Pitfalls of Newton-Raphson
• When close to a solution, and when the conditions are satisfied (e.g., f'(x_r) is not 0), Newton-Raphson converges rapidly.
• When started farther away, it may diverge or converge slowly at first. The quadratic convergence applies only once the solution is close.
Fig 6.6
Safeguards for Newton-Raphson
• On each iteration, one can keep track of |f(x)|. If the new proposed iterate has a larger value of this quantity, then replace it by a step half as big.
• The same trick can be used for keeping the iterations away from undesirable regions (non-positive numbers for the log function). One can simply return a very large number instead of NaN.
xrold = xr
fvold = abs(f(xr))
xinc = -f(xr)/fp(xr)
xrnew = xr + xinc
fv = abs(f(xrnew))
do while (fv > fvold)
xinc = xinc/2
xrnew = xr + xinc
fv = abs(f(xrnew))
end do
xr = xrnew
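A runnable Python version of this step-halving safeguard might look like the following sketch (the function and parameter names are mine, and the inner loop is capped so it cannot run forever):

```python
import math

# Step-halving safeguard for Newton-Raphson, applied to f(x) = e^(-x) - x.
def f(x):
    return math.exp(-x) - x

def fp(x):
    return -math.exp(-x) - 1.0

def safeguarded_newton(x, tol=1e-10, maxit=50):
    for _ in range(maxit):
        fv_old = abs(f(x))
        if fv_old < tol:
            break
        xinc = -f(x) / fp(x)
        x_new = x + xinc
        # Halve the step (a bounded number of times) while |f| would increase.
        for _ in range(40):
            if abs(f(x_new)) <= fv_old:
                break
            xinc /= 2.0
            x_new = x + xinc
        x = x_new
    return x

root = safeguarded_newton(0.0)
```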
The Secant Method
• For some functions, it is difficult to calculate the derivative
• We can use a backward difference to approximate the derivative, and then use a Newton-Raphson type calculation
• This is called the secant method
Fig 6.7
Approximate the derivative with a backward difference:

f'(x_i) ≈ (f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)

Substituting into the Newton-Raphson formula gives the secant iteration:

x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
• The secant method is similar to Newton-Raphson.
• It appears similar to false position, but this is not the case: the secant method always keeps the two most recent points, and does not maintain a bracket around the root.
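A minimal Python sketch of the secant iteration applied to f(x) = e^(-x) - x (the starting points and stopping rule are my choices):

```python
import math

# Secant method for f(x) = e^(-x) - x, with two starting points.
def f(x):
    return math.exp(-x) - x

x_prev, x = 0.0, 1.0
for _ in range(20):
    fx_prev, fx = f(x_prev), f(x)
    if fx == fx_prev:
        break  # avoid dividing by zero once the iterates coincide
    # x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
    x_prev, x = x, x - fx * (x_prev - x) / (fx_prev - fx)
    if abs(f(x)) < 1e-12:
        break
```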
Fig 6.8
Fig 6.9
Multiple Roots
• Multiple roots are places where the function and one or more derivatives are zero.
• This is most easily explained for polynomials.
• Non-polynomials can exhibit the same phenomenon
f(x) = (x - 1)(x - 2)^2
has a simple root at x = 1 and a double root at x = 2

f'(x) = (x - 2)^2 + 2(x - 1)(x - 2)
f'(2) = 0
f'(1) ≠ 0

f(x) = (x - 1)(x - 2)^3
has a triple root at x = 2
• Multiple roots cause potential trouble for all methods of root finding
• Bracketing methods may not work if there is an even multiple root in which the function does not cross the axis.
• An interval in which a continuous function changes sign has at least one root.
• An interval in which a continuous function does not change sign may or may not have a root
• Newton-Raphson and secant methods divide by f'(x), which is a problem when it is 0.
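The slow convergence at a multiple root is easy to demonstrate. A Python sketch (my own) of Newton-Raphson on f(x) = (x - 1)(x - 2)^2 near the double root, with a starting point I chose:

```python
# Newton-Raphson on f(x) = (x - 1)(x - 2)^2, which has a double root at x = 2.
def f(x):
    return (x - 1.0) * (x - 2.0) ** 2

def fp(x):
    return (x - 2.0) ** 2 + 2.0 * (x - 1.0) * (x - 2.0)

x = 3.0
errors = []
for _ in range(30):
    x = x - f(x) / fp(x)
    errors.append(abs(x - 2.0))

# At the double root, convergence is only linear: the error roughly
# halves each step instead of being squared.
ratio = errors[-1] / errors[-2]
```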
Fig 6.10
Solving Equations with Excel

• Excel has two tools for finding roots of equations.
• The first is Tools/Goal Seek (2003) or Data/Data Tools/What If/Goal Seek (2007), which can determine what value of one cell makes a second cell take on a given value (Alt-T G still works).
• The second is Tools/Solver (2003) or Data/Analysis/Solver (2007), which can take more than one input cell, can take constraints, and can maximize and minimize as well as solve.
• Solver must be enabled in Excel Options.
Solving Equations with Matlab
• Use the MATLAB function fzero to find a single root of a function.
• fzero uses a combination of open and bracketing search to locate a root.
>> x0 = [0 1.3];
>> x = fzero(inline('x^10-1'), x0)

x =

     1

>> x0 = [-1.3 0];
>> x = fzero(inline('x^10-1'), x0)

x =

    -1

>> x0 = 0;
>> x = fzero(inline('x^10-1'), x0)

x =

    -1
Func-count    x            f(x)           Procedure
    1         0            -1             initial
    2        -0.0282843    -1             search
    3         0.0282843    -1             search
    4        -0.04         -1             search
    5         0.04         -1             search
   ...
   12        -0.16         -1             search
   13         0.16         -1             search
   14        -0.226274     -1             search
   15         0.226274     -1             search
   16        -0.32         -0.999989      search
   17         0.32         -0.999989      search
   18        -0.452548     -0.99964       search
   19         0.452548     -0.99964       search
   20        -0.64         -0.988471      search
   21         0.64         -0.988471      search
   22        -0.905097     -0.631065      search
   23         0.905097     -0.631065      search
   24        -1.28         10.8059        search

Looking for a zero in the interval [-1.28, 0.9051]

   25         0.784528     -0.911674      interpolation
   26        -0.247736     -0.999999      bisection
   27        -0.763868     -0.932363      bisection
   28        -1.02193       0.242305      bisection
   29        -0.968701     -0.27239       interpolation
   30        -0.996873     -0.0308299     interpolation
   31        -0.999702     -0.00297526    interpolation
   32        -1             5.53132e-006  interpolation
   33        -1            -7.41965e-009  interpolation
   34        -1            -1.88738e-014  interpolation
   35        -1             0             interpolation

Zero found in the interval: [-1.28, 0.9051].

x =

    -1