Lecture 007




Roots of Equations

1.0.1 Newton's Method

Consider the graph of y = f(x) shown in Figure 0.0. The root a occurs where the graph crosses the x-axis. We will usually have an estimate of a, and it will be denoted here by x0. To improve on this estimate, consider the straight line that is tangent to the graph at the point (x0, f(x0)). If x0 is near a, this tangent line should be nearly coincident with the graph of y = f(x) for points x about a. Then the root of the tangent line should nearly equal a. This root is denoted by x1.

To find a formula for x1, consider the equation of the line tangent to the graph of y = f(x) at (x0, f(x0)). It is simply the graph of y = p1(x) for the linear Taylor polynomial

p1(x) = f(x0) + f'(x0) (x - x0).

By definition, x1 is the root of p1(x). Solving

f(x0) + f'(x0) (x - x0) = 0

leads to

x1 = x0 - f(x0)/f'(x0).

Since x1 is expected to be an improvement over x0 as an estimate of a, this entire procedure can be repeated with x1 as the initial guess. This leads to the new estimate

x2 = x1 - f(x1)/f'(x1).

Repeating this process, we obtain a sequence of numbers x1, x2, x3, ... that we hope will approach the root a. These numbers are called iterates, and they are defined recursively by the following general iteration formula:

x_{n+1} = xn - f(xn)/f'(xn),   n = 0, 1, 2, ...

This is Newton's method for solving f(x) = 0. Sometimes it is also called the Newton-Raphson method.


Figure 0.0. Iteration process of Newton's method: the iterates x0, x1, x2 on the graph of f(x).

Another way to derive Newton's method is based on the two-point representation of a line. For this purpose, we note that the point-slope form of the tangent line to y = f(x) at the initial approximation x1 is

y - f(x1) = f'(x1) (x - x1).

If f'(x1) ≠ 0, then this line is not parallel to the x-axis and consequently it crosses the x-axis at some point (x2, 0). Substituting the coordinates of this point into the formula above yields

0 - f(x1) = f'(x1) (x2 - x1).

Solving for x2 we obtain

x2 = x1 - f(x1)/f'(x1).

The next approximation can be obtained more easily. If we view x2 as the starting approximation and x3 as the new approximation, we can simply apply the given formula with x2 in place of x1 and x3 in place of x2. This yields

x3 = x2 - f(x2)/f'(x2),

provided f'(x2) ≠ 0. In general, if xn is the nth approximation, then it is evident from the two steps given above that the improved approximation x_{n+1} is given by

x_{n+1} = xn - f(xn)/f'(xn)

with n = 1, 2, 3, ...

This formula is realized in Mathematica by the following line

newtonsMethod[f_, x1_] := x - f/D[f, x] /. x -> x1

The replacement x -> x1 in the function f = f(x) is necessary because we are dealing with numerical values in the calculation. The iteration of this function can be carried out by the special Mathematica functions Nest[] or NestList[]. These functions generate a nested expression of the newtonsMethod[] function and deliver the approximation of the root. For example, if we are going to determine one of the roots of the polynomial

p(x) = x^3 - 4 x^2 + 5 = 0

defined as

p[x_] := x^3 - 4 x^2 + 5

whose graph is given in Figure 0.0.

Figure 0.0. Graph of the polynomial p(x) = x^3 - 4 x^2 + 5.

If we apply Newton's method to this polynomial with the initial value x1 = 0.155, we get the following list of 7 approximations

res = NestList[newtonsMethod[p[x], #] &, 0.155, 7]

{0.155, 4.357, 3.82396, 3.64124, 3.61838, 3.61803, 3.61803, 3.61803}

The result is a list of approximations of the root starting with the initial value.
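As a quick cross-check, added here for illustration and not part of the lecture text, the cubic can also be solved exactly; the iteration above has converged to the largest of its three roots:

Solve[x^3 - 4 x^2 + 5 == 0, x]

(* the exact roots are -1 and (5 ± Sqrt[5])/2; the largest, (5 + Sqrt[5])/2 ≈ 3.61803, is the value approached above *)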

Example 0.0. Newton's Method I

Use Newton's method to find the seventh root of 2, 2^(1/7).

Solution 0.3. Observe that finding 2^(1/7) is equivalent to finding the positive root of the equation

x^7 - 2 = 0,

so we take

p[x_] := x^7 - 2


and apply Newton's method to this formula with an appropriate initial value x1 = 0.85. The iteration of the method delivers

NestList[newtonsMethod[p[x], #] &, 0.85, 17]

{0.85, 1.48613, 1.30035, 1.17368, 1.11532, 1.10442, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409, 1.10409}

This means that 2^(1/7) ≈ 1.10409.

Example 0.0. Newton's Method II

Find the solution of the equation cos(x) = x.

Solution 0.4. Newton's method is not restricted to polynomials; it can be applied to any kind of function which allows a first order derivative. In the current case, we rewrite the equation as

p[x_] := Cos[x] - x

and apply Newton's method to this expression to get

res = NestList[newtonsMethod[p[x], #] &, 0.155, 7]

{0.155, 0.876609, 0.742689, 0.739088, 0.739085, 0.739085, 0.739085, 0.739085}

The symbolic expression for this iteration can be found by replacing the numerical initial value by a general symbol, as shown in the next line. We use x1 as a symbol instead of a number.

symbolicNewton = NestList[newtonsMethod[p[x], #] &, x1, 2] // Simplify

{x1, (Cos[x1] + x1 Sin[x1])/(1 + Sin[x1]),
  (Cos[(Cos[x1] + x1 Sin[x1])/(1 + Sin[x1])] +
    ((Cos[x1] + x1 Sin[x1])/(1 + Sin[x1])) Sin[(Cos[x1] + x1 Sin[x1])/(1 + Sin[x1])]) /
   (1 + Sin[(Cos[x1] + x1 Sin[x1])/(1 + Sin[x1])])}

    The result is a symbolic representation of the nested application of Newton's method and thus represents

    an approximation formula for the root if we insert an initial value x1 into this formula.

symbolicNewton /. x1 -> 0.155

    0.155, 0.876609, 0.742689

The symbolic formula delivers the same values for the approximation as expected. However, the symbolic representation of Newton's formula allows us to set up a tree of approximation formulas which can be efficiently used for different initial values. The advantage of the symbolic approach is that we get a formula for the approximation which needs only a single numeric value to get the final answer. There are, for example, no rounding errors.

In the following we will examine the error bounds of Newton's method. Let us assume f(x) has at least two continuous derivatives for all x in some interval about the root a. Further assume that

f'(a) ≠ 0.

This says that the graph of y = f(x) is not tangent to the x-axis when the graph intersects it at x = a. Also note that combining f'(a) ≠ 0 with the continuity of f'(x) implies that f'(x) ≠ 0 for all x near a.

To estimate the error we use Taylor's theorem to write

f(a) = f(xn) + (a - xn) f'(xn) + (1/2) (a - xn)^2 f''(cn)

where cn is an unknown point between a and xn. Note that f(a) = 0 by assumption; dividing by f'(xn), we obtain

0 = f(xn)/f'(xn) + (a - xn) + (a - xn)^2 f''(cn)/(2 f'(xn)).

Since f(xn)/f'(xn) = xn - x_{n+1} by Newton's iteration formula, solving for a - x_{n+1} gives

a - x_{n+1} = (a - xn)^2 (-f''(cn)/(2 f'(xn))).

This formula says that the error in x_{n+1} is nearly proportional to the square of the error in xn. When the initial error is sufficiently small, this shows that the error in the succeeding iterates will decrease very rapidly. This formula can also be used to give a formal mathematical proof of the convergence of Newton's method.

For the estimation of the error, we are computing a sequence of iterates xn, and we would like to estimate their accuracy to know when to stop the iteration. To estimate a - xn, we note that, since f(a) = 0, we have

f(xn) = f(xn) - f(a) = f'(ξn) (xn - a)

for some ξn between xn and a, by the mean-value theorem. Solving for the error, we obtain

a - xn = -f(xn)/f'(ξn) ≈ -f(xn)/f'(xn),

provided that xn is so close to a that f'(ξn) ≈ f'(xn). From Newton's iteration formula this becomes

a - xn ≈ x_{n+1} - xn.

This is the standard error estimation formula for Newton's method, and it is usually fairly accurate. The following function uses this estimate of the error to terminate the iteration.


newtonsMethod[f_, x1_] := Block[{x1in = x1, xnew, eps = 0.00001},
  (* generate an infinite loop *)
  While[0 == 0,
   (* Newton's iteration formula *)
   xnew = x - f/D[f, x] /. x -> x1in;
   (* terminate when the error estimate Abs[xnew - x1in] is below eps *)
   If[Abs[xnew - x1in] < eps, Return[xnew]];
   x1in = N[xnew];
   Print["x = ", xnew]]]

newtonsMethod[x^6 - x - 1, 1]

x = 6/5
x = 1.14358
x = 1.13491
x = 1.13472

1.13472

From the discussion above, Newton's method converges more rapidly than the secant method. Thus Newton's method should require fewer iterations to attain a given accuracy. However, Newton's method requires two function evaluations per iteration, that of f(xn) and f'(xn), while the secant method requires only one evaluation, f(xn), if it is programmed carefully to retain the value of f(x_{n-1}) from the previous iteration. Thus, the secant method will require less time per iteration than Newton's method.
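For comparison, the following is a minimal sketch of the secant iteration written in the same style as newtonsMethod above (f is an expression in the global variable x); the name secantMethod, its argument order, and the tolerance defaults are illustrative choices and not part of the lecture. The point of the sketch is that the old function value is kept, so only one new evaluation of f is needed per step.

secantMethod[f_, x0_, x1_, eps_: 0.00001, maxIter_: 50] :=
 Block[{xa = N[x0], xb = N[x1], fa, fb, xc},
  fa = f /. x -> xa;   (* evaluated once, then reused *)
  fb = f /. x -> xb;
  Do[
   xc = xb - fb (xb - xa)/(fb - fa);   (* secant step through (xa, fa) and (xb, fb) *)
   If[Abs[xc - xb] < eps, Return[xc]];
   {xa, fa} = {xb, fb};                (* retain the previous point and its value *)
   xb = xc;
   fb = f /. x -> xb,                  (* the single new evaluation per iteration *)
   {maxIter}];
  xb]

A call such as secantMethod[x^6 - x - 1, 1, 1.5] should approach the same root, about 1.13472, as the Newton iteration above.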

    Fixed-Point Method

The Newton method and the secant method are examples of one-point and two-point methods, respectively. In this section, we give a more general introduction to iteration methods, presenting a general theory for one-point iteration formulas for a single variable.

As a motivational example, consider solving the equation

x^2 - 7 = 0

for the root a = √7 ≈ 2.64575. To find this number we use the same ideas as in Newton's approach to set up a general iteration formula, which can be stated as

x_{n+1} = g(xn).

To solve this simple problem we introduce four iteration schemes for the equation:

1. x_{n+1} = 7 + xn - xn^2

2. x_{n+1} = 7/xn

3. x_{n+1} = 1 + xn - (1/7) xn^2

4. x_{n+1} = (1/2) (xn + 7/xn)

All four iterations have the form x_{n+1} = g(xn) for appropriate continuous functions g(x). For example, for the first scheme g(x) = 7 + x - x^2.

Each of the formulas is represented by the graph of g(x) together with its first order derivative, starting with the first scheme in Figure 0.0.

Figure 0.0. Graph of the function g(x) = 7 + x - x^2 and its derivative g'(x) = 1 - 2 x.

We can iterate the function g(x) by using the Mathematica function NestList[] to generate a sequence of numbers related to the iteration

x_{n+1} = 7 + xn - xn^2,

which delivers the following result for the specific initial value x0 = 2.6:

NestList[7 + # - #^2 &, 2.6, 6]

{2.6, 2.84, 1.7744, 5.6259, -19.0249, -373.972, -140222.}

Assuming x = 2 and x = 3 as the lower and upper boundaries of an interval in which the actual root is located, we observe that the magnitude of the first order derivative of this function g is larger than one on the whole interval (see Figure 0.0); its smallest value there is |g'(2)| = 3, which is already larger than 1. Take this for the moment as an observation and remember it in the following discussion. The second iteration formula


x_{n+1} = 7/xn

is also graphically represented on the interval x ∈ [2, 3] in the following Figure 0.0.

Figure 0.0. Graph of the function g(x) = 7/x and its derivative g'(x) = -7/x^2.

Again we can use the function NestList[] to generate a sequence of numbers based on this iteration formula:

NestList[7/# &, 2.6, 16]

{2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6, 2.69231, 2.6}

In this case we observe that the sequence, using the same initial value x0 = 2.6, oscillates between two values which enclose the root we are looking for. However, contrary to the previous sequence, the current sequence does not diverge. If we examine the first order derivative of this iteration function we observe that the magnitude |g'(x)| is bounded and that its maximum on the interval is greater than one (see Figure 0.0). The next Figure 0.0 shows the graph of g(x) for the following iteration

x_{n+1} = 1 + xn - (1/7) xn^2


Figure 0.0. Graph of the function g(x) = 1 + x - (1/7) x^2 and its derivative g'(x) = 1 - (2/7) x.

The related sequence converges to a single value which represents, to a certain accuracy, the root of x^2 = 7:

NestList[1 + # - #^2/7 &, 2.5, 6]

{2.5, 2.60714, 2.63612, 2.64339, 2.64517, 2.64561, 2.64572}

For this iteration formula we also observe that the magnitude of the first order derivative g'(x) is smaller than one on the interval (compare Figure 0.0). For the fourth iteration formula


x_{n+1} = (1/2) (xn + 7/xn),

g(x) and g'(x) on the interval x ∈ [2, 3] are shown in the following Figure 0.0.

Figure 0.0. Graph of the function g(x) = (1/2) (x + 7/x) and its derivative g'(x) = (1/2) (1 - 7/x^2).

The generation of the sequence using the initial value x0 = 2.6 shows

NestList[(1/2) (# + 7/#) &, 2.6, 6]

{2.6, 2.64615, 2.64575, 2.64575, 2.64575, 2.64575, 2.64575}

that we approach the same value as for the third iteration formula. Here again the maximum of |g'(x)| on the interval is smaller than one.
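The observation made for each of the four schemes can be collected in a single computation. The following sketch is added for illustration (the list gList and the use of NMaxValue are not part of the lecture); it estimates max |g'(x)| on the interval [2, 3] for all four iteration functions at once.

gList = {7 + x - x^2, 7/x, 1 + x - x^2/7, (1/2) (x + 7/x)};

(* numerical estimate of the maximum of |g'(x)| on [2, 3] for each scheme *)
Table[NMaxValue[{Abs[D[g, x]], 2 <= x <= 3}, x], {g, gList}]

The values should come out close to {5., 1.75, 0.428571, 0.375}; only the last two are below one, in agreement with the convergence behavior observed above.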

All four iterations have the property that if the sequence {xn : n ≥ 0} has a limit a, then a is a root of the defining equation. For each equation, we check this as follows: replace xn and x_{n+1} by a, and then show that this implies a = √7. The next lines show the results of this calculation for the four cases, respectively.

Solve[-a^2 + a + 7 == a, a]

{{a -> -Sqrt[7]}, {a -> Sqrt[7]}}

Solve[7/a == a, a]

{{a -> -Sqrt[7]}, {a -> Sqrt[7]}}

Solve[-a^2/7 + a + 1 == a, a]

{{a -> -Sqrt[7]}, {a -> Sqrt[7]}}

Solve[(1/2) (a + 7/a) == a, a]

{{a -> -Sqrt[7]}, {a -> Sqrt[7]}}

To explain these results, we are now going to discuss a general theory for one-point iteration formulas which explains all the observed facts.

The four iterations above all have the same form

x_{n+1} = g(xn)

for appropriate continuous functions g(x). If the iterates xn converge to a point a, then


lim_{n → ∞} x_{n+1} = lim_{n → ∞} g(xn)

a = g(a).

Thus a is a solution of the equation x = g(x), and a is called a fixed point of the function g. We call x = g(x) a fixed point equation.

The next step is to set up a general approach to explain when the iteration x_{n+1} = g(xn) will converge to a fixed point of g. We begin with a lemma on the existence of solutions of x = g(x).

Corollary 0.0. Fixed Point Existence

Let g(x) be a continuous function on an interval [a, b], and suppose g satisfies the property

a ≤ g(x) ≤ b for all a ≤ x ≤ b,

that is, g maps the interval [a, b] into itself. Then the equation x = g(x) has at least one solution a in the interval [a, b].

Figure 0.0. Representation of the fixed point lemma for the different functions used in the iterations: 7 + x - x^2, 7/x, 1 + x - x^2/7, and (1/2) (x + 7/x).

The observations made so far are formulated in the following theorem.

Theorem 0.0. Contraction Mapping

Assume g(x) and g'(x) are continuous for a ≤ x ≤ b, and assume g satisfies the conditions of Corollary 0.0. Further assume that

l = max_{a ≤ x ≤ b} |g'(x)| < 1.

Then the following statements hold:

S1: There is a unique solution a of x = g(x) in the interval [a, b].

S2: For any initial estimate x0 in [a, b], the iterates xn will converge to a.

S3: |a - xn| ≤ (l^n/(1 - l)) |x0 - x1|,  n ≥ 0.

S4: lim_{n → ∞} (a - x_{n+1})/(a - xn) = g'(a). Thus for xn close to a,

a - x_{n+1} ≈ g'(a) (a - xn).

Proof 0.3. There is some useful information in the proof, so we go through most of the details of it. Note first that the hypotheses on g allow us to use Corollary 0.0 to assert the existence of at least one solution to x = g(x). In addition, using the mean value theorem, we have that for any two points w and z in [a, b],

g(w) - g(z) = g'(c) (w - z)

for some c between w and z. Using the property of l in this equation, we obtain

|g(w) - g(z)| = |g'(c)| |w - z| ≤ l |w - z|.

S3: Since a = g(a) and x1 = g(x0), the contraction property together with the triangle inequality gives

|a - x0| ≤ |a - x1| + |x1 - x0| ≤ l |a - x0| + |x1 - x0|

(1 - l) |a - x0| ≤ |x1 - x0|

|a - x0| ≤ (1/(1 - l)) |x1 - x0|.

Combining this with the final result of S2, |a - xn| ≤ l^n |a - x0|, we can conclude that

|a - xn| ≤ (l^n/(1 - l)) |x0 - x1|,  n ≥ 0.

S4: We use a - x_{n+1} = g(a) - g(xn) = g'(cn) (a - xn) to write

lim_{n → ∞} (a - x_{n+1})/(a - xn) = lim_{n → ∞} g'(cn).

Each cn is between a and xn, and xn → a by S2. Thus, cn → a. Combining this with the continuity of the function g'(x), we obtain

lim_{n → ∞} g'(cn) = g'(a),

which finishes the proof.

QED
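Statements S3 and S4 can be checked numerically for the convergent scheme g(x) = 1 + x - x^2/7 on the interval [2, 3]. The following lines are an illustrative sketch (the names g3, alpha, and iter are introduced only for this check and are not part of the lecture):

g3[x_] := 1 + x - x^2/7;
alpha = Sqrt[7];                                    (* the exact fixed point *)
l = NMaxValue[{Abs[g3'[x]], 2 <= x <= 3}, x];       (* contraction constant, about 0.43 *)
iter = NestList[g3, 2.0, 10];

(* S3: the actual error should stay below the bound l^n/(1 - l) |x1 - x0| *)
Table[{n, Abs[alpha - iter[[n + 1]]], l^n/(1 - l) Abs[iter[[2]] - iter[[1]]]}, {n, 0, 10}]

(* S4: the error ratios should approach g3'[alpha] = 1 - 2/Sqrt[7], about 0.244 *)
Table[(alpha - iter[[n + 2]])/(alpha - iter[[n + 1]]), {n, 0, 9}]

The first table should show the error column always below the bound column, and the ratios in the second table should settle near 0.244.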

We need a more precise way to deal with the concept of the speed of convergence of an iteration method. We say that a sequence {xn : n ≥ 0} converges to a with an order of convergence p ≥ 1 if

|a - x_{n+1}| ≤ c |a - xn|^p,  n ≥ 0,

for some constant c ≥ 0. The cases p = 1, p = 2, and p = 3 are referred to as linear, quadratic, and cubic convergence, respectively. Newton's method usually converges quadratically, and the secant method has order of convergence p = (1 + √5)/2. For linear convergence we make the additional requirement that c < 1, as otherwise the error |a - xn| need not converge to zero.

If |g'(a)| < 1 in the preceding theorem, then the relation |a - x_{n+1}| ≤ l |a - xn| shows that the iterates xn are linearly convergent. If, in addition, g'(a) ≠ 0, then the relation a - x_{n+1} ≈ g'(a) (a - xn) shows that the convergence is exactly linear, with no higher order of convergence being possible. In this case, we call the value g'(a) the linear rate of convergence.
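The quadratic convergence of Newton's method can be made visible in the same way on the model problem x^2 - 7 = 0: the ratio |a - x_{n+1}|/|a - xn|^2 settles near the constant |f''(a)/(2 f'(a))| = 1/(2 Sqrt[7]) ≈ 0.189, while the first-power ratio |a - x_{n+1}|/|a - xn| tends to zero. The lines below are an illustrative sketch, not part of the lecture text.

alpha = Sqrt[7];
newton = NestList[# - (#^2 - 7)/(2 #) &, 2.0, 4];   (* Newton iteration for x^2 - 7 = 0 *)

(* ratios of each new error to the square of the previous error *)
Table[Abs[alpha - newton[[n + 1]]]/Abs[alpha - newton[[n]]]^2, {n, 1, 3}]

These ratios should approach about 0.19, the signature of quadratic convergence.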

In practice Theorem 0.0 is seldom used directly. The main reason is that it is difficult to find an interval [a, b] for which the conditions of the corollary are satisfied. Instead, we look for a way to use the theorem in a practical way. The key idea is the result a - x_{n+1} ≈ g'(a) (a - xn), which shows how the iteration error behaves when the iterates xn are near a.

Corollary 0.0. Convergence of the Fixed-Point Method

Assume that g(x) and g'(x) are continuous for some interval c < x < d, with the fixed point a contained in the interval. Moreover, assume that

|g'(a)| < 1.

Then there is an interval [a, b] around a for which the hypotheses, and hence also the conclusions, of Theorem 0.0 are true. If, to the contrary, |g'(a)| > 1, then the iteration method x_{n+1} = g(xn) will not converge to a. When |g'(a)| = 1, no conclusion can be drawn.

If we check the iteration formulas above against this corollary, we observe that the first and second iteration schemes do not converge to the root. This behavior is one of the shortcomings of the fixed point method. In general, fixed point methods are only used in practice if we know the interval in which the fixed point is located and if we have a function g(x) available satisfying the requirements of Theorem 0.0. The following examples demonstrate the application of the fixed point theorem.

Example 0.0. Fixed Point Method I

Let g(x) = (x^2 - 1)/5 on [-1, 1]. The Extreme Value Theorem implies that the absolute minimum of g occurs at x = 0 with g(0) = -1/5. Similarly, the absolute maximum of g occurs at x = 1 and has the value g(1) = 0. Moreover, g is continuous and

|g'(x)| = |2 x/5| ≤ 2/5 for all x ∈ (-1, 1).

So g satisfies all the hypotheses of Theorem 0.0 and has a unique fixed point in [-1, 1].

Solution 0.5. In this example, the unique fixed point in the interval [-1, 1] can be determined algebraically. If

a = g(a) = (a^2 - 1)/5, then a^2 - 5 a - 1 = 0,

which, by the quadratic formula, implies that

solfp = Solve[a^2 - 5 a - 1 == 0, a]

{{a -> (1/2) (5 - Sqrt[29])}, {a -> (1/2) (5 + Sqrt[29])}}

g[x_] := (1/5) (x^2 - 1)

g'[x] /. x -> 4

8/5

Note that g actually has two fixed points, at x = (1/2) (5 ± Sqrt[29]). However, the second solution of {{x -> -0.19258240356725187}, {x -> 5.192582403567252}} is not located in the interval we selected but is included in the interval [4, 6]. This second solution of the fixed point equation does not satisfy the assumptions of Theorem 0.0, since g'(4) = 8/5 > 1. Hence, the hypotheses of Theorem 0.0 are sufficient to guarantee a unique fixed point, but they are not necessary (see Figure 0.0).


Figure 0.0. Fixed point equation g(x) = (x^2 - 1)/5 and the fixed points (1/2) (5 - Sqrt[29]) and (1/2) (5 + Sqrt[29]).
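The different character of the two fixed points can also be seen directly by iterating g; the following lines are an illustrative sketch with freely chosen starting values, not part of the lecture text:

g[x_] := (x^2 - 1)/5;    (* the same g as above *)

NestList[g, 0.5, 8]      (* drifts towards the attracting fixed point near -0.19258 *)

NestList[g, 5.4, 6]      (* started just above 5.19258, the iterates run away from it *)

A starting value near the first fixed point settles down, while starting values above the second fixed point produce a growing sequence, in accordance with |g'| being smaller respectively larger than one at the two points.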

To demonstrate the use of the fixed point theorem in finding roots of equations, let us examine the following example.

Example 0.0. Fixed Point Method II

The equation x^3 + 4 x^2 - 10 = 0 has a unique root in [1, 2]. There are many ways to change the equation to the fixed-point form x = g(x) using simple algebraic manipulation. We select the following representation of the fixed point equation, with

g(x) = (10/(4 + x))^(1/2).

Solution 0.7. Using the function g(x) as represented above, we first have to check the prerequisites of Theorem 0.0. The function

g[x_] := Sqrt[10/(x + 4)]

assumes the following values at the boundaries of the interval:

{g[1], g[2]}

{Sqrt[2], Sqrt[5/3]}

showing that the interval [1, 2] is mapped to itself. The first order derivative of g(x) generates the values


{g'[x] /. x -> 1, g'[x] /. x -> 2}

{-(1/(5 Sqrt[2])), -(1/12) Sqrt[5/3]}

representing values which are smaller than one in magnitude. Thus the assumptions of Theorem 0.0 are satisfied and the fixed point is reached by a direct iteration:

NestList[Sqrt[10/(# + 4)] &, 1., 7]

{1., 1.41421, 1.35904, 1.36602, 1.36513, 1.36524, 1.36523, 1.36523}

which represents the root located in the interval [1, 2] (see Figure 0.0). A sufficient accuracy of the root is reached within a few iteration steps.

Figure 0.0. Fixed point equation g(x) = (10/(4 + x))^(1/2) and the fixed point in the interval [1, 2].
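As a quick cross-check, added here for illustration, the value produced by the iteration can be substituted back into the original cubic:

x^3 + 4 x^2 - 10 /. x -> 1.36523

The result should be very close to zero, confirming that the fixed point of g is indeed the root of x^3 + 4 x^2 - 10 = 0 in [1, 2].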

So far we did not talk about the accuracy of the fixed point method. Actually the accuracy of the method is determined by part S3 of Theorem 0.0, which relates the actual error to the convergence rate l:

|a - xn| ≤ (l^n/(1 - l)) |x0 - x1|,  n ≥ 0.

If we require an accuracy |a - xn| = e in the nth iteration, we are able to estimate the necessary number of iterations by solving for n:


Solve[e == l^n Abs[x0 - x1]/(1 - l), n]

{{n -> Log[e (1 - l)/Abs[x0 - x1]]/Log[l]}}

This formula needs two of the iteration steps, x0 and x1, and the convergence rate l = max |g'(x)| with x ∈ [a, b].
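This estimate can be wrapped into a small helper function; the name iterationEstimate and its argument order are illustrative choices, not part of the lecture. Given the required accuracy e, the convergence rate l, and the first two iterates, it returns the number of iterations guaranteed by S3:

iterationEstimate[e_, l_, x0_, x1_] := Ceiling[Log[e (1 - l)/Abs[x0 - x1]]/Log[l]]

For the fixed point problem of the following example, a call such as iterationEstimate[10.^-5, N[Log[2]/2^(1/3)], 1./3, 2.^(-1/3)] should return 20, in line with the estimate computed below.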

Example 0.0. Fixed Point Method III

We will estimate the number of iterations for the fixed point problem

x = 2^-x with x ∈ [1/3, 1].

We are interested in an accuracy of e = 10^-5.

Solution 0.8. The formula derived above tells us that the number of iterations is related to the accuracy e and the convergence rate of the fixed point equation. The convergence rate l is determined for this equation by defining g as

g[x_] := 2^-x

and its derivative

derg = g'[x]

-2^-x Log[2]

For the given interval we find

l = Max[Abs[derg /. x -> 1/3], Abs[derg /. x -> 1]]

Log[2]/2^(1/3)

which is smaller than one. The first two iterations of the given fixed point equation follow by

initials = NestList[2^-# &, 1/3, 1]

{1/3, 1/2^(1/3)}

The magnitude of the difference of these two values is

absdiff = Abs[Subtract @@ initials]

1/2^(1/3) - 1/3

Using this information in the formula derived above we find

Log[(1 - l)/(10^5 absdiff)]/Log[l]

Log[(1 - Log[2]/2^(1/3))/(100000 (1/2^(1/3) - 1/3))]/Log[Log[2]/2^(1/3)]

which evaluates numerically to about 19.3, so about 19 to 20 iterations are needed to guarantee the desired accuracy. We can check this by comparing the symbolic fixed point solution with the numeric solution. The symbolic fixed point follows by solving the fixed point equation x = 2^-x as shown next

solution = Flatten[Solve[x == 2^-x, x]]

{x -> ProductLog[Log[2]]/Log[2]}

The numerical iteration delivers the following results

numsol = NestList[2^-# &, 1./3, 15]

{0.333333, 0.793701, 0.576863, 0.67042, 0.628324, 0.646928, 0.638639, 0.642319, 0.640682, 0.641409, 0.641086, 0.64123, 0.641166, 0.641194, 0.641182, 0.641187}

The magnitude of the difference between the exact fixed point a and the iterates is shown in Figure 0.0.


Figure 0.0. Error of the fixed point iteration for x = 2^-x represented in a logarithmic plot. The error |a - xn| decreases with an exponential (linear-convergence) decay. The error level 10^-5 is reached after 14 iterations, well within the estimate given above.
