Revision of Numerical Integration Schemes

7/27/2019 Revision of Numerical Integration Schemes

    Revision of ODEs for CompPhys

    E

    July 16, 2013

Ordinary Differential Equations

    ODEs are quite easy.

    Numerical Solutions thereto

Solving ODEs numerically is necessarily a matter of approximation: computers are not continuous machines but use discrete numbers, and they additionally cannot deal with arbitrary-precision numbers¹, and so the whole integral has to be discretized. Both of these effects have a role to play in the deviation of the algorithm from the true value of the integral.

We need only study 1st-order ODEs

We can decompose an arbitrarily high-order differential equation into a system of first-order ODEs to solve using the algorithms to follow. For example

y^{(n)} = f(\{y^{(k)}\}_{k=0,\dots,n-1}, x) \qquad (1)

    can be manipulated into a series of first order equations by defining

y_0 = y; \quad y_k' = y_{k+1}, \quad k = 0,\dots,n-2 \qquad (2)

    which gives us

y_0' = y_1
y_1' = y_2
\vdots
y_{n-2}' = y_{n-1}
y_{n-1}' = f(\{y_k\}_{k=0,\dots,n-1}, x) \qquad (3)

In vector form, this is just \mathbf{y}' = \mathbf{f}(\mathbf{y}, x).
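As a concrete sketch (the harmonic oscillator y'' = -y; all names here are illustrative), the system form looks like:

```python
# Reduce y'' = -y to the first-order system y0' = y1, y1' = -y0.
# The state is the tuple y = (y0, y1) = (y, y'); f returns its derivative.

def f(y, x):
    """Right-hand side of the first-order system for y'' = -y."""
    y0, y1 = y
    return (y1, -y0)

# One forward-Euler step on the system, just to show the vector form y' = f(y, x):
h = 0.01
y = (1.0, 0.0)   # y(0) = 1, y'(0) = 0
x = 0.0
dy = f(y, x)
y = (y[0] + h*dy[0], y[1] + h*dy[1])
```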

Integration Algorithms

Euler Method

Euler's method is pretty dumb, but it gets the job done.²

¹ Well, they obviously can if you want them to, but the libraries are so slow they may as well not.
² Actually, it doesn't.
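For reference, the forward Euler update is y_{n+1} = y_n + h f(x_n, y_n); a minimal sketch (the function name is illustrative):

```python
def euler_step(x, h, y, f):
    """One forward Euler step for y' = f(x, y): first-order accurate."""
    return x + h, y + h * f(x, y)

# Example: y' = y, y(0) = 1, exact solution e^x.
x, y = 0.0, 1.0
for _ in range(10):
    x, y = euler_step(x, 0.1, y, lambda x, y: y)
# y now approximates e, crudely: the global error is O(h).
```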


Illustration of the midpoint method, assuming that y_n equals the exact value y(t_n). The midpoint method computes y_{n+1} so that the red chord is approximately parallel to the tangent line at the midpoint (the green line).

The formula for the backward Euler method, y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), has y_{n+1} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.
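One way to deal with the implicit equation is fixed-point iteration, seeded with the explicit Euler value (a sketch; for stiff problems Newton's method is the more robust choice):

```python
def backward_euler_step(x, h, y, f, iters=50):
    """One backward Euler step: solve y_new = y + h*f(x+h, y_new)
    by fixed-point iteration, using the explicit Euler value as a guess."""
    y_new = y + h * f(x, y)              # explicit predictor
    for _ in range(iters):
        y_new = y + h * f(x + h, y_new)  # contraction if |h * df/dy| < 1
    return x + h, y_new

# Example: the decaying equation y' = -2y, y(0) = 1, one step of h = 0.1.
x, y = backward_euler_step(0.0, 0.1, 1.0, lambda x, y: -2.0 * y)
# The exact backward-Euler value is 1/(1 + 2*0.1) = 0.8333...
```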

Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method.

    Taylor Expansion Method

The Euler method can be thought of as a first-order Taylor-expansion method.

The local error of the Taylor-expansion algorithm of order p is O(h^{p+1}); the global error is O(h^p). The main disadvantage of this approach is that it requires recursively computing possibly high partial derivatives of f(y, x).

    Midpoint Method

    Further modification of the Euler method leads to the Midpoint method.

y_{n+1} = y_n + h\,f\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}h\,f(t_n, y_n)\right) \qquad (4)

The name of the method comes from the fact that in the formula above the function f is evaluated at t = t_n + h/2, which is the midpoint between t_n, at which the value of y(t) is known, and t_{n+1}, at which the value of y(t) needs to be found.
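Equation (4) translates directly into code (a sketch; the names are illustrative):

```python
def midpoint_step(t, h, y, f):
    """One explicit-midpoint step: Euler half-step to t + h/2,
    then a full step using the slope evaluated there."""
    y_half = y + 0.5 * h * f(t, y)             # Euler predictor to the midpoint
    return t + h, y + h * f(t + 0.5 * h, y_half)

# Example: y' = y, y(0) = 1; one step of h = 0.1.
t, y = midpoint_step(0.0, 0.1, 1.0, lambda t, y: y)
# y = 1 + h + h^2/2 = 1.105, matching e^h = 1.10517... to O(h^3).
```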

The local error at each step of the midpoint method is of order O(h^3), giving a global error of order O(h^2). Thus, while more computationally intensive than Euler's method, the midpoint method generally gives more accurate results.

    The method is an example of a class of higher-order methods known as Runge-Kutta methods.


Illustration of numerical integration for the equation y' = y, y(0) = 1. Blue is the Euler method; green, the midpoint method; red, the exact solution y = e^t. The step size is h = 1.

    Leapfrog Method

Leapfrog leaves you waiting for more, a testament to the fact it performs so poor.

Good for differential equations of the form \ddot{x} = F(x), or equivalently \dot{v} = F(x), \dot{x} = v, particularly in the case of a dynamical system of classical mechanics. Such problems often take the form

\ddot{x} = -\nabla V(x)

Leapfrog integration is equivalent to updating positions x(t) and velocities v(t) = \dot{x}(t) at interleaved time points, staggered in such a way that they "leapfrog" over each other. For example, the position is updated at integer time steps and the velocity is updated at integer-plus-a-half time steps.

Leapfrog integration is a second-order method, in contrast to Euler integration, which is only first order, yet requires the same number of function evaluations per step. Unlike Euler integration, it is stable for oscillatory motion, as long as the time-step \Delta t is constant and \Delta t \le 2/\omega.

Why it's useful

It is time-reversible: one can integrate forward n steps, and then reverse the direction of integration and integrate backwards n steps to arrive at the same starting position.

It has a symplectic nature, which implies that it conserves the (slightly modified) energy of dynamical systems. This is especially useful when computing orbital dynamics, as other integration schemes, such as the Runge-Kutta method, do not conserve energy and allow the system to drift substantially over time.
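In the common kick-drift-kick formulation (a sketch for x'' = F(x); names are illustrative):

```python
def leapfrog_step(x, v, dt, F):
    """One kick-drift-kick leapfrog step for x'' = F(x).
    The velocity is advanced in two half-steps, staggered around the
    position update; the scheme is second order and time-reversible."""
    v_half = v + 0.5 * dt * F(x)             # half kick
    x_new  = x + dt * v_half                 # drift
    v_new  = v_half + 0.5 * dt * F(x_new)    # half kick
    return x_new, v_new

# Example: harmonic oscillator x'' = -x (omega = 1), so dt <= 2 is required.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = leapfrog_step(x, v, 0.1, lambda x: -x)
# The energy E = (v^2 + x^2)/2 stays close to 0.5 instead of drifting.
```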

    Verlet Integration

Verlet integration is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular-dynamics simulations and computer graphics.

The Verlet integrator offers greater stability, as well as other properties that are important in physical systems, such as time-reversibility and preservation of the symplectic form on phase space, at no significant additional cost over the simple Euler method.


    RK4 in pictorial form.

Error The local error in position of the Verlet integrator is O(\Delta t^4), and the local error in velocity is O(\Delta t^2). The global error in position, in contrast, is O(\Delta t^2), and the global error in velocity is also O(\Delta t^2). Read more about Verlet: https://en.wikipedia.org/wiki/Verlet_integration
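The basic position-Verlet update is x_{n+1} = 2x_n - x_{n-1} + a(x_n)\Delta t^2; a sketch of that form (illustrative names; the first step is bootstrapped with a Taylor expansion):

```python
def verlet(x0, v0, dt, a, n):
    """Integrate x'' = a(x) for n steps with position (Stoermer) Verlet.
    After the Taylor-expansion start-up step, each step needs only one
    force evaluation: x_{n+1} = 2 x_n - x_{n-1} + a(x_n) dt^2."""
    x_prev = x0
    x = x0 + v0 * dt + 0.5 * a(x0) * dt * dt   # start-up step
    for _ in range(n - 1):
        x_prev, x = x, 2 * x - x_prev + a(x) * dt * dt
    return x

# Example: x'' = -x with x(0) = 1, v(0) = 0 stays bounded over many periods.
x_end = verlet(1.0, 0.0, 0.05, lambda x: -x, 10000)
```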

    Numerov

Numerov's algorithm uses Taylor-expansion ideas and the particular structure of the ODE in question. It is good for equations of the form y''(x) + k(x)y(x) = 0. Equations of such a form include manipulations of the time-independent Schrödinger equation, for example.

\left(1 + \tfrac{1}{12}h^2 k_{n+1}\right)y_{n+1} = 2\left(1 - \tfrac{5}{12}h^2 k_n\right)y_n - \left(1 + \tfrac{1}{12}h^2 k_{n-1}\right)y_{n-1} + O(h^6) \qquad (5)

The error term also makes clear that it provides 6th-order accuracy per step.
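Rearranged for y_{n+1}, equation (5) becomes a three-term recurrence that can be marched directly; a sketch (illustrative names, assuming a callable k(x) and two seed values):

```python
import math

def numerov(y0, y1, xs, h, k):
    """March the Numerov recurrence for y'' + k(x) y = 0 along the
    equally spaced grid xs; y0, y1 seed the three-term recurrence."""
    ys = [y0, y1]
    c = h * h / 12.0
    for i in range(1, len(xs) - 1):
        km, k0, kp = k(xs[i-1]), k(xs[i]), k(xs[i+1])
        y_next = (2.0 * (1.0 - 5.0*c*k0) * ys[i]
                  - (1.0 + c*km) * ys[i-1]) / (1.0 + c*kp)
        ys.append(y_next)
    return ys

# Example: y'' + y = 0 (k = 1) has solution y = sin(x); seed with exact values.
h = 0.1
xs = [i * h for i in range(100)]
ys = numerov(math.sin(xs[0]), math.sin(xs[1]), xs, h, lambda x: 1.0)
# ys[-1] tracks sin(xs[-1]) very closely thanks to the O(h^6) local error.
```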

    Runge-Kutta

def rk4(x, h, y, f):
    k1 = h * f(x, y)
    k2 = h * f(x + 0.5*h, y + 0.5*k1)
    k3 = h * f(x + 0.5*h, y + 0.5*k2)
    k4 = h * f(x + h, y + k3)
    return x + h, y + (k1 + 2*(k2 + k3) + k4)/6.0

Here y_{n+1} is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation.

k_1 is the increment based on the slope at the beginning of the interval, using y (Euler's method);

k_2 is the increment based on the slope at the midpoint of the interval, using y + \tfrac{1}{2}h k_1;


k_3 is again the increment based on the slope at the midpoint, but now using y + \tfrac{1}{2}h k_2;

k_4 is the increment based on the slope at the end of the interval, using y + h k_3.

In averaging the four increments, greater weight is given to the increments at the midpoint. The weights are chosen such that if f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.

The RK4 method is a fourth-order method, meaning that the error per step is on the order of O(h^5), while the total accumulated error has order O(h^4).
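A quick, self-contained way to see the fourth-order behaviour (restating the step above for completeness; the test problem y' = y is an assumption of this sketch):

```python
import math

def rk4_step(x, h, y, f):
    """One classical RK4 step, as in the routine above."""
    k1 = h * f(x, y)
    k2 = h * f(x + 0.5*h, y + 0.5*k1)
    k3 = h * f(x + 0.5*h, y + 0.5*k2)
    k4 = h * f(x + h, y + k3)
    return x + h, y + (k1 + 2*(k2 + k3) + k4)/6.0

def solve(n):
    """Integrate y' = y from x = 0 to 1 in n steps; the exact value is e."""
    x, y = 0.0, 1.0
    for _ in range(n):
        x, y = rk4_step(x, 1.0/n, y, lambda x, y: y)
    return y

err50  = abs(solve(50)  - math.e)
err100 = abs(solve(100) - math.e)
ratio = err50 / err100   # halving h should cut the global error ~16-fold
```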

    Implicit

Unfortunately, explicit Runge-Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small. This issue is especially important in the solution of partial differential equations.

The instability of explicit Runge-Kutta methods motivates the development of implicit methods. An implicit Runge-Kutta method has the form

y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i,

where

k_i = h\,f\left(t_n + c_i h,\; y_n + \sum_{j=1}^{s} a_{ij} k_j\right), \quad i = 1,\dots,s.

The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably.
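The simplest member of the family is the implicit midpoint rule (s = 1, a_{11} = 1/2, b_1 = 1, c_1 = 1/2); a sketch that solves the single stage equation by fixed-point iteration (Newton's method would be the robust choice for genuinely stiff systems):

```python
def implicit_midpoint_step(t, h, y, f, iters=50):
    """One implicit-midpoint step: solve the stage equation
    k = h*f(t + h/2, y + k/2) for k, then update y_{n+1} = y_n + k."""
    k = h * f(t, y)                      # explicit guess
    for _ in range(iters):
        k = h * f(t + 0.5*h, y + 0.5*k)  # converges if |h * df/dy / 2| < 1
    return t + h, y + k

# Example: the decay y' = -5y, y(0) = 1, with h = 0.1.
t, y = implicit_midpoint_step(0.0, 0.1, 1.0, lambda t, y: -5.0 * y)
# The exact implicit-midpoint value is (1 - 5h/2)/(1 + 5h/2) = 0.6.
```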

Crank-Nicolson...

    ...combines two methods:

    Explicit...

...given by:

\frac{1}{\Delta t}\left(u_i^{n+1} - u_i^n\right) = \frac{D}{(\Delta x)^2}\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right) \qquad (6)

    Implicit...

...given by:

\frac{1}{\Delta t}\left(u_i^{n+1} - u_i^n\right) = \frac{D}{(\Delta x)^2}\left(u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}\right) \qquad (7)

Giving us:

\frac{1}{\Delta t}\left(u_i^{n+1} - u_i^n\right) = \frac{D}{2(\Delta x)^2}\left(\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right) + \left(u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}\right)\right) \qquad (8)

In order to use Crank-Nicolson, a grid is needed.
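A sketch of one Crank-Nicolson step for equation (8) on such a grid, with the end values held fixed (Dirichlet boundaries) and the tridiagonal system solved by the Thomas algorithm (names are illustrative):

```python
def crank_nicolson_step(u, D, dt, dx):
    """Advance u_t = D u_xx one step with Crank-Nicolson, eq. (8).
    Boundary values u[0], u[-1] are held fixed (Dirichlet)."""
    r = D * dt / (2.0 * dx * dx)
    n = len(u)
    # Right-hand side: the explicit half of eq. (8).
    rhs = ([u[0]]
           + [u[i] + r*(u[i+1] - 2*u[i] + u[i-1]) for i in range(1, n-1)]
           + [u[-1]])
    # Interior rows: -r u_{i-1} + (1+2r) u_i - r u_{i+1} = rhs_i.
    a, b, c = -r, 1.0 + 2.0*r, -r
    cp = [0.0]*n
    dp = [0.0]*n
    cp[0], dp[0] = 0.0, rhs[0]           # boundary row: u_0 = rhs_0
    for i in range(1, n-1):              # Thomas forward sweep
        m = b - a*cp[i-1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a*dp[i-1]) / m
    dp[n-1] = rhs[n-1]                   # boundary row: u_{n-1} = rhs_{n-1}
    out = [0.0]*n
    out[n-1] = dp[n-1]
    for i in range(n-2, -1, -1):         # back substitution
        out[i] = dp[i] - cp[i]*out[i+1]
    return out

# Example: diffuse a spike; Crank-Nicolson stays stable even for large dt.
u = [0.0]*21
u[10] = 1.0
for _ in range(10):
    u = crank_nicolson_step(u, D=1.0, dt=0.5, dx=0.1)
```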


    Error Propagation

Well, to be quite frank, a brief review of the literature was incredibly boring and yielded only confusion. It seems no-one knows how to propagate errors with Runge-Kutta [2], and nobody uses Euler, so it is not worth mentioning again. Anyway, most of the error seems to come from numerical (floating-point) rather than algorithmic imprecision, at least when using a decent algorithm with a reasonable step size.

The solution provided by Jakob is comparing the relative change in a conserved quantity, such as E, throughout the calculation. Obviously, in order for the calculation to be useful, the relative errors have to be much less than one. If the precision of the calculation approaches the numerical capability of the computer system, either get a better one or up the order of the algorithm. It seems most astro guys wouldn't dare stoop lower than RK8.³

Truncation Error Truncation errors in numerical integration are of two kinds:

local truncation errors: the error caused by one iteration, and

global truncation errors: the cumulative error caused by many iterations.

    Application: The Two-Body Problem

Newton's EoM:

\ddot{\mathbf{r}} = -\frac{GM}{r^2}\,\frac{\mathbf{r}}{r} \qquad (9)

    Decompose:

\dot{\mathbf{r}} = \mathbf{v}; \qquad \dot{\mathbf{v}} = -\frac{GM}{r^2}\,\frac{\mathbf{r}}{r}

The angular momentum vector j and the Runge-Lenz vector e are constant, i.e. de/dt = 0, etc. Defining an angle f we get

r(f) = \frac{j^2/GM}{1 + e\cos f} \qquad (10)

which is the well-known solution: a conic section within the orbital plane.

Transform to dimensionless variables s = r/R_0, w = v/V_0, V_0 = \sqrt{GM/R_0}, \tau = t/T_0, T_0 = \sqrt{R_0^3/GM}, with R_0 the initial separation. Since the solution is known to be an ellipse around the coordinate centre, these equations can be used to test numerical integration:

\frac{d\mathbf{s}}{d\tau} = \mathbf{w}; \qquad \frac{d\mathbf{w}}{d\tau} = -\frac{\mathbf{s}}{s^3} \qquad (11)
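These dimensionless equations drop straight into, for example, a kick-drift-kick leapfrog loop; a 2D sketch (illustrative names) that monitors the conserved energy E = w^2/2 - 1/s:

```python
import math

def accel(s):
    """Dimensionless two-body acceleration, dw/dtau = -s/|s|^3."""
    r3 = (s[0]*s[0] + s[1]*s[1]) ** 1.5
    return (-s[0]/r3, -s[1]/r3)

def energy(s, w):
    """Specific orbital energy in the dimensionless units."""
    return 0.5*(w[0]*w[0] + w[1]*w[1]) - 1.0/math.hypot(s[0], s[1])

# Circular orbit: s = (1, 0), w = (0, 1); integrate with kick-drift-kick leapfrog.
s, w, dt = (1.0, 0.0), (0.0, 1.0), 0.01
E0 = energy(s, w)                                    # E0 = -0.5 for this orbit
for _ in range(10000):
    a = accel(s)
    w = (w[0] + 0.5*dt*a[0], w[1] + 0.5*dt*a[1])     # half kick
    s = (s[0] + dt*w[0], s[1] + dt*w[1])             # drift
    a = accel(s)
    w = (w[0] + 0.5*dt*a[0], w[1] + 0.5*dt*a[1])     # half kick
drift = abs(energy(s, w) - E0)   # stays tiny thanks to the symplectic scheme
```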

    References

[1] E. Hairer. Achieving Brouwer's law with implicit Runge-Kutta methods.

[2] Spijker. Error Propagation in Runge-Kutta Methods. Applied Numerical Mathematics, 1996.

³ I had a reference for this but I can't figure out the password for Madels Butze wifi. EDIT: "Methods of arbitrarily high order are available; for efficiency reasons it is important to use high order methods (order 8 and higher) for computations close to machine accuracy. For quadruple precision a much higher order of the methods is recommended." [1]


http://dx.doi.org/10.1016/S0168-9274(96)00040-2