Western Kentucky University
TopSCHOLAR®
Masters Theses & Specialist Projects, Graduate School
Fall 2017

Analysis and Implementation of Numerical Methods for Solving Ordinary Differential Equations
Muhammad Sohel Rana
Western Kentucky University, [email protected]

Follow this and additional works at: https://digitalcommons.wku.edu/theses
Part of the Numerical Analysis and Computation Commons, Ordinary Differential Equations and Applied Dynamics Commons, and the Partial Differential Equations Commons

This Thesis is brought to you for free and open access by TopSCHOLAR®. It has been accepted for inclusion in Masters Theses & Specialist Projects by an authorized administrator of TopSCHOLAR®. For more information, please contact [email protected].

Recommended Citation
Rana, Muhammad Sohel, "Analysis and Implementation of Numerical Methods for Solving Ordinary Differential Equations" (2017). Masters Theses & Specialist Projects. Paper 2053. https://digitalcommons.wku.edu/theses/2053




ANALYSIS AND IMPLEMENTATION OF NUMERICAL METHODS FORSOLVING ORDINARY DIFFERENTIAL EQUATIONS

A Thesis
Presented to
The Faculty of the Department of Mathematics
Western Kentucky University
Bowling Green, Kentucky

In Partial Fulfillment
Of the Requirements for the Degree
Master of Science

By
Muhammad Sohel Rana

December 2017


ACKNOWLEDGMENTS

At the very outset I would like to thank my advisor Dr. Mark Robinson for giving me a chance to work with him on my thesis and for his constant supervision until the work was finished. I am also grateful for his patience during the work. I am especially indebted to him for his support, help and guidance in the course of preparing this paper. I also want to give special thanks to my other two thesis committee members, Dr. Ferhan Atici and Dr. Ngoc Nguyen, for their cooperation. It is also worth mentioning all my supportive and inspiring friends for their inspiration during the hard times. Finally, I would like to mention my beloved parents and other family members for their enthusiastic support and advice to move forward in life.


CONTENTS

List of Figures

List of Tables

ABSTRACT

Chapter 1. INTRODUCTION
1.1. Differential Equations and Initial Value Problems
1.2. Numerical Difference Methods
1.3. Order and Truncation Error of a Numerical Method

Chapter 2. Stiffness and Stability
2.1. Stiffness
2.2. Stability

Chapter 3. Some Special Problems and Their Numerical Solutions
3.1. Nonhomogeneous Differential Equations
3.2. Logistic Differential Equation
3.3. Systems of Differential Equations
3.4. Predator-Prey Problem
3.5. Harmonic Oscillator
3.6. Conditions Under Which Newton Iteration Converges When Applied for Backward Euler or Trapezoidal Method

Chapter 4. Discretization of Partial Differential Equations
4.1. Difference Formulas and Other Preliminaries
4.2. Stiff Differential Systems in Some Applications
4.3. Nonhomogeneous Heat Equation

Chapter 5. Experimental Estimation of the Order of Numerical Methods
5.1. Error Analysis
5.2. Handling Order in Practice

Chapter 6. Numerical Approximation for Second Order Singular Differential Equations
6.1. Lane-Emden Equation and Series Solutions
6.2. Numerical Results for Second Order Singular Differential Equations

Chapter 7. Conclusion and Future Work

Appendices

BIBLIOGRAPHY

List of Figures

2.2.1 Stability region of Euler’s method (nonshaded region)
2.2.2 Stability region of backward Euler method (nonshaded region)
2.2.3 Stability region of Trapezoidal method (nonshaded region)
2.2.4 Stability region of modified Euler method (nonshaded region)
3.1.1 Graph of exact solution of y′ = −100(y − e^t) with y(0) = 0.
3.1.2 Graph of the solution using Trapezoidal method (oscillating curve) with step size h = 0.1 together with exact solution of y′ = −100(y − e^t) with y(0) = 0.
3.1.3 Graph of exact solution (dark curve) and solution using backward Euler method (light curve) for h = 0.1 of y′ = −100(y − e^t) with y(0) = 0.
3.1.4 Approximate solution for Example 3.1.1 using backward Euler method (smoother curve) and Trapezoidal method (oscillating curve) for step size h = 0.1.
3.2.1 Figure shows no stiffness for λ = 1.
3.2.2 Figure shows that the differential equation starts to become stiff for λ = 5.
3.2.3 Figure shows that the stiffness increases for the differential equation when λ = 15.
3.2.4 Figure shows that the differential equation is very stiff when λ = 50.
3.2.5 Results from the use of the Trapezoidal method for numerical solution using λ = 1, 5, 15, 50 for h = 0.25. Gray level decreases as the value of λ increases.
3.2.6 Results of the logistic equation for different values of λ = 1, 5, 15, 50 using the StiffnessSwitching command in Mathematica. Gray level decreases as the value of λ increases.
3.2.7 Step size h versus t using StiffnessSwitching for the logistic equation for λ = 1 (dashed curve) and λ = 50 (solid curve).
3.4.1 Using the explicit Euler method for the Lotka-Volterra problem (3.9) with step size h = 0.001; dark graph for predator and light one for prey, for initial conditions x1(0) = 7940 and x2(0) = 1.
3.4.2 Phase space plot where the horizontal axis represents number of prey and the vertical axis number of predators for problem (3.9), using the implicit solution of the differential equation by Mathematica ContourPlot.
3.4.3 Numerical solution using backward Euler method, horizontal axis for prey and vertical axis for predator, with step size h = 0.001.
3.4.4 Using the implicit Trapezoidal method (with the codes developed for a system of two differential equations), the plot of the numeric discrete solution for the Lotka-Volterra problem (3.9) with step size h = 0.05 for initial conditions x1(0) = 7940 and x2(0) = 1.
3.4.5 Using the implicit Trapezoidal method (with the codes developed for a system of two differential equations), the plot of the numeric discrete solution for the Lotka-Volterra equation (3.9) with step size h = 0.05 for initial conditions x1(0) = 7940 and x2(0) = 1.
3.5.1 Plot of exact solution y(t) = cos(ωt) of the second order differential equation (3.10) for ω = 20.
3.5.2 Plot of numeric solution of the converted system of equations of the second order differential equation (3.10) using the Trapezoidal method for a system of two equations for ω = 20 and step size h = 0.005.
3.5.3 Combined plot of exact solution and numeric solution using the trapezoidal method of the second order ODE (3.10) for ω = 20 and step size h = 0.005.
3.5.4 Plot of exact solution and numeric solution using the trapezoidal method of the second order ODE (3.10) for ω = 20 and step size h = 0.035.
4.2.1 Circles centered at −2/h², 2b0/(a0h) − 2/h², and −2b1/(a1h) − 2/h² on the real axis, each having radius 2/h², for h = 1, b0 = 12, a0 = 4, a1 = 8 and b1 = 1.
5.1.1 Plot of claimed E(h) versus h for backward Euler method, implicit trapezoidal method, and Runge-Kutta method.
5.1.2 Plot of actual error E(h) versus h for backward Euler method, implicit trapezoidal method, and Runge-Kutta method for the equation in Example 5.1.1.
5.2.1 Plot of E(h) versus h when computational finiteness contributes to the error of the methods.

List of Tables

5.1.1 Approximation wh(b) and actual error |E(h)| = |y(b) − wh(b)| for y′(t) = 5e^{5t}(y − t)² + 1
5.1.2 Order of error p = (ln|E(2h)| − ln|E(h)|)/ln 2 for y′(t) = 5e^{5t}(y − t)² + 1
5.2.1 Order p = (ln|D4h| − ln|D2h|)/ln 2 for y′(t) = t − y²
6.2.1 Order of error p = (ln|E(2h)| − ln|E(h)|)/ln 2 for y′′(t) + (2/t)y′(t) + y = 0, t ∈ (0, 1], with y(0) = 1, y′(0) = 0 and exact solution y(t) = (sin t)/t, for the Nystrom method.
6.2.2 Comparison of the backward Euler method and Nystrom method with Beech’s approximation for the Lane-Emden equation for n = 3 and h = 0.025.
6.2.3 Comparison of the backward Euler method and Nystrom method with Fowler and Hoyle’s approximation for the Lane-Emden equation for n = 1.5 and h = 0.025.

ANALYSIS AND IMPLEMENTATION OF NUMERICAL METHODS FOR

SOLVING ORDINARY DIFFERENTIAL EQUATIONS

Muhammad Sohel Rana December 2017 98 Pages

Directed by: Dr. Mark Robinson, Dr. Ferhan Atici and Dr. Ngoc Nguyen

Department of Mathematics Western Kentucky University

Numerical methods to solve initial value problems of differential equations progressed quite a bit in the last century. We give a brief summary of how useful numerical methods are for ordinary differential equations of first and higher order. In this thesis both computational and theoretical discussion of the application of numerical methods to differential equations takes place. The thesis consists of an investigation of various categories of numerical methods for the solution of ordinary differential equations, including the numerical solution of ordinary differential equations from a number of practical fields such as equations arising in population dynamics and astrophysics. It includes discussion of the advantages and disadvantages of implicit methods over explicit methods, the accuracy and stability of methods, and how the order of various methods can be approximated numerically. Also, semidiscretization of some partial differential equations and stiff systems which may arise from these semidiscretizations are examined.


CHAPTER 1

INTRODUCTION

1.1. Differential Equations and Initial Value Problems

Differential equations are very useful in science and engineering as model problems. They are ubiquitous in fields like economics, biology, business, health science, and social science as well. Mathematicians have developed many methods to solve differential equations. To solve a differential equation numerically we require a differential equation with an initial condition or boundary conditions; consequently, a differential equation with an initial condition is called an initial value problem (IVP) and one with boundary conditions is called a boundary value problem (BVP).

In this paper we will describe some methods to solve an initial value problem and compare them according to several criteria.

Example 1.1.1. Consider the simple first order problem

dy/dt = f(t, y), a ≤ t ≤ b, y(a) = α, (1.1)

where dy/dt means the rate of change in y with respect to time t and f(t, y) is a function of time t and y. Sometimes we will write dy/dt as y′, which is called the derivative of y with respect to time t. This is an initial value problem and the solution y(t) of this problem satisfies the initial condition y(a) = α.


1.2. Numerical Difference Methods

There are many difference methods to solve an initial value problem. All these methods fall into two categories: 1. explicit methods, 2. implicit methods. A method is explicit if the current approximation depends only on previously determined approximate values. When the method involves the current approximation on both sides of the equation, it is called an implicit method. Euler’s method is the most elementary method to solve a problem. The objective of Euler’s method is to obtain an approximate solution of the IVP (1.1). Using Euler’s method we cannot obtain a continuous approximation to the solution; instead, an approximation to the solution y will be generated at various values, called mesh points, in the interval [a, b]. Generally, mesh points are equally distributed throughout the interval [a, b]. Often in practice variable step-size methods are used.

By choosing a positive integer N we get mesh points

tn = a + nh, for each n = 0, 1, 2, ..., N,

where h = (b − a)/N = tn+1 − tn is called the step size. Now to derive Euler’s method we will use Taylor’s Theorem. Let us suppose that y(t), the unique solution of (1.1), is continuously twice differentiable on [a, b], so that for each n = 0, 1, 2, ..., N − 1,

y(tn+1) = y(tn) + (tn+1 − tn) y′(tn) + ((tn+1 − tn)²/2) y′′(ξn),

for tn < ξn < tn+1. Since h = tn+1 − tn, we can write

y(tn+1) = y(tn) + h y′(tn) + (h²/2) y′′(ξn).

As dy/dt = f(t, y), we obtain from the above equation

y(tn+1) = y(tn) + h f(tn, y(tn)) + (h²/2) y′′(ξn). (1.2)

Now considering wn, the approximate solution to the exact solution y(tn), i.e. wn ≈ y(tn), for n = 0, 1, 2, ..., N − 1, and deleting the remainder term, we get Euler’s method [7] as follows:

w0 = α, wn+1 = wn + h f(tn, wn), n = 0, 1, 2, ..., N − 1. (1.3)

Euler’s method is an explicit method.
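The update rule (1.3) translates directly into code. The following is a minimal Python sketch (the numerical experiments in this thesis use Mathematica; the function name and the test problem below are illustrative, not taken from the thesis):

```python
import math

def euler(f, a, b, alpha, N):
    """Approximate the solution of y' = f(t, y), y(a) = alpha, on [a, b]
    by Euler's method (1.3) with N equal steps of size h = (b - a)/N."""
    h = (b - a) / N
    t, w = a, alpha
    trajectory = [(t, w)]
    for n in range(N):
        w = w + h * f(t, w)        # w_{n+1} = w_n + h f(t_n, w_n)
        t = a + (n + 1) * h        # t_{n+1} = a + (n + 1) h
        trajectory.append((t, w))
    return trajectory

# Illustrative IVP: y' = y, y(0) = 1 on [0, 1], exact solution e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
t_end, w_end = approx[-1]
error = abs(w_end - math.e)   # shrinks roughly linearly in h (order 1)
```

Halving h roughly halves the error at t = b, consistent with the method having order 1.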

1.3. Order and Truncation Error of a Numerical Method

Numerical methods to solve a differential equation or a system of differential equations involve approximation of the solution, so an obvious part of the numerical solution is truncation error. There are two types of error: local truncation error and global truncation error. The local truncation error involves a specific step in the process of solution: it measures the amount at a specific step by which the exact solution to the differential equation fails to satisfy the difference equation defining the numerical method. The global error is the accumulation of the errors of all steps up to the current step.


Definition. The difference method

w0 = α, wn+1 = wn + h φ(tn, wn, h), for each n = 0, 1, . . . , N − 1,

has the local truncation error [7]

τn+1(h) = [y(tn+1) − (y(tn) + h φ(tn, y(tn), h))]/h = [y(tn+1) − y(tn)]/h − φ(tn, y(tn), h),

for each n = 0, 1, . . . , N − 1, where y(tn) and y(tn+1) denote the solution at tn and tn+1 respectively. For Euler’s method, from equation (1.2), we have

y(tn+1) = y(tn) + h y′(tn) + (h²/2) y′′(ξn),

so the local truncation error for Euler’s method in the (n+1)th step is

τn+1(h) = (h/2) y′′(ξn),

for some ξn ∈ (tn, tn+1). If |y′′(t)| is bounded on [a, b] by some constant M > 0 then we get

|τn+1(h)| ≤ (h/2) M,

which means that the local truncation error for Euler’s method is O(h).

Now let y(tn) denote the exact value of the solution of the differential equation and let wn be the approximation obtained by the difference method at the nth step; then the global error is defined as

En(h) = |wn − y(tn)|.

A method has order p if the local truncation error of the method is O(h^p). So, the order of Euler’s method is 1. If we use the above definition for the local truncation error then the global truncation error has the same order as the local truncation error.

1.3.1. Backward Euler Method. [7] The backward Euler method is an implicit method. For the IVP (1.1) the backward Euler method is defined by

w0 = α, wn+1 = wn + h f(tn+1, wn+1), n = 0, 1, 2, ..., N − 1. (1.4)

This method is also called the linearly implicit Euler method.
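Because wn+1 appears on both sides of (1.4), every backward Euler step requires solving an equation for wn+1. The sketch below is one illustrative way to do this in Python, using simple fixed-point iteration; this converges only when h·|∂f/∂y| < 1, so for genuinely stiff problems Newton iteration (discussed in Section 3.6) is used instead. Names and the test problem are assumptions for illustration:

```python
def backward_euler(f, a, b, alpha, N, fp_iters=50):
    """Backward Euler (1.4): w_{n+1} = w_n + h f(t_{n+1}, w_{n+1}).
    The implicit equation is solved here by fixed-point iteration."""
    h = (b - a) / N
    w = alpha
    ws = [w]
    for n in range(N):
        t_next = a + (n + 1) * h
        z = w                          # initial guess: previous value
        for _ in range(fp_iters):
            z = w + h * f(t_next, z)   # iterate z <- w_n + h f(t_{n+1}, z)
        w = z
        ws.append(w)
    return ws

# y' = -2y, y(0) = 1 with h = 0.1: each step solves w_{n+1} = w_n / 1.2,
# and the fixed-point iteration contracts with factor h * 2 = 0.2.
ws = backward_euler(lambda t, y: -2.0 * y, 0.0, 1.0, 1.0, 10)
```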

1.3.2. Midpoint method. [7] The midpoint method tries to improve the approximation obtained using Euler’s method. Euler’s method uses the derivative at the current solution point (tn, wn) to create a linear prediction of the solution at the next point (tn+1, wn+1). In practice this amounts to drawing the curve of the function using a sequence of line segments. Such a curve can never match the original solution exactly, but we can always improve it by using smaller line segments. The midpoint method also uses Euler’s method, but instead of using the full step size it takes a half step, samples the derivative there, and uses this value as the slope for the full step to get a better approximation at the same point.


The midpoint method has the following form:

w0 = α, wn+1 = wn + h f(tn + h/2, wn + (h/2) f(tn, wn)), (1.5)

for n = 0, 1, 2, ..., N − 1.

1.3.3. Implicit Trapezoidal method. The following method is called the Implicit Trapezoidal method [7]:

w0 = α, wn+1 = wn + (h/2)[f(tn, wn) + f(tn+1, wn+1)], (1.6)

where 0 ≤ n ≤ N − 1. It gets its name from the trapezoidal rule for numerical integration.

1.3.4. Modified Euler’s Method. The modified Euler’s method is the combination of Euler’s method and the Trapezoidal method: [7]

wn+1 = wn + h f(tn, wn),
wn+1 = wn + (h/2)(f(tn, wn) + f(tn+1, wn+1)). (1.7)

We can write (1.7), which is a combination of two methods, as a single method, equivalently,

wn+1 = wn + (h/2)[f(tn, wn) + f(tn + h, wn + h f(tn, wn))]. (1.8)

This is also called the explicit trapezoidal method. Since in the second equation of (1.7) wn+1 is not known, we use the first equation of system (1.7) to estimate wn+1 and


use this approximation in the second equation. So, it takes the form as follows:

w∗ = wn + h f(tn, wn),
wn+1 = wn + (h/2)(f(tn, wn) + f(tn+1, w∗)). (1.9)

This method is called a predictor-corrector method. [7] The first equation is called the predictor and the second equation is called the corrector. This method is also a Runge-Kutta method of 2nd order.
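The predictor-corrector form (1.9) can be sketched in Python as follows (function name and test problem are illustrative, not from the thesis):

```python
import math

def modified_euler(f, a, b, alpha, N):
    """Predictor-corrector form (1.9): Euler predicts w*, then the
    trapezoidal rule corrects using the slope at (t_{n+1}, w*)."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for n in range(N):
        t_next = a + (n + 1) * h
        w_star = w + h * f(t, w)                          # predictor
        w = w + (h / 2) * (f(t, w) + f(t_next, w_star))   # corrector
        t = t_next
        ws.append(w)
    return ws

# y' = y, y(0) = 1: since the method has order 2, halving h should cut
# the error at t = 1 by roughly a factor of four.
err_h = abs(modified_euler(lambda t, y: y, 0.0, 1.0, 1.0, 100)[-1] - math.e)
err_h2 = abs(modified_euler(lambda t, y: y, 0.0, 1.0, 1.0, 200)[-1] - math.e)
```

The ratio err_h / err_h2 landing near 4 is exactly the numerical order estimation revisited in Chapter 5.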

1.3.5. Runge-Kutta Methods. The family of Runge-Kutta methods consists of very important methods for the approximate solution of ordinary differential equations (ODEs). The German mathematicians C. Runge (1856-1927) and M.W. Kutta (1867-1944) developed Runge-Kutta methods in 1900. [9]

Consider the following ODE:

dy/dt = f(t, y), t ∈ [a, b]. (1.10)

Then a Runge-Kutta method is actually a sum of two tasks.

Task 1: Computation of s stage values:

Wi = wn + h Σ_{j=1}^{s} aij f(tn + cj h, Wj), (1.11)

for i = 1, 2, 3, ..., s.

Task 2: Computation of the solution at tn+1:

wn+1 = wn + h Σ_{i=1}^{s} bi f(tn + ci h, Wi). (1.12)

Each of these two tasks is carried out to calculate the solution at t1, t2, ....

Butcher’s Tableau: [8] The Butcher tableau in terms of the coefficients aij, bi, and ci for a Runge-Kutta method has the form:

c1 | a11 a12 ... a1s
c2 | a21 a22 ... a2s
.  | .   .   .   .
cs | as1 as2 ... ass
---+----------------
   | b1  b2  ... bs

(1.13)

i.e. the array (c | A; bT), where all the properties of Runge-Kutta methods depend on the entries A, b, c in the above tableau.

Example 1.3.1. The classical fourth order Runge-Kutta method is given by

w0 = α,
k1 = h f(tn, wn),
k2 = h f(tn + h/2, wn + (1/2) k1),
k3 = h f(tn + h/2, wn + (1/2) k2),
k4 = h f(tn+1, wn + k3),
wn+1 = wn + (1/6)(k1 + 2k2 + 2k3 + k4),

for each n = 0, 1, ..., N − 1. This method has truncation error O(h⁴), provided the solution y(t) is continuously five times differentiable.

The tableau form of the fourth order Runge-Kutta method is as follows:

0   | 0   0   0   0
1/2 | 1/2 0   0   0
1/2 | 0   1/2 0   0
1   | 0   0   1   0
----+----------------
    | 1/6 1/3 1/3 1/6

For the Modified Euler method

wn+1 = wn + (h/2)[f(tn, wn) + f(tn + h, wn + h f(tn, wn))],

which has the equivalent form

k1 = h f(tn, wn),
k2 = h f(tn + h, wn + k1),
wn+1 = wn + (1/2) k1 + (1/2) k2,

the tableau form of the method is:

0 | 0   0
1 | 1   0
--+--------
  | 1/2 1/2
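The tableau data (A, b, c) is enough to implement any explicit Runge-Kutta method; the sketch below follows (1.11)-(1.12), storing f-values as stages. The Python function name is illustrative (the thesis itself works in Mathematica):

```python
import math

def rk_step_explicit(f, t, w, h, A, b, c):
    """One step of an explicit Runge-Kutta method given its Butcher
    tableau (A strictly lower triangular), following (1.11)-(1.12)."""
    s = len(b)
    k = []
    for i in range(s):
        Wi = w + h * sum(A[i][j] * k[j] for j in range(i))  # stage value W_i
        k.append(f(t + c[i] * h, Wi))
    return w + h * sum(b[i] * k[i] for i in range(s))

# Classical fourth order Runge-Kutta tableau from Example 1.3.1.
A4 = [[0, 0, 0, 0],
      [1/2, 0, 0, 0],
      [0, 1/2, 0, 0],
      [0, 0, 1, 0]]
b4 = [1/6, 1/3, 1/3, 1/6]
c4 = [0, 1/2, 1/2, 1]

# One step of y' = y from y(0) = 1 with h = 0.1 reproduces e^{0.1}
# up to the O(h^5) local error.
w1 = rk_step_explicit(lambda t, y: y, 0.0, 1.0, 0.1, A4, b4, c4)
```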

1.3.6. Backward Differentiation Formula (BDF) Method. [2] A BDF method is a multi-step method, and a general linear s-step method for a differential equation has the following form:

Σ_{i=0}^{s} αi wn+1−i = h Σ_{i=0}^{s} βi fn+1−i, (1.14)


where n = 0, 1, 2, . . ., fn+1−i = f(tn+1−i, wn+1−i), and αi, βi are coefficients. We notice here that if wn+1 appears only on the left hand side then the method is said to be explicit, and we see that if β0 = 0 then the method is explicit. For a BDF method β0 ≠ 0 but all the other coefficients βi = 0 for i = 1, 2, 3, . . . , s. Now for s = 1, we have β0 = 1 and α0 = 1, α1 = −1 (all these values are given in [2]) and we obtain the backward Euler method: for s = 1 and fn+1 = f(tn+1, wn+1), equation (1.14) becomes

wn+1 − wn = h f(tn+1, wn+1)
⇒ wn+1 = wn + h f(tn+1, wn+1),

which is the backward Euler method. From [2], for s = 2, we have β0 = 2/3 and α0 = 1, α1 = −4/3 and α2 = 1/3, and we obtain the two-step BDF method as follows:

wn+1 − (4/3) wn + (1/3) wn−1 = (2/3) h f(tn+1, wn+1).

In the same fashion we can obtain higher order BDF methods. We note here that the higher the value of s, the higher the order of the method (as discussed in [2]).
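For the linear test equation y′ = λy the two-step BDF relation can be solved explicitly for wn+1. The Python sketch below is illustrative; starting the recursion with one backward Euler step is an assumption here, since any two-step method needs a second starting value:

```python
import math

def bdf2_linear(lam, alpha, h, N):
    """Two-step BDF applied to y' = lam*y:
    w_{n+1} - (4/3) w_n + (1/3) w_{n-1} = (2/3) h lam w_{n+1}.
    The first step uses backward Euler to generate w_1."""
    w_prev = alpha
    w_curr = alpha / (1 - h * lam)     # backward Euler starting step
    ws = [w_prev, w_curr]
    for n in range(1, N):
        w_next = ((4/3) * w_curr - (1/3) * w_prev) / (1 - (2/3) * h * lam)
        w_prev, w_curr = w_curr, w_next
        ws.append(w_curr)
    return ws

# y' = -y, y(0) = 1, h = 0.1: the values track e^{-t} with second-order
# accuracy (limited slightly by the first-order starting step).
ws = bdf2_linear(-1.0, 1.0, 0.1, 10)
```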

1.3.7. Adams Methods. Consider equation (1.14); for the family of Adams methods we set

α0 = 1, α1 = −1 and αj = 0, ∀j > 1.

Now, we categorize the family of Adams methods into two groups: i) Adams Bashforth methods and ii) Adams Moulton methods. All the s-step explicit Adams methods are Adams Bashforth methods and all s-step implicit Adams methods are called Adams Moulton methods. Therefore, for Adams Bashforth methods β0 = 0, which gives the family of explicit methods.

As Adams Bashforth methods are explicit methods, we interpolate f through all the previous points t = tn, tn−1, tn−2, . . . , tn+1−s, with accuracy of order p = s. For s = 1 we obtain

wn+1 = wn + h f(tn, wn),

which is the explicit or forward Euler method.

Now, for the family of implicit Adams methods we interpolate the function f through all the previous points plus the next iteration point, t = tn+1, tn, tn−1, tn−2, . . . , tn+1−s, and obtain a method with accuracy of order p = s + 1. For s = 1, we obtain two methods depending on the βj: if β0 = 1 and β1 = 0 then it gives us the backward Euler method, i.e.

wn+1 = wn + h f(tn+1, wn+1).

But β0 = β1 = 1/2 gives the implicit trapezoidal method, i.e.

wn+1 = wn + (h/2)(f(tn, wn) + f(tn+1, wn+1)).

We can obtain higher order Adams methods (both Bashforth and Moulton) using the coefficients βj for j = 0, 1, 2, . . . , 6 (see [2]).


1.3.8. The Modified Midpoint Method. The modified midpoint method is a multi-step explicit method. It is also called the leapfrog method. [2] The method has the following form:

wn+1 = wn−1 + 2h f(tn, wn). (1.15)
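Being a two-step method, (1.15) needs a second starting value; the Python sketch below uses a single Euler step to obtain it (an illustrative choice, not prescribed by the thesis):

```python
import math

def leapfrog(f, a, alpha, h, N):
    """Modified midpoint (leapfrog) method (1.15):
    w_{n+1} = w_{n-1} + 2h f(t_n, w_n), started with one Euler step."""
    w_prev = alpha
    w_curr = alpha + h * f(a, alpha)       # Euler starting step for w_1
    ws = [w_prev, w_curr]
    for n in range(1, N):
        t_n = a + n * h
        w_next = w_prev + 2 * h * f(t_n, w_curr)
        w_prev, w_curr = w_curr, w_next
        ws.append(w_curr)
    return ws

# y' = y, y(0) = 1 with h = 0.01 for 100 steps: ws[-1] approximates e.
ws = leapfrog(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

This is the "ExplicitModifiedMidpoint" scheme that Mathematica's StiffnessSwitching pairs with a stiff solver, as discussed in Section 2.1.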


CHAPTER 2

Stiffness and Stability

2.1. Stiffness

Stiff Equation: A differential equation of the form y′ = f(t, y) is said to be stiff if its exact solution y(t) includes a term that decays exponentially to zero as t increases, but whose derivatives are much greater in magnitude than the term itself. An example is a term e^{−at}, where a is a large positive constant. In a stiff equation all these exponential terms lead to rapid changes in the solution. [7]

Example 2.1.1. Consider the following initial value problem:

y′(t) = −30y(t), t ≥ 0, y(0) = 1.

The exact solution of the above IVP is y(t) = e^{−30t}, with y(t) → 0 as t → ∞. Therefore, it is a stiff equation.

If we use Euler’s method to solve this problem with step size h = 0.1, then we have

wn+1 = wn − 30h wn = −2wn, so that wn = (−2)^n,

which is an exponentially growing solution of the equation. But if we use step size h = 10^{−2}, we obtain the solution wn = (0.7)^n, which is much more accurate and behaves almost like the exact solution of the problem. So, a smaller step size gives


more accurate results for stiff problems. We will discuss more about stiff problems later.
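The two growth factors in Example 2.1.1 can be checked directly: for y′ = λy, Euler's method just multiplies by (1 + hλ) at every step. A small Python sketch (function name is illustrative):

```python
def euler_linear(lam, h, n_steps, w0=1.0):
    """Euler's method on y' = lam*y: w_{n+1} = (1 + h*lam) w_n,
    so w_n = (1 + h*lam)^n * w0."""
    w = w0
    for _ in range(n_steps):
        w = (1 + h * lam) * w
    return w

# h = 0.1: growth factor 1 + 0.1*(-30) = -2, so |w_n| = 2^n blows up.
w_unstable = euler_linear(-30.0, 0.1, 10)    # magnitude 2^10 = 1024
# h = 0.01: growth factor 0.7, so w_n = (0.7)^n decays like the
# exact solution e^{-30t}.
w_stable = euler_linear(-30.0, 0.01, 100)    # (0.7)^100, essentially zero
```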

2.1.1. Stiffness Ratio: Let us consider a nonhomogeneous system of linear constant coefficient differential equations

y′(t) = Ay(t) + f(t),

where y(t), f(t) ∈ Rⁿ and A is a constant n × n matrix. Let λi ∈ C be the eigenvalues of A and vi the corresponding eigenvectors for i = 1, 2, 3, . . . , n. If all the eigenvalues are distinct then the general solution of the system is as follows:

y(t) = Σ_{i=1}^{n} ci e^{λi t} vi + g(t),

where the ci are arbitrary constants and g(t) is a particular solution. We assume that the real parts of all eigenvalues are negative, i.e. Re(λi) < 0, i = 1, 2, ..., n. Since Re(λi) < 0, we have e^{λi t} vi → 0 as t → ∞ and y(t) → g(t). If the imaginary parts of the eigenvalues are zero then y(t) → g(t) monotonically, otherwise sinusoidally. Σ_{i=1}^{n} ci e^{λi t} vi and g(t) are called the transient solution and steady-state solution of the system. Depending on the magnitude of the real parts of the eigenvalues λi, the transient solution may decay slowly or very rapidly. For large |Re(λi)| the transient solution decays very fast and is called a fast transient; otherwise it is a slow transient. Now let us define λ̄, λ ∈ σ(A), where σ(A) denotes the spectrum (set of eigenvalues) of A, by

|Re(λ̄)| ≥ |Re(λi)| ≥ |Re(λ)| for i = 1, 2, ..., n;


then using λ̄ and λ we define the stiffness ratio as

|Re(λ̄)| / |Re(λ)|,

and we say a linear constant coefficient system is stiff if the stiffness ratio is large. [14]

When we are given a nonlinear system of ODEs, y′ = f(t, y), then the way of getting the eigenvalues is not the same as described above. In that case we find the Jacobian J = [∂fi/∂yj] for the system and find the eigenvalues of the obtained Jacobian.
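The stiffness ratio can be computed numerically from the eigenvalues of A (or of the Jacobian J in the nonlinear case); a sketch using NumPy, where the diagonal example matrix is an assumption chosen for illustration:

```python
import numpy as np

def stiffness_ratio(A):
    """max |Re(lambda)| / min |Re(lambda)| over the eigenvalues of A,
    assuming every eigenvalue has negative real part."""
    re = np.real(np.linalg.eigvals(np.asarray(A, dtype=float)))
    return np.max(np.abs(re)) / np.min(np.abs(re))

# Eigenvalues -1 and -1000 give ratio 1000, so the system is stiff:
# the e^{-1000 t} fast transient forces a tiny step size on explicit methods
# even though it contributes almost nothing to the solution.
ratio = stiffness_ratio(np.diag([-1.0, -1000.0]))
```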

Now, among the methods we will be using to solve differential equations, some are good for nonstiff problems and some are specialized for stiff problems. Generally, explicit methods work for nonstiff problems with less effort, but for stiff problems we need implicit methods. Normally, if we use explicit methods for stiff problems we need to use a very small step size, whereas for nonstiff problems a reasonably bigger step size is allowed even for explicit methods. Consequently, if we use a method with a small step size then the cost is high as well. To avoid this problem we use a combination of two methods, one of which is specialized for stiff problems while the other works for nonstiff problems with less expense. If a problem is stiff, then on the part of the interval where it is stiff the dedicated specialized method will be applied, and for the


nonstiff part the other method will be applied. Since the step size is not uniform throughout the interval of the problem, we say the method has variable step size. The two methods work together as follows: if the problem on the current portion of the interval is nonstiff, then the method which is good for nonstiff problems, with a relatively bigger step size, is used; when stiffness occurs, the scheme switches to the method which is specialized for stiff problems. Normally, explicit methods are used for the nonstiff part and implicit methods for the stiff part. To solve a stiff problem in our study we will use Mathematica [18] and its “StiffnessSwitching” [18] command. In fact, we will use the NDSolve feature of Mathematica. The “StiffnessSwitching” command in Mathematica automatically switches between a nonstiff and a stiff solver. In Mathematica the “StiffnessSwitching” method uses a pair of extrapolation methods as default. For the nonstiff solver it uses the “ExplicitModifiedMidpoint” method (Equation 1.15) and the stiff solver uses the “LinearlyImplicitEuler” method (Equation 1.4). We want to emphasize that “StiffnessSwitching” uses a variable step size and switches back and forth between the nonstiff solver and the stiff solver. We will use the “StiffnessSwitching” command later for different types of problems.

2.2. Stability

Consider the one step Euler’s method

w0 = α, wn+1 = wn + h f(tn, wn), n = 0, 1, 2, ..., N − 1, (2.1)


and the simple test equation

y′ = λy, y(0) = α, where λ < 0.

Using Euler’s method for this equation, we obtain

w0 = α, wn+1 = wn + h(λwn) = (1 + hλ)wn,

which implies

wn+1 = (1 + hλ)^{n+1} w0 = (1 + hλ)^{n+1} α.

In general, for all one step difference methods, when we apply the test equation a function Q exists which gives

wn+1 = Q(hλ) wn.

For z = hλ we write P(z) = Q(hλ), which is called the stability function of the method. [7]

Consider the test equation

dy/dt = λy(t), (2.2)

then using this test equation in Euler’s method (2.1) we have

wn+1 = wn + hλwn
⇒ wn+1 = (1 + hλ)wn
⇒ wn+1 = Q(hλ)wn.


Therefore, the stability function for Euler’s method is P(z) = 1 + z. For the backward Euler method, using the test equation (2.2) in (1.4) we get

wn+1 = wn + hλwn+1
⇒ wn+1(1 − hλ) = wn
⇒ wn+1 = wn/(1 − hλ).

So, P(z) = 1/(1 − z) is the stability function for the backward Euler method.

Definition. The region R of absolute stability for a single step method is R = {hλ ∈ C : |Q(hλ)| < 1}.
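Membership in the absolute stability region reduces to evaluating |Q(hλ)|. A Python sketch for the two stability functions derived above (function names are illustrative):

```python
def in_euler_region(z):
    """Euler's method: Q(h*lambda) = 1 + z, stable when |1 + z| < 1."""
    return abs(1 + z) < 1

def in_backward_euler_region(z):
    """Backward Euler: Q(h*lambda) = 1/(1 - z), stable when |1/(1 - z)| < 1."""
    return abs(1 / (1 - z)) < 1

# z = h*lambda = -3 (e.g. lambda = -30, h = 0.1, as in Example 2.1.1):
# outside Euler's region but inside backward Euler's region.
print(in_euler_region(-3 + 0j))           # False
print(in_backward_euler_region(-3 + 0j))  # True
```

This matches the circles described in Example 2.2.1: |1 + z| < 1 is the open disk of radius 1 centered at (−1, 0), and |1 − z| > 1 is the exterior of the disk of radius 1 centered at (1, 0).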

Example 2.2.1. In the following figures we observe that the absolute stability region of Euler’s method is a circle of radius 1 centered at (−1, 0) [Figure 2.2.1], but the backward Euler method is the opposite: the stability region for this method is the outer region of the circle of radius 1 centered at (1, 0) [Figure 2.2.2]. We notice that in Figure 2.2.1 the center of the circle is marked by a rectangular shape, whereas in Figure 2.2.2 for the backward Euler method it is indicated by a diamond shape. The rectangular shape represents the zero of the stability function P(z) and the diamond shape represents the pole of P(z). Note that graphs of stability regions here and subsequently are obtained using Mathematica’s OrderStarPlot command. In the next part of this section we will discuss A-stability of numerical difference methods, and we use the stability region to define A-stability.

Definition. A numerical difference method is A-stable if its absolute stability region R contains the entire left half plane of the complex plane. [7]


Figure 2.2.1. Stability region of Euler’s method (non shaded region)

Figure 2.2.2. Stability region of backward Euler method (nonshaded region)

In the above mentioned figures [Figure 2.2.1 and Figure 2.2.2] we see that the backward Euler method is A-stable but Euler’s method is not an A-stable method. The Implicit Trapezoidal method, given by

w0 = α, wn+1 = wn + (h/2)[f(tn, wn) + f(tn+1, wn+1)], (2.3)

where 0 ≤ n ≤ N − 1, is an A-stable method, and this is the only A-stable multistep method. In Figure 2.2.3 the entire left half plane (nonshaded) shows the absolute stability region for the Trapezoidal method. No explicit difference method is A-stable. No multistep implicit difference method of order higher than 2 is A-stable.


Figure 2.2.3. Stability region of Trapezoidal method (nonshaded region)

Definition. [2] A numerical method is called L-stable if

lim_{Re(z)→−∞} P(z) = 0.

Figure 2.2.4. Stability region of modified Euler method (nonshaded region)

Example 2.2.2. For the forward Euler method (1.3) (Euler's method), P(z) = 1 + z; for the backward Euler method (1.4), P(z) = 1/(1 − z); and for the Trapezoidal method (2.3), P(z) = (1 + z/2)/(1 − z/2). It is easy to verify that P(−∞) = 0 for the backward Euler method, whereas P(−∞) = −∞ for the forward Euler method and P(−∞) = −1 for the Trapezoidal method,


and so the backward Euler method is L-stable, while the forward Euler and Trapezoidal methods are not.
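These limits can be confirmed numerically. The sketch below (a Python illustration, not the thesis's Mathematica code) evaluates each stability function at a large negative real argument.

```python
# Numerically check the L-stability limits P(-∞) for the three methods by
# evaluating each stability function at a large negative real z.

def P_forward(z):  return 1 + z                      # forward Euler
def P_backward(z): return 1 / (1 - z)                # backward Euler
def P_trap(z):     return (1 + z / 2) / (1 - z / 2)  # Trapezoidal

z = -1e8
print(abs(P_backward(z)))   # tiny: tends to 0, so backward Euler is L-stable
print(P_forward(z))         # very large in magnitude: tends to -infinity
print(P_trap(z))            # close to -1: tends to -1, not 0
```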

Example 2.2.3. For the modified Euler method (1.7), P(z) = 1 + z + z²/2, and so the modified Euler method is not L-stable, since P(−∞) = ∞. From Figure 2.2.4 we can easily conclude that the absolute stability region does not contain the entire left half plane, which implies the modified Euler method is not an A-stable method.


CHAPTER 3

Some Special Problems and Their Numerical Solutions

3.1. Nonhomogeneous Differential Equations

The following theorem is similar to a result stated in Numerical Methods for Evolutionary Differential Equations by Uri M. Ascher. [2]

Theorem 3.1.1. Consider a nonhomogeneous test equation of the form

y′ = λ(y − g(t)), 0 < t < b,   (3.1)

where the function g(t) is bounded and g ∈ C¹[0, b]. As Re(λ) → −∞, the exact solution of (3.1) satisfies

y(t) → g(t), 0 < t < b,

regardless of the initial value y_0.

Proof. From equation (3.1), we are given the nonhomogeneous equation dy/dt = λ(y − g(t)). Therefore,

dy/dt − λy = −λg(t).

The complementary solution of the equation is

y_c(t) = Ce^{λt}.


Let us suppose that y_p = u_1(t)y_1(t) is a particular solution of the differential equation, where y_1(t) = e^{λt}. Then

y′_p = u_1(t)y′_1(t) + u′_1(t)y_1(t).

Using y′_p in (3.1) we get

u′_1(t)y_1(t) + u_1(t)y′_1(t) − λu_1(t)y_1(t) = −λg(t)
⇒ u′_1(t)y_1(t) + λe^{λt}u_1(t) − λu_1(t)y_1(t) = −λg(t)
⇒ u′_1(t)y_1(t) = −λg(t)
⇒ u′_1(t) = −λg(t)/y_1(t) = −λg(t)/e^{λt}.

For t ∈ (0, b), using the fundamental theorem of calculus we can write

u_1(t) − u_1(0) = ∫_0^t u′_1(s) ds
= ∫_0^t −λg(s)e^{−λs} ds
= −λ ∫_0^t g(s)e^{−λs} ds
= e^{−λs}g(s)|_0^t − ∫_0^t g′(s)e^{−λs} ds   [integration by parts]
= e^{−λt}g(t) − g(0) − ∫_0^t g′(s)e^{−λs} ds.


For some c in the interval [0, t], using the weighted mean value theorem for integrals on the last term we get

∫_0^t g′(s)e^{−λs} ds = g′(c) ∫_0^t e^{−λs} ds.

Therefore,

u_1(t) − u_1(0) = e^{−λt}g(t) − g(0) − g′(c) ∫_0^t e^{−λs} ds
= e^{−λt}g(t) − g(0) − g′(c) [e^{−λs}/(−λ)]_0^t
= e^{−λt}g(t) − g(0) + g′(c)(e^{−λt} − 1)/λ.

The general solution of the differential equation is

y(t) = y_c(t) + y_p(t)
= Ce^{λt} + e^{λt}(e^{−λt}g(t) − g(0) + g′(c)(e^{−λt} − 1)/λ + u_1(0))
= Ce^{λt} + g(t) − e^{λt}g(0) + g′(c)/λ − e^{λt}g′(c)/λ + u_1(0)e^{λt}.

As Re(λ) → −∞, we have e^{λt} → 0 and g′(c)/λ → 0, which implies

y(t) → g(t).

This completes the proof. □

Definition. [2] We say a method has stiff decay if, for 0 < t_n < b fixed,

|w_n − g(t_n)| → 0 as hRe(λ) → −∞.   (3.2)

Example 3.1.1. Consider the IVP

dy/dt = −100(y − e^t),  y(0) = 0.   (3.3)

The exact solution of the IVP is y(t) = (100/101)e^{−100t}(−1 + e^{101t}). Since λ = −100 is large in magnitude, the exact solution satisfies y(t) ≈ (100/101)e^t ≈ e^t as t increases.

Figure 3.1.1 shows the exact solution; Figure 3.1.2 shows the solution obtained using the Trapezoidal method together with the exact solution; and Figure 3.1.3 shows the solution obtained using the backward Euler method together with the exact solution. Figure 3.1.4 shows the solutions obtained by the Trapezoidal method and the backward Euler method.

Figure 3.1.1. Graph of exact solution of y′ = −100(y − e^t) with y(0) = 0.

Figure 3.1.2. Graph of the solution using the Trapezoidal method (oscillating curve) with step size h = 0.1, together with the exact solution of y′ = −100(y − e^t) with y(0) = 0.


Figure 3.1.3. Graph of the exact solution (dark curve) and the solution using the backward Euler method (light curve) for h = 0.1 of y′ = −100(y − e^t) with y(0) = 0.

Figure 3.1.4. Approximate solution for Example 3.1.1 using the backward Euler method (smoother curve) and the Trapezoidal method (oscillating curve) for step size h = 0.1.

We know that the Trapezoidal method is A-stable but not L-stable. From Figure 3.1.2 we observe that the solution from the Trapezoidal method starts with an oscillation, so we can say that the Trapezoidal method does not work well for this problem. But comparing the exact solution and the numerical solution using the backward Euler method in Figure 3.1.3, we see that it does work quite well for equation (3.3). The backward Euler method is both A-stable and L-stable. We often say that an L-stable method has stiff decay, whereas an A-stable method does not necessarily have stiff decay. This is the important difference between the two methods. Therefore, the backward Euler method is very suitable for very stiff problems.
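The behavior seen in Figures 3.1.2-3.1.4 can be reproduced in a few lines. The following Python sketch (an illustration assuming h = 0.1, as in the figures; the thesis's own experiments use Mathematica) applies both implicit methods to (3.3), solving each linear update in closed form, and records the errors against the exact solution.

```python
# Compare backward Euler and the Trapezoidal method on the stiff problem
# y' = -100(y - e^t), y(0) = 0, integrated to t = 2 with h = 0.1.
import math

def exact(t):
    return 100 / 101 * (math.exp(t) - math.exp(-100 * t))

h, N = 0.1, 20
be = tr = 0.0
t = 0.0
err_be, err_tr = [], []
for _ in range(N):
    tn1 = t + h
    # backward Euler: (1 + 100h) w_{n+1} = w_n + 100h e^{t_{n+1}}
    be = (be + 100 * h * math.exp(tn1)) / (1 + 100 * h)
    # Trapezoidal: (1 + 50h) w_{n+1} = (1 - 50h) w_n + 50h (e^{t_n} + e^{t_{n+1}})
    tr = ((1 - 50 * h) * tr + 50 * h * (math.exp(t) + math.exp(tn1))) / (1 + 50 * h)
    t = tn1
    err_be.append(abs(be - exact(t)))
    err_tr.append(abs(tr - exact(t)))

# backward Euler stays close throughout; the Trapezoidal method overshoots
# badly on the first few (oscillating) steps, as in Figure 3.1.2
print(max(err_be), max(err_tr))
```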


3.2. Logistic Differential Equation

Consider the logistic differential equation

dy/dt = λy(1 − y),  y(0) = α,  λ > 0,   (3.4)

where y(t) describes the size of a population at time t. The parameter λ is called the growth parameter. Equation (3.4) is separable, so we can use the method of separation of variables to solve it:

dy/dt = λy(1 − y)
⇒ dy/(y(1 − y)) = λ dt
⇒ dy/y + dy/(1 − y) = λ dt
⇒ ln(y) − ln(1 − y) = λt + C_1
⇒ ln(y/(1 − y)) = λt + C_1
⇒ y/(1 − y) = Ce^{λt}   [putting C = e^{C_1}]
⇒ y = Ce^{λt}(1 − y)
⇒ y + Cye^{λt} = Ce^{λt}
⇒ y = Ce^{λt}/(1 + Ce^{λt})


Using the initial condition y(0) = α, we get

α = C/(1 + C)
⇒ α(1 + C) = C
⇒ α = C(1 − α)
⇒ C = α/(1 − α).

Therefore, the solution of the differential equation is

y = α/(α + (1 − α)e^{−λt}).   (3.5)

Here, y(0) = α is the initial population. It is obvious from the solution that the stiffness of the equation depends on λ. For any λ > 0, the solution y(t) → 1 as t → ∞, but this happens much more rapidly if λ is large: even for small changes in t, y increases sharply and approaches 1. Hence, for large values of λ the logistic equation is considered a stiff equation.
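A quick numerical illustration (a Python sketch, not from the thesis) of how fast the exact solution (3.5) approaches 1 for different growth parameters, taking α = 0.5:

```python
# Evaluate the exact logistic solution y(t) = α / (α + (1 - α) e^{-λt})
# at a small time t = 0.2 for a mild and a large growth parameter.
import math

def y(t, lam, alpha=0.5):
    return alpha / (alpha + (1 - alpha) * math.exp(-lam * t))

print(y(0.2, 1))    # still near the initial value 0.5
print(y(0.2, 50))   # already essentially 1: the sharp transient makes the problem stiff
```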

Example 3.2.1. Consider dy/dt = λy(1 − y), y(0) = 0.5, whose solution is y(t) = e^{λt}/(1 + e^{λt}). Using λ = 1, 5, 15 and 50 respectively, we observe that the equation becomes more stiff as the value of λ increases. For λ = 1, Figure 3.2.1 shows there is no stiffness at all, but when λ = 5 the differential equation becomes stiffer (Figure 3.2.2). When we take λ = 15, Figure 3.2.3 shows that the differential equation is stiff, and for λ = 50 the equation is very stiff [Figure 3.2.4]. This tells us that the stiffness of the problem depends on λ: the larger λ is, the stiffer the problem.


Figure 3.2.1. Figure shows no stiffness for λ = 1.

Figure 3.2.2. Figure shows that the differential equation starts to become stiff for λ = 5.

Figure 3.2.3. Figure shows that the stiffness increased for the differential equation when λ = 15.

Now we will check how the numerical methods discussed above behave when we solve Example 3.2.1. Figure 3.2.5 shows the solution using the Trapezoidal method for λ = 1, 5, 15, 50 and h = 0.25. We observe that the solution using a numerical method (here the Trapezoidal method has


Figure 3.2.4. Figure shows that the differential equation is very stiff when λ = 50.

Figure 3.2.5. Results from the use of the Trapezoidal method for the numerical solution using λ = 1, 5, 15, 50 with h = 0.25. Gray level decreases as the value of λ increases.

Figure 3.2.6. Results of the logistic equation for different values of λ = 1, 5, 15, 50 using the StiffnessSwitching command in Mathematica. Gray level decreases as the value of λ increases.

been used) does not give an accurate solution when stiffness occurs. If we use a small step size we obtain an improved solution, but stiffness occurs only on a small interval of t, so it is very expensive to use a small step size on the entire interval. In


Figure 3.2.7. Step size h versus t using StiffnessSwitching for the logistic equation for λ = 1 (dashed curve) and λ = 50 (solid curve).

this case, we use a combination of two methods, one for the stiff part and one for the nonstiff part of the problem. This can be done in Mathematica using the “StiffnessSwitching” option featured in NDSolve. Generally, implicit methods or implicit multistep methods work better for stiff problems. We mentioned earlier that StiffnessSwitching uses a pair of two methods: one of them is specialized for stiff problems, and the other works for nonstiff problems with less expense. When stiffness occurs it automatically switches to the method dedicated to stiff problems, and when the stiff part of the interval is over it switches back to the nonstiff method. So StiffnessSwitching uses a variable time step, but it is not obvious from the graph or from the code exactly when it switches back and forth. With the variable time step, the method dedicated to stiff problems can use a larger step size than the method for nonstiff problems. We will show graphical evidence of this when we use the StiffnessSwitching feature on a problem. We already mentioned that when the value of λ is large the logistic equation becomes a stiff problem, and from Figure 3.2.5 it is obvious that the method used to solve this problem does not work accurately. Now we will check the same problem using StiffnessSwitching in Mathematica.


We obtained Figure 3.2.6 using StiffnessSwitching for the same problem, and it is obvious that the solution using StiffnessSwitching is more accurate for a stiff problem. Since StiffnessSwitching uses a variable time step, an obvious question is how the step size changes throughout the interval. Figure 3.2.7 shows how the step size h changes under StiffnessSwitching for λ = 1 (the equation is nonstiff) and λ = 50 (the method switches to the stiff solver when stiffness is detected).
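The switching idea can be sketched in pure Python. This is only a toy analogue of Mathematica's StiffnessSwitching, which is far more sophisticated: here cheap explicit Euler steps are taken while h|f_y| stays inside explicit Euler's stability interval, and a backward Euler step solved by Newton iteration is used otherwise. The threshold 2 and the step size h = 0.05 are illustrative assumptions.

```python
# Toy stiffness switching on the logistic equation y' = λ y (1 - y), λ = 50:
# explicit Euler when stable, otherwise backward Euler solved by Newton.

lam, h, w, t = 50.0, 0.05, 0.5, 0.0
f  = lambda y: lam * y * (1 - y)       # right-hand side
fy = lambda y: lam * (1 - 2 * y)       # its derivative (local Jacobian)

stiff_steps = 0
for _ in range(100):                   # integrate to t = 5
    if abs(h * fy(w)) <= 2:            # explicit Euler stable: take the cheap step
        w = w + h * f(w)
    else:                              # stiff: backward Euler via Newton iteration
        stiff_steps += 1
        z = w                          # initial guess for w_{n+1}
        for _ in range(20):
            z = z - (z - w - h * f(z)) / (1 - h * fy(z))
        w = z
    t += h

print(w, stiff_steps)   # w settles at 1; the stiff solver handled part of the interval
```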

3.3. Systems of Differential Equations

A linear system [19] of first order differential equations has the following form:

dx_1/dt = a_{11}(t)x_1 + a_{12}(t)x_2 + · · · + a_{1n}(t)x_n + f_1(t)
dx_2/dt = a_{21}(t)x_1 + a_{22}(t)x_2 + · · · + a_{2n}(t)x_n + f_2(t)
⋮
dx_n/dt = a_{n1}(t)x_1 + a_{n2}(t)x_2 + · · · + a_{nn}(t)x_n + f_n(t)

where all a_{ij}, i, j = 1, 2, . . . , n, are continuous functions of t on the given interval. The matrix form of a linear system of differential equations is as follows:

\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} + \begin{pmatrix} f_1(t) \\ f_2(t) \\ \vdots \\ f_n(t) \end{pmatrix}


In shorthand we write

X′ = AX + F

where X, A(t), F(t) denote the respective matrices

X = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix},  A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix},  F(t) = \begin{pmatrix} f_1(t) \\ f_2(t) \\ \vdots \\ f_n(t) \end{pmatrix}

Now if the functions f_i(t) = 0, i = 1, 2, . . . , n, then the system is called homogeneous; otherwise, it is called a nonhomogeneous system.

We can always convert a differential equation of nth order into a system of n first order differential equations by introducing n new variables.

Consider the second order differential equation for a mass spring system

my′′ = −by′ − ky   (3.6)

where k > 0 is the spring constant (the "stiffness" of the spring), b > 0 is the damping coefficient, and m > 0 is the mass.

Now let us introduce two new variables x_1(t), x_2(t) such that x_1(t) = y(t) and x_2(t) = y′(t). Then

x′_1(t) = y′(t) = x_2(t)
x′_2(t) = y′′(t) = −(b/m)y′(t) − (k/m)y(t) = −(b/m)x_2(t) − (k/m)x_1(t)


So, we obtain a system of two first order differential equations:

x′_1(t) = x_2(t)
x′_2(t) = −(b/m)x_2(t) − (k/m)x_1(t)   (3.7)

(3.7) is the first order system corresponding to the given second order differential equation (3.6). Hence the matrix form of equation (3.7) is

x′(t) = Ax(t)

where

A = \begin{pmatrix} 0 & 1 \\ −k/m & −b/m \end{pmatrix}.

The eigenvalues of the matrix A are λ_1 = (−b − √(b² − 4km))/(2m) and λ_2 = (−b + √(b² − 4km))/(2m). We notice that Re(λ) < 0, and when the product km is very small relative to b² the stiffness ratio |Re(λ_1)/Re(λ_2)| of the mass spring system becomes very large, which implies the system is stiff. So, a high degree of stiffness occurs when b² is large relative to 4km.
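This can be checked numerically; the Python sketch below (with illustrative values of m, b, k chosen so that b² ≫ 4km) computes the two eigenvalues and the stiffness ratio.

```python
# Eigenvalues of the mass-spring matrix A = [[0, 1], [-k/m, -b/m]] via the
# quadratic formula, and the resulting stiffness ratio |Re λ1| / |Re λ2|.
import cmath

def eigenvalues(m, b, k):
    disc = cmath.sqrt(b * b - 4 * k * m)
    return (-b - disc) / (2 * m), (-b + disc) / (2 * m)

m, b, k = 1.0, 100.0, 1.0           # b^2 = 10000 >> 4km = 4: strongly overdamped
l1, l2 = eigenvalues(m, b, k)
ratio = abs(l1.real) / abs(l2.real)
print(l1.real, l2.real, ratio)      # both real parts negative; ratio is large, so stiff
```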

3.4. Predator-Prey Problem

In this part we will discuss the problem predicting the population of two species

sharing the same ecosystem, which is called predator-prey problem[7], one of the

two species is called predator and has the population at time t x2(t) and is feeding on

the other, which is called prey with population at time t x1(t). Our consideration is

that the food supply for prey is always adequate and its growth rate is proportional

to its number alive at that time. So, the growth rate for prey should be k1x1(t).

But death rate of prey depends on both the numbers of prey and predators at any34


time t. So, the death rate is an interaction term involving the numbers of prey and predators; we assume that the death rate of the prey is k_2x_1(t)x_2(t). The birth rate of the predator depends on both the number of predators and the food supply, which in this case is the number of prey. So, the growth rate of the predator is k_3x_1(t)x_2(t). But the death rate of the predator depends solely on the number of predators alive at any time t, which is k_4x_2(t).

As x′_1(t) and x′_2(t) represent the rates of change of the prey and predator populations with respect to time t, the predator-prey problem is a nonlinear system of differential equations, which we can write in the following form:

x′_1(t) = k_1x_1(t) − k_2x_1(t)x_2(t)
x′_2(t) = k_3x_1(t)x_2(t) − k_4x_2(t)   (3.8)

These two differential equations are coupled and are called a predator-prey system

of differential equations. Some references refer to it as a Lotka-Volterra system

of differential equations.

For the system of equations (3.8) we obtain an implicit solution using separation of variables. We can write (3.8) as follows:

dx_1(t)/dt = x_1(t)(k_1 − k_2x_2(t))
dx_2(t)/dt = x_2(t)(k_3x_1(t) − k_4)


⇒ dx_2/dx_1 = x_2(t)(k_3x_1(t) − k_4) / (x_1(t)(k_1 − k_2x_2(t)))
⇒ ((k_1 − k_2x_2(t))/x_2(t)) dx_2 = ((k_3x_1(t) − k_4)/x_1(t)) dx_1
⇒ (k_1/x_2(t)) dx_2 − k_2 dx_2 = k_3 dx_1 − (k_4/x_1(t)) dx_1
⇒ ∫ (k_1/x_2(t)) dx_2 − ∫ k_2 dx_2 = ∫ k_3 dx_1 − ∫ (k_4/x_1(t)) dx_1
⇒ k_1 ln(x_2(t)) − k_2x_2(t) = k_3x_1(t) − k_4 ln(x_1(t)) + ln C

This is an implicit solution in terms of the numbers of predators and prey. We will find a numerical solution for this problem and plot the solutions using the Mathematica software. For a predator-prey problem there should be oscillation in the populations: when the number of prey is large there is a plentiful food supply for the predators, whose number then increases sharply. When the number of predators increases they consume more prey, so the number of prey decreases, and then the number of predators also starts decreasing. But there is an equilibrium or steady state situation for both predator and prey, called an equilibrium solution of the problem. In this case there is no increase or decrease in predators or prey, which implies dx_1(t)/dt = 0 and dx_2(t)/dt = 0. So, the steady state solution is as follows:

x_1(t) = k_4/k_3,  x_2(t) = k_1/k_2;

also note that (0, 0) is another equilibrium solution of the system.


Consider the following problem:

x′_1(t) = x_1(t)(1.5 − x_2(t))
x′_2(t) = x_2(t)(x_1(t) − 2)   (3.9)

So, the implicit solution of the equation is

1.5 ln x_2(t) − x_2(t) = x_1(t) − 2 ln x_1(t) + ln C

and the equilibrium solution of the problem is

x_1(t) = 2, x_2(t) = 1.5.
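A simple check (a Python sketch; the coefficients are those of problem (3.9)) confirms that the equilibrium is also a fixed point of the discretization: starting the explicit Euler method exactly at (2, 1.5), every increment vanishes and the iterates never move.

```python
# One explicit Euler step for system (3.9); at the equilibrium both
# right-hand sides are exactly zero, so the step is the identity.

def step(x1, x2, h):
    return x1 + h * x1 * (1.5 - x2), x2 + h * x2 * (x1 - 2.0)

x1, x2 = 2.0, 1.5
for _ in range(1000):
    x1, x2 = step(x1, x2, 0.001)
print(x1, x2)   # still the equilibrium (2.0, 1.5)
```

Starting anywhere else, e.g. at (79/40, 1), the same iteration traces out the closed orbits shown in the figures below.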

Using the default Mathematica code for the explicit Euler method with initial conditions x_1(0) = 79/40 and x_2(0) = 1, the solution gives Figure 3.4.1. Figure 3.4.2 is

Figure 3.4.1. Using the explicit Euler method for the Lotka-Volterra problem (3.9) with step size h = 0.001; the dark graph is for the predator and the light one for the prey, with initial conditions x_1(0) = 79/40 and x_2(0) = 1.

a contour plot of the implicit solution of the system of equations (3.9). Since the steady state solution of the system is x_1(t) = 2, x_2(t) = 1.5, the numerical solution should orbit about the steady state solution. Figures 3.4.2 and 3.4.3 clearly show that the solutions orbit about the steady state solution. Figures 3.4.3 and 3.4.5


show the numerical solution, with the horizontal axis for prey and the vertical axis for predator. Clearly, from the graphs we observe that the number of predators starts increasing while the number of prey is high, but once the predators increase the number of prey starts decreasing. Now we will see the results from the code we developed

Figure 3.4.2. Phase space plot for problem (3.9), where the horizontal axis represents the number of prey and the vertical axis the number of predators, using the implicit solution of the differential equation via Mathematica ContourPlot.

Figure 3.4.3. Numerical solution using the backward Euler method, with the horizontal axis for prey and the vertical axis for predator, with step size h = 0.001.


Figure 3.4.4. Using the implicit Trapezoidal method (with the code developed for a system of two differential equations), the plot of the numeric discrete solution for the Lotka-Volterra problem (3.9) with step size h = 0.05 and initial conditions x_1(0) = 79/40 and x_2(0) = 1.

Figure 3.4.5. Using the implicit Trapezoidal method (with the code developed for a system of two differential equations), the plot of the numeric discrete solution for the Lotka-Volterra equation (3.9) with step size h = 0.05 and initial conditions x_1(0) = 79/40 and x_2(0) = 1.


for the implicit Trapezoidal method using step size h = 0.05 and initial conditions x_1(0) = 79/40 and x_2(0) = 1. We first plot both solutions x_1(t) and x_2(t), with the horizontal axis for x_1(t) and the vertical axis for x_2(t). Figure 3.4.3 is obtained when we connect all the numeric discrete results, but the actual solution obtained by the code is Figure 3.4.5, where the discrete points are not connected by lines. Figure 3.4.3 is x_1(t) vs x_2(t), but Figure 3.4.4 represents both solutions x_1(t) and x_2(t) in the same frame against time t, where the dark graph is x_1(t) and the light one is x_2(t).

3.5. Harmonic Oscillator

Consider the following problem:

y′′(t) + ω²y(t) = 0, 0 < t < b,
y(0) = 1, y′(0) = 0,   (3.10)

where ω is a real scalar and the equation has the solution

y(t) = cos ωt.

This is a harmonic oscillator problem. As explained in Ascher [2], for high frequency, ω ≫ 1, the derivatives of the solution become larger and larger, since

‖y^{(p)}‖ = ω^p.


The local truncation error of any method of order p is

O(h^p ω^{p+1}).

The error term for highly oscillatory problems thus involves the product of the step size h and the frequency ω, and this product hω has to be less than 1 to keep the error under control. If the product hω is 1 or bigger, the error grows without bound. So, if we want to compute an approximate solution we need to impose the restriction

hω < 1

regardless of the order of the method; otherwise the error gets bigger and bigger. We will show approximate results without restricting the step size to demonstrate this point.

Now we want to check the behavior of the problem for different values of ω. To do that, we convert the second order differential equation into a system of two first order differential equations. Let x_1(t) = y(t) and x_2(t) = y′(t); then x′_1(t) = y′(t) = x_2(t) and x′_2(t) = y′′(t) = −ω²x_1(t). So, we obtain the following system of equations:

x′_1(t) = x_2(t)
x′_2(t) = −ω²x_1(t)

The initial conditions of the system are x_1(0) = 1 and x_2(0) = 0. In the system, the component x_1(t) represents the position function and x_2(t) represents the velocity


function. So the above system of equations can be written as

x′(t) = Ax(t)

where

A = \begin{pmatrix} 0 & 1 \\ −ω² & 0 \end{pmatrix},  x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix},  x′(t) = \begin{pmatrix} x′_1(t) \\ x′_2(t) \end{pmatrix}.

Therefore, the eigenvalues of A are −iω and iω, so the real part of the eigenvalues is zero, which means that the system is not stiff for any value of ω. The eigenvectors of A corresponding to −iω and iω are (i/ω, 1)^T and (−i/ω, 1)^T,

respectively. Hence the general solution of the system (3.10) is

x(t) = c_1 (i/ω, 1)^T e^{−iωt} + c_2 (−i/ω, 1)^T e^{iωt}
= c_1 (i/ω, 1)^T [cos(ωt) − i sin(ωt)] + c_2 (−i/ω, 1)^T [cos(ωt) + i sin(ωt)]
= ( (c_1 − c_2)i cos(ωt)/ω + (c_1 + c_2) sin(ωt)/ω ,  (c_1 + c_2) cos(ωt) − (c_1 − c_2)i sin(ωt) )^T.

Now we replace (c_1 − c_2)i by C_1 and c_1 + c_2 by C_2 and write x(t) componentwise to obtain

x_1(t) = C_1 cos(ωt)/ω + C_2 sin(ωt)/ω
x_2(t) = C_2 cos(ωt) − C_1 sin(ωt)


Now using the initial conditions we get C1 = ω and C2 = 0. So, we obtain the

solution

x1(t) = cos(ωt)

x2(t) = −ω sin(ωt)

Figure 3.5.1 shows the first component of the converted system of equations, which is in fact the exact solution of the given second order harmonic equation (3.10), for ω = 20; Figure 3.5.2 shows the numeric result generated by the difference method for step size h = 0.005. We used the Trapezoidal method for a system of two differential equations. We notice from the two graphs that the difference method works well even for large values of ω. To check this we can plot the two graphs in the same frame: Figure 3.5.3 is the combined plot of Figure 3.5.1 and Figure 3.5.2 for the first component of the solution of the system, and from the plot we observe that the two graphs fit each other almost perfectly. But Figure 3.5.4 shows the discrepancy between the exact solution and the numerical solution: there we used ω = 20 and h = 0.035, so the product hω = 0.7 is still less than 1, and yet the numeric method does not give accurate results, as we discussed earlier in this section.
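The dependence on hω can be reproduced in a Python sketch (the thesis's own computations use Mathematica). For this linear system each Trapezoidal step solves the 2×2 system (I − (h/2)A)x_{n+1} = (I + (h/2)A)x_n, which can be written in closed form; the step sizes below (h = 0.005 and h = 0.05) are chosen for illustration.

```python
# Trapezoidal method for x' = A x, A = [[0, 1], [-ω², 0]], starting from
# x(0) = (1, 0); x1 approximates cos(ω t). Each step inverts I - h/2 A by hand.
import math

def trapezoidal_cos(omega, h, steps):
    x1, x2 = 1.0, 0.0
    det = 1 + (h * omega / 2) ** 2         # det(I - h/2 A)
    for _ in range(steps):
        r1 = x1 + (h / 2) * x2             # (I + h/2 A) x, first component
        r2 = x2 - (h / 2) * omega**2 * x1  # second component
        x1 = (r1 + (h / 2) * r2) / det
        x2 = (r2 - (h / 2) * omega**2 * r1) / det
    return x1

omega = 20.0
good = abs(trapezoidal_cos(omega, 0.005, 200) - math.cos(omega * 1.0))  # hω = 0.1
bad  = abs(trapezoidal_cos(omega, 0.05, 20) - math.cos(omega * 1.0))    # hω = 1.0
print(good, bad)   # the error at t = 1 grows dramatically as hω approaches 1
```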

3.6. Conditions Under Which Newton Iteration Converges When

Applied for Backward Euler or Trapezoidal Method

When we attempt to solve a differential equation or a system of differential equations, we encounter either a linear equation or a nonlinear equation (or system of


Figure 3.5.1. Plot of the exact solution y(t) = cos(ωt) of the second order differential equation (3.10) for ω = 20.

Figure 3.5.2. Plot of the numeric solution of the converted system of equations of the second order differential equation (3.10), using the Trapezoidal method for a system of two equations, for ω = 20 and step size h = 0.005.

Figure 3.5.3. Combined plot of the exact solution and the numeric solution using the Trapezoidal method for the second order ODE (3.10), for ω = 20 and step size h = 0.005.

equations). If the equation is linear, we can obtain the next approximation simply by substituting the previous one; but if we use implicit methods in the nonlinear case, it is not that simple. That is, for the nonlinear case using implicit methods we


Figure 3.5.4. Plot of the exact solution and the numeric solution using the Trapezoidal method for the second order ODE (3.10), for ω = 20 and step size h = 0.035.

need to use a root-finding method, and we have many methods for finding the roots of nonlinear equations. Newton's iterative method is one of them, and it is the one we use in our code to solve the implicit equations arising in the backward Euler and Trapezoidal methods. Now, Newton's method is a special type of fixed point method, so an obvious question arises: does Newton's iterative method always converge? If not, what conditions guarantee the convergence of Newton's method? We will give some idea of the conditions under which Newton's method converges. Since fixed point iteration and Newton's method are both involved, we need the Fixed Point Theorem, which is as follows:

Theorem 3.6.1 (Fixed Point Theorem). Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x in [a, b]. Suppose, in addition, that g′ exists on (a, b) and that a constant 0 < k < 1 exists with

|g′(x)| ≤ k for all x ∈ (a, b).

Then for any number p_0 in [a, b], the sequence defined by

p_n = g(p_{n−1}), n ≥ 1,

converges to the unique fixed point p in [a, b]. [7]

In developing our codes to solve differential equations by different difference methods, Newton's method is applicable when an implicit method is used on a nonlinear equation. When we use Newton's method to approximate w_{n+1}, we are approximating a solution of a nonlinear algebraic equation (or system of such equations). In Burden and Faires [7] a discussion is given of how Newton's method works for the Trapezoidal method; we will show how it works for the backward Euler method. So, consider the backward Euler method

w_{n+1} = w_n + hf(t_{n+1}, w_{n+1}).

When we already have t_n, t_{n+1} and w_n computed, we need to find w_{n+1}, which is the solution to

F(w) = w − w_n − hf(t_{n+1}, w) = 0.   (3.11)

We use Newton's method to obtain the solution of equation (3.11), and to approximate the required solution we choose the initial guess w^{(0)}_{n+1}, generally as w_n,

and generate w^{(k)}_{n+1} for k = 1, 2, 3, . . . . According to Newton's method, we have

w^{(k)}_{n+1} = w^{(k−1)}_{n+1} − F(w^{(k−1)}_{n+1}) / F′(w^{(k−1)}_{n+1})
= w^{(k−1)}_{n+1} − (w^{(k−1)}_{n+1} − w_n − hf(t_{n+1}, w^{(k−1)}_{n+1})) / (1 − hf_y(t_{n+1}, w^{(k−1)}_{n+1})).

We continue until the difference between two successive iterates w^{(k)}_{n+1} and w^{(k−1)}_{n+1} is sufficiently small. Applying Newton's method to equation (3.11) amounts to fixed point iteration with

G(w) = w − F(w)/F′(w).

Hence

G′(w) = −h(w − w_n − hf(t_{n+1}, w)) f_{ww}(t_{n+1}, w) / (1 − hf_w(t_{n+1}, w))².

Now, according to the fixed point theorem, Newton's iteration is guaranteed to converge if the function G(w) maps a given interval into itself and there exists k with 0 < k < 1 such that

| −h(w − w_n − hf(t_{n+1}, w)) f_{ww}(t_{n+1}, w) / (1 − hf_w(t_{n+1}, w))² | ≤ k.
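The iteration above can be sketched in Python for one backward Euler step. The problem y′ = −y³ is an assumed illustrative example (not from the text); the sketch runs Newton's iteration from w^{(0)} = w_n and then evaluates the contraction quantity |G′(w)| at the converged iterate.

```python
# One backward Euler step for y' = -y^3 solved by Newton's method, with a
# numerical check of |G'(w)| = |h (w - w_n - h f) f_ww| / (1 - h f_w)^2.

h, wn = 0.1, 1.0
f   = lambda w: -w**3          # right-hand side
fw  = lambda w: -3 * w**2      # ∂f/∂w
fww = lambda w: -6 * w         # ∂²f/∂w²

F  = lambda w: w - wn - h * f(w)
Fp = lambda w: 1 - h * fw(w)

w = wn                         # initial guess w^{(0)}_{n+1} = w_n
for _ in range(20):
    w_new = w - F(w) / Fp(w)
    if abs(w_new - w) < 1e-14:
        w = w_new
        break
    w = w_new

Gp = abs(-h * (w - wn - h * f(w)) * fww(w)) / (1 - h * fw(w)) ** 2
print(w, abs(F(w)), Gp)   # F(w) ≈ 0 at the root, and |G'| < 1 there
```

Note that at the exact root F(w) = 0, so |G′(w)| vanishes there; the contraction condition is what matters on a neighborhood of the root.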

3.6.1. Advantages and Disadvantages of Various Classes of Runge-Kutta Methods and Linear Multistep Methods (Adams Methods, BDF Methods): The family of Runge-Kutta methods is very important in numerical analysis. The advantages of Runge-Kutta methods are as follows:


• Runge-Kutta methods are flexible for both stiff and nonstiff problems. We note that for a stiff problem, implicit Runge-Kutta methods are the useful ones.
• No additional method is required to find extra initial values.
• Change of step size is straightforward.

Although Runge-Kutta methods have many merits, they have a disadvantage for nonstiff problems: PECE (predict, evaluate, correct, evaluate) variants of Adams method pairs can prove less expensive than Runge-Kutta methods if high accuracy is desired, especially if the function evaluations dominate the cost.

The advantages of linear multistep methods, such as the methods of the Adams family and the Backward Differentiation Formulas, are as follows:

• BDF methods are particularly useful for stiff problems.
• They are an obvious choice for the direct discretization of differential algebraic equations (DAEs).
• The algebraic system that must be solved for stiff problems has minimal size when using BDF methods, making them a relatively cheap and popular choice in practice.

The disadvantage of multistep methods is that more than one starting value must be known before the method can start, and another method must be used to obtain those starting values.


CHAPTER 4

Discretization of Partial Differential Equations

4.1. Difference Formulas and Other Preliminaries

To discretize a partial differential equation we have several methods. In all

these methods difference formulas play a vital role. There are three types of

difference formula. Forward difference formula, Backward difference formula, and

Centered difference formula.

The construction of the forward difference formula uses the forward difference notation ∆ and is defined as

∆_h f(x) = (f(x + h) − f(x))/h

where h is the step size, which can be constant or variable.

For the backward difference formula we use the notation ∇ and, instead of the values x + h and x in the forward difference formula, we use x and x − h. So we have an expression of the form

∇_h f(x) = (f(x) − f(x − h))/h.

Finally, the centered difference formula, for which we use the notation δ, is defined as follows:

δ_h f(x) = (f(x + h) − f(x − h))/(2h).


Relation between the difference formulas and the derivatives:

(u(x + h) − u(x))/h = u′(x) + O(h)
(u(x) − u(x − h))/h = u′(x) + O(h)
(u(x + h) − u(x − h))/(2h) = u′(x) + O(h²)
(u(x + h) − 2u(x) + u(x − h))/h² = u′′(x) + O(h²)
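These orders can be confirmed numerically. The Python sketch below (an illustration, assuming u(x) = sin x at x = 1) checks that halving h roughly halves the forward-difference error and quarters the centered-difference error.

```python
# Verify the first-order forward difference and second-order centered
# difference by comparing errors at h = 0.1 and h = 0.05.
import math

u, du = math.sin, math.cos
x = 1.0

def fwd(h): return (u(x + h) - u(x)) / h
def ctr(h): return (u(x + h) - u(x - h)) / (2 * h)

e_f1, e_f2 = abs(fwd(0.1) - du(x)), abs(fwd(0.05) - du(x))
e_c1, e_c2 = abs(ctr(0.1) - du(x)), abs(ctr(0.05) - du(x))
print(e_f1 / e_f2)   # close to 2: error is O(h)
print(e_c1 / e_c2)   # close to 4: error is O(h^2)
```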

We will use the above difference formulas to discretize partial differential equations. Before we start the discretization, we define a special kind of matrix.

Definition. An n × n matrix is said to be a band matrix if there exist integers p and q with 1 < p, q < n such that a_{ij} = 0 whenever p ≤ j − i or q ≤ i − j. The bandwidth of the matrix is defined by w = p + q − 1.

Here the number p describes the number of diagonals above and including the main diagonal on which the entries are nonzero, and the number q describes the number of diagonals below and including the main diagonal on which the entries are nonzero. Normally, p and q are small numbers.

Example 4.1.1. Consider the following matrix

2 1 0 0

1 4 1 0

0 1 4 1

0 0 1 2

50

Page 62: Analysis and Implementation of Numerical Methods for

This is a band matrix with p = q = 2 and bandwidth w = 2 + 2 − 1 = 3. A matrix with bandwidth 3 is called tridiagonal.

We also want to state three important results about the eigenvalues of certain

matrices.

Theorem 4.1.1. [17] A tridiagonal matrix

    ⎡ a  b          ⎤
    ⎢ c  a  ⋱       ⎥
    ⎢    ⋱  ⋱  b    ⎥  ∈ ℝ^{(N−1)×(N−1)}
    ⎣       c  a    ⎦

with numbers a, b, c ∈ ℝ, bc > 0, has the following eigenvalues:

λ_n = a + 2 sgn(c) √(bc) cos(nπ/N),  n = 1, …, N − 1.   (4.1)
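As a quick sanity check of Theorem 4.1.1, the following sketch (assuming NumPy; variable names are illustrative) compares formula (4.1) with numerically computed eigenvalues of a symmetric tridiagonal matrix:

```python
import math
import numpy as np

# Compare formula (4.1) with numerical eigenvalues for a = 2, b = c = -1
a, b, c = 2.0, -1.0, -1.0
N = 8                                  # matrix size is (N-1) x (N-1)
T = (a * np.eye(N - 1)
     + b * np.eye(N - 1, k=1)
     + c * np.eye(N - 1, k=-1))

formula = sorted(a + 2 * math.copysign(1, c) * math.sqrt(b * c)
                 * math.cos(n * math.pi / N) for n in range(1, N))
numeric = sorted(np.linalg.eigvalsh(T))
print(np.allclose(formula, numeric))
```

The two lists agree to machine precision, as (4.1) predicts.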

Theorem 4.1.2 (Weyl's Inequality). [6] Let A, B be n × n symmetric matrices, and for each i = 1, 2, 3, …, n let the eigenvalues of A and B be ordered by

λ₁(A) ≤ λ₂(A) ≤ ⋯ ≤ λ_n(A)  and  λ₁(B) ≤ λ₂(B) ≤ ⋯ ≤ λ_n(B).

Then for any k = 1, 2, 3, …, n,

λ_k(A) + λ₁(B) ≤ λ_k(A + B) ≤ λ_k(A) + λ_n(B),   (4.2)
λ_k(B) + λ₁(A) ≤ λ_k(A + B) ≤ λ_k(B) + λ_n(A).


Theorem 4.1.3 (Geršgorin Circle). [7] Let A be an n × n matrix and let R_i denote the circle in the complex plane centered at a_ii with radius ∑_{j=1, j≠i}^{n} |a_ij|; that is,

R_i = { z ∈ ℂ : |z − a_ii| ≤ ∑_{j=1, j≠i}^{n} |a_ij| },

where ℂ denotes the complex plane. The eigenvalues of A are contained within the union of these circles. The union of any k of the circles that does not intersect the remaining (n − k) contains precisely k of these eigenvalues.
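The theorem can be illustrated on the band matrix of Example 4.1.1; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Gershgorin discs for the band matrix of Example 4.1.1
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal row sums
eigs = np.linalg.eigvalsh(A)                      # A is symmetric, eigenvalues real

# Theorem 4.1.3: every eigenvalue lies in the union of the discs
for lam in eigs:
    assert any(abs(lam - centers[i]) <= radii[i] + 1e-12 for i in range(4))
print(eigs)
```

Here the discs are |z − 2| ≤ 1 and |z − 4| ≤ 2, so every eigenvalue must lie in the real interval [1, 6].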

4.2. Stiff Differential Systems in Some Applications

Consider the initial value problem for the heat equation in one spatial dimension, given by

∂u/∂t = ∂²u/∂x²,  0 ≤ x ≤ 1,  t ≥ 0,   (4.3)

with initial condition

u(x, 0) = f(x),  0 ≤ x ≤ 1,

where f : [0, 1] → ℝ is a given function, and with boundary conditions as given in [15]:

a₀ ∂u/∂x + b₀ u = λ₀(t),  x = 0,  t ≥ 0,
a₁ ∂u/∂x + b₁ u = λ₁(t),  x = 1,  t ≥ 0.


We assume that no discontinuity exists in the initial condition and that

a₀ f′(0) + b₀ f(0) = λ₀(0),
a₁ f′(1) + b₁ f(1) = λ₁(0),

where ′ denotes the derivative with respect to x. Hence there is no inconsistency between the initial condition and the boundary conditions. We assume that λ₀(t) and λ₁(t) are continuous and bounded functions of t.

We want to approximate the function u(x, t) numerically. Consider the equidistant mesh points

x_j = j∆x,  j = 0, 1, …, N  (∆x = 1/N).

Using second order centered differences we can approximate the second order partial derivative as follows:

∂²u/∂x²(x_j, t) = [u(x_{j+1}, t) − 2u(x_j, t) + u(x_{j−1}, t)] / (∆x)² + O(∆x²).

For y_j(t) ≈ u(x_j, t), neglecting the O(∆x²) terms, we get the following system of N + 1 differential equations:

y_j′(t) = (1/(∆x)²){y_{j+1}(t) − 2y_j(t) + y_{j−1}(t)},  t ≥ 0,
y_j(0) = f(x_j),  j = 0, 1, …, N.

For j = 0 and h = ∆x we have

y₀′(t) = (1/h²){y₁(t) − 2y₀(t) + y₋₁(t)}.   (4.4)


In the above equation there is a point which lies outside our range, to the left of the boundary. This point is called a ghost point. The ghost point can be eliminated by using the boundary condition

(a₀/(2h))(y₁(t) − y₋₁(t)) + b₀ y₀(t) = λ₀(t).

By substituting y₋₁(t) = (2h b₀/a₀) y₀(t) − (2h/a₀) λ₀(t) + y₁(t) into equation (4.4), we obtain

y₀′(t) = −(1/h²){(−2b₀/(a₀N) + 2) y₀(t) − 2y₁(t) + 2λ₀(t)/(a₀N)}.   (4.5)

Similarly, for j = N we get another ghost point y_{N+1} in the equation

y_N′(t) = (1/h²){y_{N+1}(t) − 2y_N(t) + y_{N−1}(t)}.   (4.6)

Using centered differences in the second boundary condition, we get

(a₁/(2h))(y_{N+1}(t) − y_{N−1}(t)) + b₁ y_N(t) = λ₁(t).

Solving for y_{N+1}(t), we get

y_{N+1}(t) = (2h/a₁) λ₁(t) − (2h b₁/a₁) y_N(t) + y_{N−1}(t).

Substituting y_{N+1}(t) into equation (4.6), we obtain

y_N′(t) = −(1/h²){−2y_{N−1}(t) + (2b₁/(a₁N) + 2) y_N(t) − 2λ₁(t)/(a₁N)}.   (4.7)

And for j = 1, 2, …, N − 1,

y_j′(t) = −(1/h²){−y_{j−1}(t) + 2y_j(t) − y_{j+1}(t)}.   (4.8)


The matrix form of the differential equations (4.5), (4.7), and (4.8) is as follows:

dY/dt = −(1/h²)[MY + G(t)],   (4.9)

where M is the (N + 1) × (N + 1) matrix given by (4.10) and G(t) is the (N + 1)-dimensional column vector given by (4.11):

M = ⎡ −2b₀/(a₀N) + 2   −2                              ⎤
    ⎢ −1                2   −1                          ⎥
    ⎢       −1          2   −1                          ⎥  ∈ ℝ^{(N+1)×(N+1)}   (4.10)
    ⎢            ⋱      ⋱    ⋱                          ⎥
    ⎢                 −1     2    −1                    ⎥
    ⎣                       −2    2b₁/(a₁N) + 2         ⎦

and

G(t) = (2λ₀(t)/(a₀N), 0, 0, …, 0, −2λ₁(t)/(a₁N))ᵀ.   (4.11)

Now, according to Theorem 4.1.3, the eigenvalues of the matrix −(1/h²)M lie in the union of the circles centered at −2/h², 2b₀/(a₀h) − 2/h², and −2b₁/(a₁h) − 2/h² on the real axis, each having radius 2/h². Depending on the step size h, the centers and the radii of the circles change. According to Theorem 4.1.3 the eigenvalues can be anywhere in the union of the circles, and we cannot conclude that the system is stiff, as the eigenvalues could have a positive real part.


Figure 4.2.1. Circles centered at −2/h², (2b₀/(a₀h) − 2/h²), and (−2b₁/(a₁h) − 2/h²) on the real axis, each having radius 2/h², for h = 1, b₀ = 1/2, a₀ = 4, a₁ = 8, and b₁ = 1.
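The assembly of (4.9) and the eigenvalue localization above can be sketched as follows, assuming NumPy; the parameter values are those of Figure 4.2.1, chosen here purely for illustration:

```python
import numpy as np

# Parameters as in Figure 4.2.1 (illustrative choice)
N = 8
h = 1.0 / N
a0, b0, a1, b1 = 4.0, 0.5, 8.0, 1.0

# The (N+1) x (N+1) matrix M of equation (4.10)
M = np.zeros((N + 1, N + 1))
for j in range(1, N):
    M[j, j - 1:j + 2] = [-1.0, 2.0, -1.0]
M[0, 0], M[0, 1] = 2.0 - 2.0 * b0 / (a0 * N), -2.0
M[N, N - 1], M[N, N] = -2.0, 2.0 + 2.0 * b1 / (a1 * N)

A = -M / h**2                      # coefficient matrix of dY/dt in (4.9)
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)
for lam in np.linalg.eigvals(A):   # each eigenvalue lies in some Gershgorin disc
    assert any(abs(lam - centers[i]) <= radii[i] + 1e-9 for i in range(N + 1))
print(sorted(lam.real for lam in np.linalg.eigvals(A)))
```

For these parameters the rightmost disc boundary is 2b₀/(a₀h) > 0, so Theorem 4.1.3 alone does not rule out eigenvalues with positive real part, exactly as noted above.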

4.3. Nonhomogeneous Heat Equation

[1] Consider the following heat equation:

u_t − u_xx = f(u),  0 < x < 1,  t > 0,   (4.12)

with boundary conditions

u(0, t) = u(1, t) = 0,  t > 0,   (4.13)

and initial condition

u(x, 0) = u₀(x),  0 < x < 1,   (4.14)

where u₀(x) is a given function. The function u(x, t) is to be approximated numerically, and to do so we semi-discretize the given PDE (4.12) using the method of lines.

Let x_j = j∆x, j = 1, 2, …, N − 1 (∆x = 1/N). Applying the centered difference approximation to the second order partial derivative with respect to x in the PDE (4.12),


we get a system of equations as follows:

y_j′(t) = [y_{j+1}(t) − 2y_j(t) + y_{j−1}(t)] / h² + f(y_j(t)),  j = 1, 2, …, N − 1,
y_j(0) = u₀(x_j).   (4.15)

The matrix form of the above system of ODEs is

Y′(t) = A Y(t) + F(Y(t)),   (4.16)

where A is the (N − 1) × (N − 1) matrix given by (4.17), and Y(t) and F(Y(t)) are (N − 1)-dimensional column vectors:

Y(t) = (y₁(t), y₂(t), …, y_{N−1}(t))ᵀ,
F(Y(t)) = (f(y₁(t)), f(y₂(t)), …, f(y_{N−1}(t)))ᵀ,

and

A = −(1/h²) ⎡  2  −1                  ⎤
            ⎢ −1   2  −1              ⎥
            ⎢     −1   2  −1          ⎥  ∈ ℝ^{(N−1)×(N−1)}.   (4.17)
            ⎢         ⋱   ⋱   ⋱       ⎥
            ⎢         −1   2  −1      ⎥
            ⎣             −1   2      ⎦


Now we need the Jacobian matrix of the right-hand side of (4.16) to determine the stiffness of the system of ODEs (4.16), and the Jacobian is as follows:

J = −(1/h²) ⎡  2  −1              ⎤   ⎡ f_{y₁}(y₁)                      ⎤
            ⎢ −1   2  −1          ⎥ + ⎢        f_{y₂}(y₂)               ⎥
            ⎢     ⋱   ⋱   ⋱       ⎥   ⎢               ⋱                 ⎥
            ⎣         −1   2      ⎦   ⎣                  f_{y_{N−1}}(y_{N−1}) ⎦

  = −(1/h²) ⎡ 2 − h²f_{y₁}(y₁)   −1                                          ⎤
            ⎢ −1   2 − h²f_{y₂}(y₂)   −1                                      ⎥
            ⎢        ⋱            ⋱        ⋱                                  ⎥   (4.18)
            ⎢            −1   2 − h²f_{y_{N−2}}(y_{N−2})   −1                 ⎥
            ⎣                 −1   2 − h²f_{y_{N−1}}(y_{N−1})                 ⎦

According to Theorem 4.1.1 and Theorem 4.1.2, the eigenvalues of the tridiagonal matrix (4.18) satisfy the following inequalities:

−(4/h²) sin²(jπ/(2N)) + min_{1≤k≤N−1} f_{y_k}(y_k) ≤ λ_j(J) ≤ −(4/h²) sin²(jπ/(2N)) + max_{1≤k≤N−1} f_{y_k}(y_k).   (4.19)

From the stiffness ratio we know that a large stiffness ratio indicates a stiff system: the larger the stiffness ratio, the stiffer the system. From the eigenvalue bounds above, the stiffness of the system depends on the partial derivatives of the function f(u) and on (4/h²) sin²(jπ/(2N)). For smaller spatial step sizes, the largest eigenvalue of the system is close to −π² + max_{1≤k≤N−1} f_{y_k}(y_k), while the smallest eigenvalue approaches −∞. If we consider the magnitudes of the eigenvalues, the largest approaches ∞ and, depending on the behavior of f and y, the smallest could remain finite and positive. This means the stiffness ratio approaches ∞ and the problem becomes increasingly stiff as h → 0. So we conclude that a small spatial step size can produce a large and very stiff system, while for relatively large spatial step sizes the stiffness depends on the partial derivatives of the function f(u). So the stiffness depends on the nature of the given function and on the solution itself.

Example 4.3.1. Consider

u_t − u_xx = sin u,  0 < x < 1,  t > 0.   (4.20)

Here f(u) = sin u, so |f_u| = |cos u| ≤ 1, and using equation (4.19) the eigenvalues of the Jacobian matrix of the semi-discretized PDE (4.20) satisfy the inequalities

−(4/h²) sin²(jπ/(2N)) − 1 ≤ λ_j ≤ −(4/h²) sin²(jπ/(2N)) + 1.

So the smallest possible eigenvalue of the system approaches −∞, while the largest eigenvalue could be close to −π² + 1 as h → 0, and the system becomes very stiff. For the approximate solution of a stiff system we need an implicit difference method in vector form.
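The growth of the stiffness can be observed directly by computing the eigenvalues of the Jacobian (4.18) for f(u) = sin u at u = 0, where f_u = cos u = 1; a sketch, assuming NumPy and an illustrative function name:

```python
import numpy as np

def jacobian_sin(N):
    """Jacobian (4.18) for f(u) = sin(u), evaluated at y = 0 (so cos(y_j) = 1)."""
    h = 1.0 / N
    J = np.diag(np.full(N - 1, -2.0 / h**2 + 1.0))
    J += np.diag(np.full(N - 2, 1.0 / h**2), k=1)
    J += np.diag(np.full(N - 2, 1.0 / h**2), k=-1)
    return J

for N in (8, 16, 32):
    lam = np.linalg.eigvalsh(jacobian_sin(N))
    # largest eigenvalue tends to 1 - pi^2; smallest behaves like -4N^2 + 1
    print(N, lam.min(), lam.max())
```

Doubling N roughly quadruples the magnitude of the smallest eigenvalue while the largest stays near −π² + 1, so the stiffness ratio grows without bound as h → 0.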


CHAPTER 5

Experimental Estimation of the Order of Numerical Methods

5.1. Error Analysis

We will perform computations to provide experimental support for the order of certain methods, as well as attempt, using a strategy outlined, for example, in [13], to determine an optimal step size when an exact solution is not known. If y(t) is the exact solution of a differential equation and w_n is the numerical approximation at the nth step, then the actual error at that step is

E_n(h) = y(t_n) − w_n,

and it is obvious from the above equation that the actual error depends on the step number and the step size. The behavior of the actual error E_n(h) is different for different methods; more particularly, the actual error depends on the order of the method. In this section we will infer the order of the method under discussion from its numerical results. We will use the backward Euler method, the implicit trapezoidal method, and the fourth-order Runge-Kutta method.

In Example 5.1.1 we compute the numerical results and errors obtained using different numbers of steps in the interval [a, b] with the backward Euler method, the implicit trapezoidal method, and the Runge-Kutta method. The number of steps varies as 2^N for N = 0, 1, 2, 3, …, and hence the step size varies as h = (b − a)/2^N. In Table 5.1.1 we list both the numerical approximations w_h(b) and the errors |E(h)|, defined by

|E(h)| = |y(b) − w_h(b)|,

at the right-hand endpoint, where w_h(b) denotes the approximation obtained at the right-hand endpoint using step size h.

Example 5.1.1. For y′(t) = 5e^{5t}(y − t)² + 1, find y(1) with the initial condition y(0) = −1. The exact solution is y(t) = t − e^{−5t}, and its value at t = 1 is y(1) = 1 − e^{−5} = 0.99326205300091.

From Table 5.1.1 we notice that the error decreases as the number of steps increases; in other words, as the step size decreases the error decreases as well. But we also notice something different (marked *) in the lower right corner of Table 5.1.1 for the Runge-Kutta method: the error |E(h)| starts increasing, because of the finiteness of the computation. If we observe carefully, we can see that the behavior of the error |E(h)| is different for the different methods. More particularly, the Runge-Kutta method is better than the implicit trapezoidal method, and the implicit trapezoidal method is better than the backward Euler method; the better methods converge more rapidly and give much smaller errors for any given N. We know from the Taylor series approximation that the error |E(h)| varies as a power of the step size h. For the backward Euler method, the implicit trapezoidal method, and the Runge-Kutta method that power is 1, 2, and 4, respectively. That is,

Backward Euler method:       E(h) ≈ C_BE h,
Implicit trapezoidal method: E(h) ≈ C_IT h²,
Runge-Kutta method:          E(h) ≈ C_RK h⁴.   (5.1)

If we plot the graphs of E(h) versus h using equation (5.1) for these methods, taking C_BE, C_IT, and C_RK all equal to 1 to emphasize the power of h, then

Table 5.1.1. Approximation w_h(b) and actual error |E(h)| = |y(b) − w_h(b)| for y′(t) = 5e^{5t}(y − t)² + 1. For each number of steps, the first line gives w_h(b) and the second line gives |E(h)|.

No. of steps   Backward Euler       Implicit Trapezoidal     Runge-Kutta
1              0.9639580841782      0.9487223281263          9.54109 × 10^12
               0.0293039688227      0.0445397248746          9.54109 × 10^12
2              0.9801829938436      1.0072881532092          3.02196678 × 10^51
               0.0130790591573      0.0140261002083          3.0219668 × 10^51
4              0.9876640831940      0.9940199107034          1.69141563 × 10^394
               0.0055979698069      0.0007578577025          1.69141563 × 10^394
8              0.9907925390206      0.9934727485841          0.99314445609867
               0.0024695139803      0.0002106955832          0.0001175969022
16             0.9921199797510      0.9933160757923          0.9932562213299
               0.0011420732499      0.0000540227914          5.831671050 × 10^−6
32             0.9927150039156      0.9932756415626          0.99326175011735
               0.0005470490853      0.0000135885617          3.028835621 × 10^−7
64             0.9929945741083      0.9932654552886          0.9932620358502
               0.0002674788926      3.4022877076 × 10^−6     1.715072107 × 10^−8
128            0.9931298267199      0.9932629038952          0.9932620519817
               0.0001322262811      8.508942547 × 10^−7      1.01918529 × 10^−9
256            0.9931963180702      0.9932622657445          0.9932620529388
               0.0000657349307      2.127436243 × 10^−7      6.209854853 × 10^−11
512            0.9932292800562      0.9932621061798          0.9932620529971
               0.0000327729447      5.317886864 × 10^−8      3.831823747 × 10^−12
1024           0.9932456901372      0.9932620662977          0.9932620530007
               0.0000163628637      1.329673938 × 10^−8      2.378097719 × 10^−13
2048           0.9932538774825      0.9932620563251          0.9932620530009
               8.175518388 × 10^−6  3.324220677 × 10^−9      1.476596623 × 10^−14
4096           0.9932579667181      0.9932620538320          0.9932620530009
               4.086282784 × 10^−6  8.310543365 × 10^−10     9.992007222 × 10^−16
8192           0.9932600102286      0.9932620532087          0.9932620530009
               2.042772330 × 10^−6  2.077635841 × 10^−10     1.110223025 × 10^−16
16384          0.9932610317070      0.9932620530529          0.9932620530009
               1.021293901 × 10^−6  5.194455977 × 10^−11     7.771561172 × 10^−16 (*)
32768          0.9932615423770      0.9932620530139          0.9932620530009
               5.106238856 × 10^−7  1.297639773 × 10^−11     3.108624469 × 10^−15 (*)
65536          0.9932617976947      0.9932620530043          0.9932620530009
               2.553061889 × 10^−7  3.370526081 × 10^−12     9.992007222 × 10^−16 (*)


Figure 5.1.1 shows the theoretical behavior of the error for these methods. But if we plot the calculated E(h) versus h for y′(t) = 5e^{5t}(y − t)² + 1 for these methods, then

Figure 5.1.1. Plot of claimed E(h) versus h for the backward Euler method, implicit trapezoidal method, and Runge-Kutta method.

Figure 5.1.2 shows the behavior of the errors. Because of the power of h involved in the error function E(h), the backward Euler method is known as a first order method, the implicit trapezoidal method is a second order method, and the Runge-Kutta method is a fourth order method. A higher order method gives a more accurate approximation, but it requires more computation and is consequently more costly per step than a lower order method. Now, if

Figure 5.1.2. Plot of actual error E(h) versus h for the backward Euler method, implicit trapezoidal method, and Runge-Kutta method for the equation in Example 5.1.1.


the order of a method is p, then from equation (5.1) we can write

E(h) ≈ C hᵖ,
E(h/2) ≈ C (h/2)ᵖ.   (5.2)

Using the properties of the logarithm in equation (5.2), we get

p ≈ (ln|E(h)| − ln|E(h/2)|) / ln 2.   (5.3)

Equation (5.3) gives an approximation to the order of the error of a method. We used h and h/2 to derive equation (5.3), but for any positive integer m, using h and h/m we can derive an analogous formula for an approximation to the order of a method. From equation (5.3) we expect the estimated order of the method to approach an integer as h → 0. Now, using equation (5.3) and the errors of the backward Euler, implicit trapezoidal, and Runge-Kutta methods in Table 5.1.1 for Example 5.1.1, we get the estimated orders given in Table 5.1.2. We observe that as the number of steps increases, the estimated order of the method approaches an integer for all these methods, but the implicit trapezoidal method approaches its anticipated order p faster than the backward Euler method, and the Runge-Kutta method faster than the other two. Similar to Table 5.1.1, something interesting happens for the Runge-Kutta method in the lower right corner of the table; this happens due to the finiteness of the computations.


Table 5.1.2. Order of error p = (ln|E(2h)| − ln|E(h)|)/ln 2 for y′(t) = 5e^{5t}(y − t)² + 1

No. of steps   Backward Euler   Implicit Trapezoidal   Runge-Kutta
2              1.163873         1.666979               -127.896529
4              1.224283         4.210043               -1138.584080
8              1.180677         1.846767               1322.651763
16             1.112572         1.963520               4.333797
32             1.061913         1.991175               4.267074
64             1.032245         1.997816               4.142422
128            1.016416         1.999453               4.072781
256            1.008277         1.999864               4.036713
512            1.004155         2.000191               4.018456
1024           1.002080         1.999781               4.010151
2048           1.001043         1.999984               4.009460
4096           1.000521         2.000001               3.885357
8192           1.000261         2.000000               3.169925
16384          1.000130         1.999898               -2.807355
32768          1.000065         2.001083               -2.000000
65536          1.000033         1.944844               1.637430
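The estimate (5.3) is easy to reproduce. The sketch below implements the classical fourth-order Runge-Kutta method for Example 5.1.1 (function names are illustrative) and recovers p ≈ 4:

```python
import math

def f(t, y):
    return 5.0 * math.exp(5.0 * t) * (y - t) ** 2 + 1.0

def rk4(f, t0, y0, t_end, n):
    """Classical fourth-order Runge-Kutta with n steps on [t0, t_end]."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

exact = 1.0 - math.exp(-5.0)                # y(1) = 1 - e^{-5}
errors = {n: abs(rk4(f, 0.0, -1.0, 1.0, n) - exact) for n in (64, 128)}
p = math.log(errors[64] / errors[128], 2)   # estimate (5.3)
print(round(p, 3))
```

The printed estimate is close to 4, in agreement with the Runge-Kutta column of Table 5.1.2.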

5.2. Handling Order in Practice

In Example 5.1.1 we talked about the approximations and estimated the orders of the methods. There we chose a differential equation whose analytical solution is known, so it was easy to analyze the error. But it is interesting to know the behavior of the errors when the analytical solution is not given. We will examine this issue in this section, following a process similar to that outlined in [13].

Suppose we do not know the analytical solution of the given problem and want to approximate the solution w_n for t ∈ [a, b] with y(t₀) = y₀. To use a method to approximate the solution, it is also important to choose the


step size for a better approximation. As we have seen from Table 5.1.2, when the errors start getting worse, the estimated order of the method also starts deviating from the expected integer. So an appropriate step size is very important for an approximate solution. We will use the concept of the expected integer order to find the optimal step size for a given method. To do so, we compute the solution of the problem at t = b using numerical methods, halving the step size each time, and we check when an approximately optimal step size is reached by observing the expected estimated order of the method. We know that a small step size gives a better solution, but we cannot continue the computation forever, as after a certain point the solution does not necessarily continue to converge. In fact, below a certain step size the approximate solution becomes contaminated by the round-off error of the computations; if we continue past that step size, the error starts creeping up.

What can we do to avoid this situation? We will use the concept of the order of the method to obtain an optimal step size, and hence a better approximate solution w_n. As we approximate the solution of the problem using different step sizes h = (b − a)/2^N for N = 0, 1, … at the right-hand endpoint b, we suppose the approximate solution w_h(b), which equals the actual solution y(b) plus the error E(h), behaves as follows:

w_h(b) ≈ y(b) + C hᵖ,


where p is the order of the method, h is the step size (we assume h is small, but not so small that round-off error dominates the solution), and C is unknown. Our aim is to find the optimal step size h.

Suppose D_h is the difference of the two successive approximations w_h(b) and w_{h/2}(b), that is,

D_h = w_h(b) − w_{h/2}(b) ≈ C hᵖ − C (h/2)ᵖ = C hᵖ (1 − 2⁻ᵖ).

Replacing h by h/2 we get

D_{h/2} ≈ C (hᵖ/2ᵖ)(1 − 2⁻ᵖ).

Using the logarithm properties we have

ln|D_h| − ln|D_{h/2}| ≈ ln 2ᵖ = p ln 2,

that is,

p ≈ (ln|D_h| − ln|D_{h/2}|) / ln 2 = ln|D_h / D_{h/2}| / ln 2.   (5.4)

Equations (5.4) and (5.3) have the same form, but the advantage of equation (5.4) is that we do not need to know the exact solution of the problem. In the process of finding the order p using equation (5.4) we need the approximate solutions w_h(b) for h = (b − a)/2^N, where N = 0, 1, 2, 3, …, and we observe that we cannot calculate the order p until the third approximation, as each calculation of p requires w_h, w_{h/2}, and w_{h/4}. In the following example we find the range of steps for roughly the expected p using formula (5.4).
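Formula (5.4) can be tried out directly; the sketch below applies it to y′(t) = t − y², y(0) = 0, using the classical fourth-order Runge-Kutta method (function names are illustrative, and no exact solution is needed):

```python
import math

def f(t, y):
    return t - y * y

def rk4(n, t0=0.0, y0=0.0, t_end=1.0):
    """Classical RK4 with n steps; returns the approximation w_h(b) at b = t_end."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

w = {n: rk4(n) for n in (8, 16, 32)}
D_h, D_h2 = w[8] - w[16], w[16] - w[32]   # successive differences
p = math.log(abs(D_h / D_h2), 2)          # estimate (5.4), no exact solution used
print(round(p, 2))
```

The estimate comes out near 4, consistent with the Runge-Kutta column of Table 5.2.1 at this step range.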

Example 5.2.1. Find the optimal step size, or the range of steps, for y′(t) = t − y² with y(0) = 0.

From Table 5.1.2 we observe that for the backward Euler method (BEM) the estimated order is approaching 1 and the accuracy is still improving. But for the implicit trapezoidal method (ITM) and the Runge-Kutta method (RKM) something interesting starts happening at the bottom of the respective columns: for ITM it starts late, whereas for RKM it starts at an early stage. Generally, a smaller step size gives better approximations, but the question is "How small can the step size be?". From Table 5.1.2 we can say that for a lower order method we can use a smaller step size than for a higher order method. To support this claim, we see that for BEM the range of steps is near 65536, but for ITM the range is 1024–2048 and for RKM 512–1024. After this range the results start getting worse. Now the question is "Why is this happening?".

We generally think that a smaller step size gives a better approximation, but in reality this is not always true. In computations, computers round up (or down), so there is always computational error involved, and for too many computations this error contributes a term C/h to the error E(h) of the method [13]. So E(h) has the form


Table 5.2.1. Order p = (ln|D_{4h}| − ln|D_{2h}|)/ln 2 for y′(t) = t − y². For each number of steps, the first line gives w_h(b) for each method, and the second line (from 4 steps on) gives the estimated order p.

No. of steps   Backward Euler   Implicit Trapezoidal   Runge-Kutta
1              1.0000000000     1.0000000000           1.0813395182292   (≈ w_h(b))
2              1.0401660864     1.0653022778           1.0796964587438
4              1.0606979230     1.0781004093           1.0817791280533
               0.9681           2.3512                 -0.3420           (≈ p)
8              1.0711477717     1.0809795631           1.0819046102429
               0.9744           2.1522                 4.0529
16             1.0767808803     1.0816802824           1.0819117444325
               0.9712           2.0387                 4.1366
32             1.0791804093     1.0818542924           1.0819121650930
               0.9800           2.0097                 4.0840
64             1.0805423891     1.0818977221           1.0819121905753
               0.9885           2.0024                 4.0451
128            1.0812262837     1.0819085750           1.0819121921425
               0.9939           2.0006                 4.0232
256            1.0815689826     1.0819112880           1.0819121922396
               0.9968           2.0002                 4.0118
512            1.0817405231     1.0819119662           1.0819121922457
               0.9984           2.0000                 4.0058
1024           1.0818263415     1.0819121357           1.0819121922460
               0.9992           2.0000                 4.0011
2048           1.0818692629     1.0819121781           1.0819121922461
               0.9996           2.0000                 4.0179
4096           1.0818907265     1.0819121887           1.0819121922461
               0.9998           2.0000                 3.2548
8192           1.0819014591     1.0819121914           1.0819121922461
               0.9999           2.0000                 1.1375
16384          1.0819068256     1.0819121920           1.0819121922461
               0.9999           1.9998                 -1.2630
32768          1.0819095089     1.0819121922           1.0819121922461
               1.0000           2.0000                 -0.2224
65536          1.0819108506     1.0819121922           1.0819121922461
               1.0000           2.0004                 -0.5850

as follows:

E(h) ≈ C₁ hᵖ + C₂/h.   (5.5)

Therefore, when the step size is too small, the second term dominates the first term and the error starts to increase; for the same reason the estimated order of the method starts deviating from the expected integer. The graph of equation (5.5) looks like Figure 5.2.1.

Figure 5.2.1. Plot of E(h) versus h when computational finiteness contributes to the error of the methods.


CHAPTER 6

Numerical Approximation for Second Order Singular Differential Equations

6.1. Lane-Emden Equation and Series Solutions

Consider the following second order singular ordinary differential equation:

y″(t) + (2/t) y′(t) + yⁿ(t) = 0   (6.1)

with initial conditions y(0) = 1, y′(0) = 0. This second order singular ordinary differential equation is known as the Lane-Emden equation [16]; it is an important equation in astrophysics. We will find the approximate solution using one of the numerical methods that we discussed earlier, as well as another one, but before that we will also find the series solution of the equation, to check for closed form solutions for various values of n.

Let us suppose that

y(t) = c₀ + c₁t + c₂t² + c₃t³ + c₄t⁴ + c₅t⁵ + c₆t⁶ + c₇t⁷ + c₈t⁸ + c₉t⁹ + c₁₀t¹⁰ + …   (6.2)

Using the initial conditions, we get c₀ = 1 and c₁ = 0. By substituting c₀ = 1 and c₁ = 0 in (6.2) we get

y(t) = 1 + c₂t² + c₃t³ + c₄t⁴ + c₅t⁵ + c₆t⁶ + c₇t⁷ + c₈t⁸ + c₉t⁹ + c₁₀t¹⁰ + …   (6.3)


Taking derivatives of equation (6.3), we have

y′(t) = 2c₂t + 3c₃t² + 4c₄t³ + 5c₅t⁴ + 6c₆t⁵ + 7c₇t⁶ + 8c₈t⁷ + 9c₉t⁸ + 10c₁₀t⁹ + …
y″(t) = 2c₂ + 6c₃t + 12c₄t² + 20c₅t³ + 30c₆t⁴ + 42c₇t⁵ + 56c₈t⁶ + 72c₉t⁷ + 90c₁₀t⁸ + …

Now, using the binomial expansion,

yⁿ(t) = (1 + c₂t² + c₃t³ + c₄t⁴ + c₅t⁵ + c₆t⁶ + c₇t⁷ + c₈t⁸ + c₉t⁹ + c₁₀t¹⁰ + …)ⁿ
  = 1 + nc₂t² + nc₃t³ + [nc₄ + (n(n−1)/2)c₂²]t⁴ + [nc₅ + (n(n−1)/2)·2c₂c₃]t⁵
  + [nc₆ + (n(n−1)/2)(c₃² + 2c₂c₄) + (n(n−1)(n−2)/6)c₂³]t⁶
  + [nc₇ + (n(n−1)/2)(2c₃c₄ + 2c₂c₅) + (n(n−1)(n−2)/6)·3c₂²c₃]t⁷
  + [nc₈ + (n(n−1)/2)(c₄² + 2c₃c₅ + 2c₂c₆) + (n(n−1)(n−2)/6)(3c₂c₃² + 3c₂²c₄) + (n(n−1)(n−2)(n−3)/24)c₂⁴]t⁸
  + [nc₉ + (n(n−1)/2)(2c₄c₅ + 2c₃c₆ + 2c₂c₇) + (n(n−1)(n−2)/6)(c₃³ + 6c₂c₃c₄ + 3c₂²c₅) + (n(n−1)(n−2)(n−3)/24)·4c₂³c₃]t⁹
  + [nc₁₀ + (n(n−1)/2)(c₅² + 2c₄c₆ + 2c₃c₇ + 2c₂c₈) + (n(n−1)(n−2)/6)(3c₃²c₄ + 3c₂c₄² + 6c₂c₃c₅ + 3c₂²c₆)
     + (n(n−1)(n−2)(n−3)/24)(6c₂²c₃² + 4c₂³c₄) + (n(n−1)(n−2)(n−3)(n−4)/120)c₂⁵]t¹⁰ + …


Using y′(t), y″(t), and yⁿ(t) in equation (6.1), we get

(6c₂ + 1) + 12c₃t + [20c₄ + nc₂]t² + [30c₅ + nc₃]t³ + [42c₆ + nc₄ + (n(n−1)/2)c₂²]t⁴
+ [56c₇ + nc₅ + n(n−1)c₂c₃]t⁵ + [72c₈ + nc₆ + (n(n−1)/2)(c₃² + 2c₂c₄) + (n(n−1)(n−2)/6)c₂³]t⁶
+ [90c₉ + nc₇ + (n(n−1)/2)(2c₃c₄ + 2c₂c₅) + (n(n−1)(n−2)/2)c₂²c₃]t⁷
+ [110c₁₀ + nc₈ + (n(n−1)/2)(c₄² + 2c₃c₅ + 2c₂c₆) + (n(n−1)(n−2)/2)(c₂c₃² + c₂²c₄) + (n(n−1)(n−2)(n−3)/24)c₂⁴]t⁸ + ⋯ = 0.

Equating the corresponding coefficients from both sides, we obtain

c₃ = c₅ = c₇ = c₉ = 0,  c₂ = −1/3!,  c₄ = n/5!,

c₆ = −n²/7! − n(n−1)/((84)(3!)²),

c₈ = n³/9! + n²(n−1)/((72)(84)(3!)²) + n²(n−1)/((72)(3!)(5!)) + n(n−1)(n−2)/((72)(6)(3!)³),

c₁₀ = −n⁴/11! − n³(n−1)/((110)(72)(84)(3!)²) − n³(n−1)/((110)(72)(3!)(5!)) − n²(n−1)(n−2)/((110)(72)(3!)⁴)
      − (n(n−1)/220)(n²/(5!)² + n²/((3)(7!)) + n(n−1)/((3)(84)(3!)²))
      − n²(n−1)(n−2)/((220)(3!)²(5!)) − n(n−1)(n−2)(n−3)/((2640)(3!)⁴).

Therefore,

y(t) = 1 − (1/3!)t² + (n/5!)t⁴ − (n²/7! + n(n−1)/((84)(3!)²))t⁶
     + (n³/9! + n²(n−1)/((72)(84)(3!)²) + n²(n−1)/((72)(3!)(5!)) + n(n−1)(n−2)/((72)(6)(3!)³))t⁸
     − (n⁴/11! + n³(n−1)/((110)(72)(84)(3!)²) + n³(n−1)/((110)(72)(3!)(5!)) + n²(n−1)(n−2)/((110)(72)(3!)⁴)
        + (n(n−1)/220)(n²/(5!)² + n²/((3)(7!)) + n(n−1)/((3)(84)(3!)²))
        + n²(n−1)(n−2)/((220)(3!)²(5!)) + n(n−1)(n−2)(n−3)/((2640)(3!)⁴))t¹⁰ + …   (6.4)


Now for n = 1,

y(t) = 1 − (1/3!)t² + (1/5!)t⁴ − (1/7!)t⁶ + (1/9!)t⁸ − (1/11!)t¹⁰ + ⋯ = (sin t)/t.

For n = 1 it is easy to find a closed form solution, but a closed form is not known for all values of n. So, except for a few values of n, we do not have an exact solution of the problem, and the series solution is not practical (it requires a very large number of terms for reasonable accuracy). Therefore, we will use numerical difference methods to obtain approximate solutions of the problem.

6.2. Numerical Results for Second Order Singular Differential Equations

In this section we discuss details about the Lane-Emden equation, which is a second order singular differential equation, and appropriate numerical methods for this type of problem. If the singularity occurs at the left endpoint, i.e., at the initial point, then we cannot use a method that requires evaluating the function at this point, and consequently we cannot use an explicit method. It is also not possible to use an implicit method that contains a term involving a functional evaluation at the singular point, so not all implicit methods can be used either. We must use an implicit method that does not involve a functional evaluation at the left endpoint. For example, the trapezoidal method cannot be used for this problem, but backward Euler can, and we will use it to solve the Lane-Emden equation. We note that in order to use the backward Euler method we need to convert the second order IVP into a first order system. An alternative approach involves application of a direct technique to the second order problem without converting the problem to a system.

Nyström methods are examples of such techniques, and we will illustrate the use of one such method here. We will give some insight into how this method was developed and will apply it to the Lane-Emden equation. The order of Nyström methods is 2 or higher, and we will use the second order Nyström method to solve our problems. In fact, the Lane-Emden equation is a family of ODEs for different values of n, and we do not know the exact solution except for a few values of n. Using one of these exact solutions, we will verify the order of the Nyström method; and for some other values of n for which we do not know the exact solution, we will compare the results from the backward Euler and Nyström methods with results discussed by M. Beech in [3] and W. Fowler and F. Hoyle in [11].

6.2.1. Nyström Method. Nyström methods were developed by Nyström [1925]; according to [12], the method is given in his fundamental memoir on the numerical solution of differential equations. An O(h^s) error for the (s − 1)-stage Nyström method was established for certain singular IVPs (including Lane-Emden) by Chawla et al. (1990) [10] and extended to a broader class by Benko et al. (2008) [5]. In this section we will show how the Nyström method works on the Lane-Emden equation. From equation (6.1) we have

y″(t) + (2/t) y′(t) + yⁿ(t) = 0.   (6.5)

Suppose f(t, y) = yⁿ; then equation (6.5) becomes

y″(t) + (2/t) y′(t) + f(t, y) = 0.   (6.6)


Since (ty)′ = ty′ + y, we have (ty)″ = ty″ + 2y′, and therefore equation (6.6) takes the form

(ty)″ + t f(t, y) = 0.   (6.7)

Let us suppose that χ(t) = ty; then equation (6.7) becomes

χ″(t) = F(t, χ),   (6.8)

where F(t, χ) = −t f(t, y). As outlined in [5], the Nyström method is given by (with Z = χ′(t), or Z = (ty)′ in our case)

χ_{n+1} = χ_n + h Z_n + h² ∑_{j=1}^{s−1} a_j K_j + t_n(h),
Z_{n+1} = Z_n + h ∑_{j=1}^{s−1} b_j K_j + t′_n(h),   (6.9)

with, for each n = 0, 1, …, N − 1,

K_i = F(t_n + α_i h, χ_n + α_i h Z_n + h² ∑_{j=1}^{i−1} β_{ij} K_j)   (6.10)

for i = 1, 2, …, s − 1, and

t_n(h), t′_n(h) = O(h^{s+1}),

where a_j, b_j, α_i, and β_{ij} are specified constants.


Let us rewrite equations (6.9) and (6.10) in terms of the original variable y instead of χ. Equation (6.9) becomes

t_{n+1} y_{n+1} = t_n y_n + h Z_n + h² ∑_{j=1}^{s−1} a_j K_j + t_n(h),
Z_{n+1} = Z_n + h ∑_{j=1}^{s−1} b_j K_j + t′_n(h).   (6.11)

As t_n = nh, equation (6.11) may be rewritten as

(n + 1) h y_{n+1} = n h y_n + h Z_n + h² ∑_{j=1}^{s−1} a_j K_j + t_n(h),
Z_{n+1} = Z_n + h ∑_{j=1}^{s−1} b_j K_j + t′_n(h),

and dividing the first equation above by h we get

(n + 1) y_{n+1} = n y_n + Z_n + h ∑_{j=1}^{s−1} a_j K_j + t_n(h),
Z_{n+1} = Z_n + h ∑_{j=1}^{s−1} b_j K_j + t′_n(h),   (6.12)

where t_n(h) = O(h^s).


Now let us rewrite equation (6.10) in terms of t and y:

K_i = F(t_n + α_i h, χ_n + α_i h Z_n + h² ∑_{j=1}^{i−1} β_{ij} K_j)
    = −(t_n + α_i h) f(t_n + α_i h, (χ_n + α_i h Z_n + h² ∑_{j=1}^{i−1} β_{ij} K_j)/(t_n + α_i h))
    = −(t_n + α_i h) f(t_n + α_i h, (t_n y_n + α_i h Z_n + h² ∑_{j=1}^{i−1} β_{ij} K_j)/(t_n + α_i h))
    = −(t_n + α_i h) f(t_n + α_i h, (n h y_n + α_i h Z_n + h² ∑_{j=1}^{i−1} β_{ij} K_j)/(nh + α_i h))
    = −(t_n + α_i h) f(t_n + α_i h, (n y_n + α_i Z_n + h ∑_{j=1}^{i−1} β_{ij} K_j)/(n + α_i)).   (6.13)

So, finally, we can rewrite equation (6.12) using equation (6.13) as

(n + 1) y_{n+1} = n y_n + Z_n + h Σ_{j=1}^{s−1} a_j K_j + t_n(h)

Z_{n+1} = Z_n + h Σ_{j=1}^{s−1} b_j K_j + t′_n(h)        (6.14)

with

K_i = −(t_n + α_i h) f(t_n + α_i h, (n y_n + α_i Z_n + h Σ_{j=1}^{i−1} β_{ij} K_j) / (n + α_i)).        (6.15)

This is consistent with the form of the method in [10]. Now if we drop the error terms from equation (6.14) and replace the exact values of y_n, Z_n, K_j with their computed approximations (denoted by the same symbols below), then we obtain

(n + 1) y_{n+1} = n y_n + Z_n + h Σ_{j=1}^{s−1} a_j K_j

Z_{n+1} = Z_n + h Σ_{j=1}^{s−1} b_j K_j        (6.16)

with

K_i = −(t_n + α_i h) f(t_n + α_i h, (n y_n + α_i Z_n + h Σ_{j=1}^{i−1} β_{ij} K_j) / (n + α_i)).        (6.17)

Now for our case, which is the simplest case s = 2 (a 1-stage, order-2 method), we have a_1 = 1/2, b_1 = 1, and α_1 = 1/2, and the Nyström method takes the form

(n + 1) y_{n+1} = n y_n + Z_n + (h/2) K_1

Z_{n+1} = Z_n + h K_1        (6.18)

with

K_1 = −(t_n + h/2) f(t_n + h/2, (n y_n + (1/2) Z_n) / (n + 1/2)).        (6.19)
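As a sanity check on equations (6.18)-(6.19), the following Python sketch (illustrative, not the thesis's Mathematica code of Appendix A) implements the recursion exactly as written. Note that Z = (ty)′, so the correct starting value is Z(0) = y(0) = 1, not y′(0):

```python
import math

def nystrom_lane_emden(m, h, N):
    """1-stage, order-2 Nystrom scheme (6.18)-(6.19) for the Lane-Emden
    equation y'' + (2/t) y' + y^m = 0, y(0) = 1, y'(0) = 0."""
    y, Z = 1.0, 1.0                 # Z = (t y)', so Z(0) = y(0) = 1
    for n in range(N):              # n plays the same role as in (6.18)
        t = n * h
        u = (n * y + 0.5 * Z) / (n + 0.5)   # inner argument of f in (6.19)
        K1 = -(t + 0.5 * h) * u ** m        # here f(t, y) = y^m
        y = (n * y + Z + 0.5 * h * K1) / (n + 1)
        Z = Z + h * K1
    return y

# For m = 1 the exact solution is sin(t)/t; one step with h = 1 reproduces
# the first row of Table 6.2.1, and two steps with h = 1/2 the second row.
print(nystrom_lane_emden(1, 1.0, 1))   # 0.75
print(nystrom_lane_emden(1, 0.5, 2))   # 0.8203125
```

Halving h repeatedly and comparing the result at t = 1 with sin(1) reproduces the order-2 behavior reported in Table 6.2.1.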

Example 6.2.1. Consider the Lane-Emden equation for n = 1:

y″(t) + (2/t) y′(t) + y(t) = 0,  t ∈ (0, 1],

y(0) = 1,  y′(0) = 0.

The exact solution of this initial value problem is y(t) = sin(t)/t, and at t = 1, y(1) = 0.84147098480790. We use this exact solution to verify the order (see Table 6.2.1) of the Nyström method of order 2. Table 6.2.1 provides experimental evidence that the 1-stage Nyström method is of order 2.


Table 6.2.1. Order of error p = (ln |E(2h)| − ln |E(h)|)/ln 2 for y″(t) + (2/t) y′(t) + y = 0, t ∈ (0, 1], with y(0) = 1, y′(0) = 0 and exact solution y(t) = sin(t)/t, for the Nyström method.

No. of steps   Approximate solution   Error                  Order
     1         0.75000000000000       0.09147098480790
     2         0.82031250000000       0.02115848480790       2.11207787
     4         0.83627414703369       0.00519683777421       2.02553039
     8         0.84017739020315       0.00129359460474       2.00624846
    16         0.84114793432060       0.00032305048730       2.00155401
    32         0.84139024390356       0.00008074090434       2.00038800
    64         0.84145080093849       0.00002018386941       2.00009697
   128         0.84146593892533       5.04588257033×10^−6    2.00002424
   256         0.84146972334255       1.26146534341×10^−6    2.00000606
   512         0.84147066944189       3.15366004089×10^−7    2.00000152
  1024         0.84147090596641       7.88414816766×10^−8    2.00000035
  2048         0.84147096509753       1.97103703359×10^−8    2.00000001
  4096         0.84147097988030       4.92760110493×10^−9    1.99999751
  8192         0.84147098357600       1.23190058154×10^−9    1.99999964
 16384         0.84147098449991       3.07982417347×10^−10   1.99996594
 32768         0.84147098473088       7.70143948614×10^−11   1.99964796
 65536         0.84147098478865       1.92434956858×10^−11   2.00075723
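The order column of Table 6.2.1 is obtained directly from consecutive errors via p = (ln |E(2h)| − ln |E(h)|)/ln 2; a quick Python sketch (errors copied from the first rows of the table) shows the computation:

```python
import math

# First three errors from Table 6.2.1 (h = 1, 1/2, 1/4).
errors = [0.09147098480790, 0.02115848480790, 0.00519683777421]

# p = (ln|E(2h)| - ln|E(h)|) / ln 2, i.e., log base 2 of the error ratio.
orders = [math.log(errors[k] / errors[k + 1]) / math.log(2)
          for k in range(len(errors) - 1)]
print(orders)  # approximately [2.11207787, 2.02553039], as in the table
```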

Example 6.2.2. For the Lane-Emden equation with n = 3,

y″(t) + (2/t) y′(t) + y³(t) = 0,  t ∈ (0, 1],

y(0) = 1,  y′(0) = 0,

an approximate closed-form solution has been derived by M. Beech [3], namely y(t) = sech(t/√3). In Table 6.2.2 we compare our approximate results from the backward Euler method (Benko et al. (2009) [4] report the same experimental approximation for the BEM) and the Nyström method to Beech's approximation for h = 0.025.


Table 6.2.2. Comparison of the backward Euler method and the Nyström method with Beech's approximation for the Lane-Emden equation for n = 3 and h = 0.025.

  t     Backward Euler   Nyström method   Beech's approximation
 0.0    1.00000          1.00000          1.00000
 0.1    0.99792          0.99829          0.99834
 0.2    0.99257          0.99332          0.99337
 0.3    0.98403          0.98515          0.98519
 0.4    0.97248          0.97391          0.97391
 0.5    0.95811          0.95979          0.95973
 0.6    0.94116          0.94302          0.94286
 0.7    0.92190          0.92387          0.92355
 0.8    0.90061          0.90262          0.90206
 0.9    0.87758          0.87957          0.87868
 1.0    0.85312          0.85501          0.85372
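The backward Euler column above can be reproduced with a short Python sketch (illustrative; the thesis's actual implementation is the Mathematica code in Appendix A). The singular coefficient 2/t causes no difficulty because the implicit stage is evaluated at t_{k+1} = (k+1)h > 0; the inner Newton iteration solves the scalar equation for z_{k+1}:

```python
def backward_euler_lane_emden(m, h, N):
    """Backward Euler for y'' + (2/t) y' + y^m = 0, y(0) = 1, y'(0) = 0,
    written as the first-order system y' = z, z' = -(2/t) z - y^m."""
    y, z = 1.0, 0.0
    for k in range(N):
        t1 = (k + 1) * h                  # implicit stage at t_{k+1} > 0
        w = z                             # Newton iterate for z_{k+1}
        for _ in range(50):
            yw = y + h * w                # y_{k+1} implied by w
            phi = w - z + h * ((2.0 / t1) * w + yw ** m)
            dphi = 1.0 + 2.0 * h / t1 + m * h * h * yw ** (m - 1)
            w_new = w - phi / dphi
            if abs(w_new - w) < 1e-12:
                w = w_new
                break
            w = w_new
        z = w
        y = y + h * z
    return y

print(backward_euler_lane_emden(3, 0.025, 40))  # close to 0.85312 (t = 1 row)
```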

Example 6.2.3. For the nonlinear Lane-Emden equation with n = 1.5,

y″(t) + (2/t) y′(t) + y^{1.5}(t) = 0,

y(0) = 1,  y′(0) = 0,

we do not know a closed-form exact solution, but in [11] W. Fowler and F. Hoyle derive an approximate closed-form solution y(t) = e^{−t²/6}. In Table 6.2.3 we compare our results from the backward Euler method and the Nyström method with this approximation for h = 0.025.


Table 6.2.3. Comparison of the backward Euler method and the Nyström method with Fowler and Hoyle's approximation for the Lane-Emden equation for n = 1.5 and h = 0.025.

  t     Backward Euler   Nyström method   Fowler and Hoyle's approximation
 0.0    1.00000          1.00000          1.00000
 0.1    0.99792          0.99828          0.99834
 0.2    0.99253          0.99330          0.99336
 0.3    0.98389          0.98505          0.98511
 0.4    0.97208          0.97360          0.97369
 0.5    0.95721          0.95905          0.95919
 0.6    0.93940          0.94154          0.94177
 0.7    0.91883          0.92120          0.92158
 0.8    0.89566          0.89822          0.89883
 0.9    0.87009          0.87279          0.87372
 1.0    0.84233          0.84512          0.84648
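The two closed-form approximations used in Tables 6.2.2 and 6.2.3 are easy to evaluate directly; this small Python check (illustrative; the function names are ours) reproduces their t = 1 entries:

```python
import math

def beech(t):
    """Beech's approximation for the n = 3 polytrope: sech(t / sqrt(3))."""
    return 1.0 / math.cosh(t / math.sqrt(3.0))

def fowler_hoyle(t):
    """Fowler and Hoyle's approximation for n = 1.5: exp(-t^2 / 6)."""
    return math.exp(-t * t / 6.0)

print(round(beech(1.0), 5))         # 0.85372, last row of Table 6.2.2
print(round(fowler_hoyle(1.0), 5))  # 0.84648, last row of Table 6.2.3
```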

Results using the backward Euler method for the last two test problems were also obtained by Benko et al. (2009) [4], and are in agreement with the backward Euler results generated here. The observed order is as theoretically established in [4] for the BEM on problems of this type. The Nyström results are generally in closer agreement with the approximate closed-form solutions, as it is a higher-order method.


CHAPTER 7

Conclusion and Future Work

Numerical methods are of central importance in mathematics, and differential equations play a vital role in the applied sciences and engineering. A wide variety of problems arise from population dynamics, astrophysics, fluid dynamics, mathematical biology, engineering, and other areas. Although some of these equations can be solved analytically, numerical methods are necessary whenever an exact solution cannot be obtained; for such problems they are the only option for approximating the solution, and they are powerful tools for linear and nonlinear ordinary differential equations as well as for systems of differential equations. In this thesis, our focus was on using numerical methods to solve initial value problems for ODEs, with some discussion of PDEs as well. We began with basic concepts of ordinary differential equations and discussed a number of real-life application problems, such as the logistic differential equation, the Lotka-Volterra predator-prey equations of population dynamics, the Lane-Emden equations of astrophysics, and the harmonic oscillator of classical mechanics. We also showed how to discretize partial differential equations by semi-discretization and examined the nature of the resulting systems of ODEs, discretizing homogeneous and nonhomogeneous heat equations for different types of initial and boundary conditions.


We have tried to present the fundamental concepts of a range of numerical methods, from basic to more sophisticated ones. We discussed the difference between explicit and implicit methods, compared methods of different orders (first, second, and fourth), and weighed the advantages and disadvantages of the various methods according to their order and their explicit or implicit character. We emphasized how important implicit methods are for stiff differential equations and stiff systems. More importantly, we compared the results of explicit and implicit methods, and of methods of different orders, using graphs of the approximate solution together with the analytical solution when it is known. We showed that there are problems for which explicit methods cannot be used and for which many implicit methods are also inapplicable. One such problem is the Lane-Emden equation, a singular second-order differential equation, for which an implicit method requiring a function evaluation at the left-hand endpoint in the very first step is not applicable. For the Lane-Emden equation we used the backward Euler method, which is a first-order method, and the second-order Nyström method. Stiffness of differential equations and stability of the methods were also discussed in the thesis. We showed how difficult it can be to handle a stiff problem numerically, and consequently we used a combination of two methods, switching back and forth with variable step sizes between one method that is well suited to stiff problems and another that works economically on non-stiff problems; we used the Mathematica software to handle this issue. Finally, we exhibited the relation between step size and the order of a method for methods of different orders. In particular, we used the errors of a method to estimate its expected order.

Future work could include considering the use of some of these numerical meth-

ods on broader classes of differential equations, especially ones involving different

types of singularities.


APPENDIX A

Coding of Various Numerical Methods

[a] Code of the backward Euler method for a first-order differential equation dy/dt = f(t, y), using the Mathematica software.

f[t_, y_] := f(t, y);
g[t_, y_] := fy(t, y);   (* fy denotes df/dy *)
y0[t_] := y(t);          (* exact solution *)
a = Input["Input the value of a"];
b = Input["Input the value of b"];
h = Input["Input step size"];
M = (b - a)/h;
MM = 10;
tol = 10^-6;
w = α;
t = 0;
i = 0;
Print["Result from successive iterations"];
Print[" i ", " ", "ti", " ", "wi", " ", "y[ti]", " ", "Error"];
Print[PaddedForm[i, 5], PaddedForm[t, {5, 3}], PaddedForm[w, {15, 12}], PaddedForm[y0[t], {15, 12}]];
j = 1;
Do[i = i + 1;
  k1 = w;
  w0 = k1;
  COUNTER = 0;
  j = 1;
  While[COUNTER < 1,
    w = w0 - (w0 - h*f[t + h, w0] - k1)/(1 - h*g[t + h, w0]);   (* Newton step *)
    If[Abs[Abs[w0] - Abs[w]] < tol, COUNTER = 1, j = j + 1];
    If[j > MM, Break[], w0 = w]
  ];
  t = a + i*h;
  Print[PaddedForm[i, 5], PaddedForm[t, {5, 3}], PaddedForm[w, {15, 12}], PaddedForm[y0[t], {15, 12}], PaddedForm[Abs[w - y0[t]]]];
  w0 = w, {M}]

[b] Code of the implicit trapezoidal method for a first-order differential equation dy/dt = f(t, y), using the Mathematica software.

f[t_, y_] := f(t, y);
g[t_, y_] := fy(t, y);
y0[t_] := y(t);
a = Input["Input the value of a"];
b = Input["Input the value of b"];
h = Input["Input step size"];
M = (b - a)/h;
MM = 10;
tol = 10^-6;
w = α;
t = 0;
i = 0;
Print["Result from successive iterations"];
Print[" i ", " ", "ti", " ", "wi", " ", "y[ti]", " ", "Error"];
Print[PaddedForm[i, 5], PaddedForm[t, {5, 3}], PaddedForm[w, {15, 12}], PaddedForm[y0[t], {15, 12}]];
j = 1;
Do[i = i + 1;
  k1 = w + (h/2)*f[t, w];
  w0 = k1;
  COUNTER = 0;
  j = 1;
  While[COUNTER < 1,
    w = w0 - (w0 - (h/2)*f[t + h, w0] - k1)/(1 - (h/2)*g[t + h, w0]);   (* Newton step *)
    If[Abs[Abs[w0] - Abs[w]] < tol, COUNTER = 1, j = j + 1];
    If[j > MM, Break[], w0 = w]
  ];
  t = a + i*h;
  Print[PaddedForm[i, 5], PaddedForm[t, {5, 3}], PaddedForm[w, {15, 12}], PaddedForm[y0[t], {15, 12}], PaddedForm[Abs[w - y0[t]]]];
  w0 = w, {M}]

[c] Code of the implicit trapezoidal method for a system of two equations, using the Mathematica software.

f1[t_, x_, y_] := f1(t, x, y);
f2[t_, x_, y_] := f2(t, x, y);
g11[t_, x_, y_] := f1x(t, x, y);
g12[t_, x_, y_] := f1y(t, x, y);
g21[t_, x_, y_] := f2x(t, x, y);
g22[t_, x_, y_] := f2y(t, x, y);
a = Input["Input the value of a"];
b = Input["Input the value of b"];
h = Input["Input step size"];
M = (b - a)/h;
MM = 10;
tol = 10^-6;
u = x0;
v = y0;
t = 0;
i = 0;
Print["Result from successive iterations"];
Print["i", " ", "ti", " ", "ui", " ", "vi"];
Print[i, " ", t, " ", u, " ", v];
j = 1;
Do[i = i + 1;
  k1 = u + (h/2)*f1[t, u, v];
  k2 = v + (h/2)*f2[t, u, v];
  u0 = k1; v0 = k2;
  COUNTER = 0;
  j = 1;
  While[COUNTER < 1,
    (* determinant of the 2x2 Newton matrix I - (h/2) J *)
    den = (1 - (h/2)*g11[t, u0, v0])*(1 - (h/2)*g22[t, u0, v0]) - (h^2/4)*g12[t, u0, v0]*g21[t, u0, v0];
    u = u0 - ((1 - (h/2)*g22[t, u0, v0])*(u0 - k1 - (h/2)*f1[t, u0, v0]) + (h/2)*g12[t, u0, v0]*(v0 - k2 - (h/2)*f2[t, u0, v0]))/den;
    v = v0 - ((1 - (h/2)*g11[t, u0, v0])*(v0 - k2 - (h/2)*f2[t, u0, v0]) + (h/2)*g21[t, u0, v0]*(u0 - k1 - (h/2)*f1[t, u0, v0]))/den;
    If[Abs[Abs[u0] - Abs[u]] < tol, COUNTER = 1, j = j + 1];
    If[j > MM, Break[], u0 = u; v0 = v]
  ];
  t = a + i*h;
  Print[i, " ", t, " ", u, " ", v];
  u0 = u; v0 = v, {M}];

[d] Code of the backward Euler method for a system of two equations converted from a second-order differential equation, using the Mathematica software.

f1[t_, x_, y_] := f1(t, x, y);
f2[t_, x_, y_] := f2(t, x, y);
g11[t_, x_, y_] := f1x(t, x, y);
g12[t_, x_, y_] := f1y(t, x, y);
g21[t_, x_, y_] := f2x(t, x, y);
g22[t_, x_, y_] := f2y(t, x, y);
g[t] = exact solution;
Array[array, 500];
a = Input["Input the value of a"];
b = Input["Input the value of b"];
h = Input["Input the step size"];
M = (b - a)/h;
MM = 10;
tol = 10^-6;
u = x0;
array[0] = u;
v = y0;
t = 0;
i = 0;
Print["Result from successive iterations"];
Print["i", " ", "ti", " ", "ui", " ", "vi", " ", "exact solution", " ", "Error"];
Print[PaddedForm[i, 3], PaddedForm[t, {6, 4}], PaddedForm[u, {10, 6}], PaddedForm[v, {10, 6}], PaddedForm[g[0], {17, 12}], PaddedForm[Abs[u - g[0]], {15, 12}]];
j = 1;
Do[i = i + 1;
  k1 = u; k2 = v;
  u0 = k1; v0 = k2;
  COUNTER = 0;
  j = 1;
  While[COUNTER < 1,
    (* determinant of the 2x2 Newton matrix I - h J *)
    den = (1 - h*g11[t + h, u0, v0])*(1 - h*g22[t + h, u0, v0]) - h^2*g12[t + h, u0, v0]*g21[t + h, u0, v0];
    u = u0 - ((1 - h*g22[t + h, u0, v0])*(u0 - h*f1[t + h, u0, v0] - k1) + h*g12[t + h, u0, v0]*(v0 - h*f2[t + h, u0, v0] - k2))/den;
    v = v0 - ((1 - h*g11[t + h, u0, v0])*(v0 - h*f2[t + h, u0, v0] - k2) + h*g21[t + h, u0, v0]*(u0 - h*f1[t + h, u0, v0] - k1))/den;
    If[Abs[Abs[u0] - Abs[u]] < tol, COUNTER = 1, j = j + 1];
    If[j > MM, Break[], u0 = u; v0 = v]
  ];
  array[i] = u;
  t = a + i*h;
  Print[PaddedForm[i, 3], PaddedForm[t, {6, 4}], PaddedForm[array[i], {10, 6}], PaddedForm[v, {10, 6}], PaddedForm[g[t], {17, 12}], PaddedForm[Abs[array[i] - g[t]], {15, 12}]];
  u0 = u; v0 = v, {M}];

[e] Code of the Nyström method for the singular second-order Lane-Emden equation, using the Mathematica software.

m = Input["Input the value of m"];
f[t_, y_] := y^m;
Array[timee, 1000];
Array[approxx, 1000];
timee[0] = 0;
approxx[0] = 1.0;
t = 0;
i = 0;
a = Input["Input the value of a"];
b = Input["Input the value of b"];
h = Input["Input step size"];
M = (b - a)/h;
n = 0;
y0 = Input["Input the value of y(a)"];
z0 = Input["Input the value of Z(a)"];   (* Z = (t y)', so Z(0) = y(0) *)
Print["Approximate result for Nyström method for m = ", m];
Print[" i ", " t ", " approximate solution "];
Print[PaddedForm[i, 4], PaddedForm[t, {6, 4}], PaddedForm[y0, {15, 10}]];
Do[i = i + 1;
  k = -(t + h/2)*f[t + h/2, (n*y0 + 0.5*z0)/(n + 0.5)];
  y = (n*y0 + z0 + 0.5*h*k)/(n + 1);
  z = z0 + h*k;
  y0 = y; z0 = z;
  t = a + i*h;
  timee[i] = t;
  approxx[i] = y0;
  n = n + 1;
  Print[PaddedForm[i, 4], PaddedForm[timee[i], {6, 4}], PaddedForm[approxx[i], {15, 10}]], {M}];

[f] Code of the backward Euler method to verify the order of the method, using the Mathematica software.

f[t_, y_] := f(t, y);
g[t_, y_] := fy(t, y);
y0[t_] := y(t);
Array[Error, 100];
Array[approx, 100];
Array[order, 100];
order[0] = 0;
a = Input["Input the value of a"];
b = Input["Input the value of b"];
d = 0;
Print["Result from the successive iterations"];
Print[" i ", " t ", " w ", " y[t] ", " Error ", " order "];
repeat = Input["Enter no of repetitions"];
Do[h = 1.0/2^d;
  M = (b - a)/h;
  MM = 10;
  tol = Input["Input tolerance"];
  w = Input["Input α"];
  t = 0;
  i = 0;
  Do[i = i + 1;
    k1 = w;
    w0 = k1;
    COUNTER = 0;
    j = 1;
    While[COUNTER < 1,
      w = w0 - (w0 - h*f[t + h, w0] - k1)/(1 - h*g[t + h, w0]);
      If[Abs[Abs[w0] - Abs[w]] < tol, COUNTER = 1, j = j + 1];
      If[j > MM, Break[], w0 = w]
    ];
    t = a + i*h;
    w0 = w, {M}];
  approx[d] = w;
  Error[d] = Abs[approx[d] - y0[t]];
  d = d + 1, {repeat}];
dd = 0;
Do[order[dd + 1] = (Log[Abs[Error[dd]]] - Log[Abs[Error[dd + 1]]])/Log[2];
  dd = dd + 1, {d}];
cc = 0;
Do[Print[PaddedForm[2^cc, 5], PaddedForm[t, {5, 3}], PaddedForm[approx[cc], {20, 14}], PaddedForm[y0[t], {20, 14}], PaddedForm[Error[cc], {15, 12}], PaddedForm[order[cc], {20, 10}]];
  cc = cc + 1, {d - 1}]

[g] Code of the Runge-Kutta method to find the approximate solution and error and to verify the expected order of the method, using the Mathematica software.

f[t_, y_] := f(t, y);
y0[t_] := y(t);
Array[Error, 100];
Array[approx, 100];
Array[order, 100];
order[0] = 0;
a = Input["Input the value of a"];
b = Input["Input the value of b"];
ynot = Input["Input the value of y(0)"];
d = 0;
Print["Result from the successive iterations"];
Print[" i ", " t ", " w ", " y[t] ", " Error ", " order "];
repeat = Input["Enter no of repetitions"];
Do[h = 1.0/2^d;
  M = (b - a)/h;
  w0 = ynot;
  t = 0;
  i = 0;
  Do[i = i + 1;
    k1 = h*f[t, w0];
    k2 = h*f[t + h/2, w0 + k1/2];
    k3 = h*f[t + h/2, w0 + k2/2];
    k4 = h*f[t + h, w0 + k3];
    w = w0 + (k1 + 2*k2 + 2*k3 + k4)/6;
    t = a + i*h;
    w0 = w, {M}];
  approx[d] = w;
  Error[d] = Abs[approx[d] - y0[t]];
  d = d + 1, {repeat}];
dd = 0;
Do[order[dd + 1] = (Log[Abs[Error[dd]]] - Log[Abs[Error[dd + 1]]])/Log[2];
  dd = dd + 1, {d}];
cc = 0;
Do[Print[PaddedForm[2^cc, 5], PaddedForm[t, {8, 3}], PaddedForm[approx[cc], {25, 14}], PaddedForm[y0[t], {25, 14}], PaddedForm[Error[cc], {25, 14}], PaddedForm[order[cc], {20, 8}]];
  cc = cc + 1, {d - 1}]


[h] Code of the implicit trapezoidal method for a differential equation whose exact solution is not given, where we estimate the order from successive approximations and optimize the step size, using the Mathematica software.

f[t_, y_] := f(t, y);
g[t_, y_] := fy(t, y);
Array[barry, 100];
Array[order, 100];
order[0] = 0;
order[1] = 0;
a = Input["Input the value of a"];
b = Input["Input the value of b"];
Dos = Input["Input the number of repetitions"];
Print["Result from the successive iterations"];
Print[" i ", " t ", " w ", " order "];
k = 0;
Do[h = 1.0/2^k;
  M = (b - a)/h;
  MM = 10;
  tol = Input["Input tolerance"];
  w = Input["Input α"];
  t = 1;
  i = 0;
  j = 1;
  Do[i = i + 1;
    k1 = w + (h/2)*f[t, w];
    k2 = w;
    w0 = k1;
    COUNTER = 0;
    j = 1;
    While[COUNTER < 1,
      w = w0 - (w0 - (h/2)*f[t + h, w0] - k1)/(1 - (h/2)*g[t + h, w0]);
      If[Abs[Abs[w0] - Abs[w]] < tol, COUNTER = 1, j = j + 1];
      If[j > MM, Break[], w0 = w]
    ];
    t = a + i*h;
    w0 = w, {M}];
  barry[k] = w;
  k = k + 1, {Dos}];
tt = 2;
Do[order[tt] = Log[Abs[(barry[tt - 1] - barry[tt - 2])/(barry[tt] - barry[tt - 1])]]/Log[2];
  tt = tt + 1, {k - 2}];
z = 0;
Do[Print[PaddedForm[2^z, 8], PaddedForm[t, {7, 3}], PaddedForm[barry[z], {14, 12}], PaddedForm[order[z], {8, 6}]];
  z = z + 1, {k}]


BIBLIOGRAPHY

[1] Abia, L. M., López-Marcos, J. C., and Martínez, J. (1996). Blow-up for semidiscretizations of reaction-diffusion equations. Applied Numerical Mathematics, 20(1-2), 145-156.

[2] Ascher, U. M. (2008). Numerical Methods for Evolutionary Differential Equations. Philadelphia: SIAM.

[3] Beech, M. (1987). An approximate solution for the polytrope n = 3. Astrophysics and Space Science, 132(2), 393-396.

[4] Benko, D., Biles, D. C., Robinson, M. P., and Spraker, J. S. (2009). Numerical approximation for singular second order differential equations. Mathematical and Computer Modelling, 49, 1109-1114.

[5] Benko, D., Biles, D. C., Robinson, M. P., and Spraker, J. S. (2008). Nyström methods and singular second order differential equations. Computers and Mathematics with Applications, 56(9), 1075-1980.

[6] Bhatia, R. (1997). Matrix Analysis. New York: Springer.

[7] Burden, R. L., and Faires, J. D. (2011). Numerical Analysis, Ninth Edition. Boston, MA: Brooks/Cole, Cengage Learning.

[8] Burrage, K., and Butcher, J. C. (1979). Stability criteria for implicit Runge-Kutta methods. SIAM Journal on Numerical Analysis, 16(1), 46-57.

[9] Butcher, J. C. (2003). Numerical Methods for Ordinary Differential Equations. Chichester, West Sussex: John Wiley & Sons Ltd.

[10] Chawla, M. M., Jain, M. K., and Subramanian, R. (1990). The application of explicit Nyström methods to singular second order differential equations. Computers and Mathematics with Applications, 19(12), 47-51.

[11] Fowler, W. A., and Hoyle, F. (1964). Neutrino processes and pair formation in massive stars and supernovae. Astrophysical Journal Supplement, 9, 201-319.

[12] Henrici, P. (1962). Discrete Variable Methods in Ordinary Differential Equations. New York: John Wiley & Sons, Inc.

[13] Hubbard, J. H., and West, B. H. (1991). Differential Equations: A Dynamical Systems Approach, Part I: Ordinary Differential Equations. New York: Springer.

[14] Lambert, J. D. (1991). Numerical Methods for Ordinary Differential Systems. Chichester, West Sussex: John Wiley & Sons Ltd.

[15] Makinson, G. J. (1968). Stable high order implicit methods for the numerical solution of systems of differential equations. The Computer Journal, 11(3), 305-310.

[16] Nagle, R. K., Saff, E. B., and Snider, A. D. (2012). Fundamentals of Differential Equations, Eighth Edition. Boston, MA: Pearson Education, Inc.

[17] Plato, R. (2003). Concise Numerical Mathematics. Providence, Rhode Island: American Mathematical Society.

[18] Wolfram Research, Inc. (2015). Mathematica, Version 10.1. Champaign, IL.

[19] Zill, D. G. (2013). A First Course in Differential Equations with Modeling Applications, Tenth Edition. Boston, MA: Brooks/Cole, Cengage Learning.
