
Notes: Introduction to Numerical Methods

J.C. Chrispell
Department of Mathematics
Indiana University of Pennsylvania
Indiana, PA, 15705, USA

E-mail: [email protected]
http://www.math.iup.edu/~jchrispe

February 22, 2017


Preface

These notes will serve as an introduction to numerical methods for scientific computing. From the IUP course catalog, the course will contain:

Algorithmic methods for function evaluation, roots of equations, solutions to systems of linear equations, function interpolation, numerical differentiation; and use of spline functions for curve fitting. Focus on managing and measuring errors in computation. Also offered as COSC 250; either COSC 250 or MATH 250 may be substituted for the other and may be used interchangeably for D or F repeats but may not be counted for duplicate credit.

Material presented in the course will tend to follow the presentation of Cheney and Kincaid in their text Numerical Mathematics and Computing (seventh edition) [2]. Relevant course material will start in chapter 1 of the text, and selected chapters will be covered as time in the course permits. I will supplement the Cheney and Kincaid text with additional material from other popular books on numerical methods:

• Scientific Computing: An Introductory Survey by Heath [3]

• Numerical Analysis by Burden and Faires [1]

My apologies in advance for any typographical errors or mistakes that are present in this document. That said, I will do my very best to update and correct the document if I am made aware of these inaccuracies.

-John Chrispell


Contents

1 Introduction and Review
    1.1 Errors
        1.1.1 Accurate and Precise
        1.1.2 Horner's Algorithm
    1.2 Floating Point Representation
    1.3 Activities
    1.4 Taylor's Theorem
        1.4.1 Taylor's Theorem using h
    1.5 Gaussian Elimination
        1.5.1 Assessment of Algorithm
    1.6 Improving Gaussian Elimination

2 Methods for Finding Zeros
    2.0.1 Bisection Algorithm
    2.0.2 Newton's Method

3 Polynomial Interpolation
    3.0.1 Highlights

4 Numerical Integration
    4.1 Trapezoid Rule
        4.1.1 Newton-Cotes Quadrature
        4.1.2 Gaussian Quadrature

5 The Heat Equation
    5.1 Numerical Solution
        5.1.1 Taylor's Theorem for Approximations
        5.1.2 Discretizing
    5.2 Implicit Time Stepping
        5.2.1 Tri-Diagonal Systems
    5.3 Order of Accuracy

6 Initial Value Problems
    6.1 Second Order Runge-Kutta Method
    6.2 Fourth Order Runge-Kutta Method

7 Appendices

Bibliography


Chapter 1

Introduction and Review

“I have never listened to anyone who criticized my taste in space travel, sideshows or gorillas. When this occurs, I pack up my dinosaurs and leave the room.”

− Ray Bradbury, Zen in the Art of Writing

What is Scientific Computing?

The major theme of this class will be solving scientific problems using computers. Many of the examples considered will be smaller parts that can be thought of as tools for implementing or examining larger computational problems of interest.

We will take advantage of replacing a difficult mathematical problem with simpler problems that are easier to handle. Using the smaller parts, insight will be gained into the larger problem of interest. In this class, the methods and algorithms underlying the computational tools you already use will be examined.

Scientific Computing: Deals with computing continuous quantities in science and engineering (time, distance, velocity, temperature, density, pressure, stress) that cannot be solved exactly or analytically in a finite number of steps. Typically we are numerically solving problems that involve integrals, derivatives, and nonlinearities.

Numerical Analysis: An area of mathematics where concern is placed on the design and implementation of algorithms to solve scientific problems.

In general for solving a problem you will:

• Develop a model (expressed by equations) for a phenomenon or system of interest.

• Find/Develop an algorithm to solve the system.

• Develop a computational implementation.


• Run your implementation.

• Post process your results (graphs, tables, charts).

• Interpret and validate your results.

Problems are well posed provided:

1. A solution to the problem of interest exists.

2. The solution is unique.

3. The solution depends continuously on the data.

The last item here is important, as problems that are ill conditioned have large changes in output with small changes in the initial conditions or data. This can be troubling for numerical methods, and is not always avoidable.

In general we will use some standard techniques to attack the problems presented: replacing an unsolvable problem by a problem that is “close to it” in some sense, and then looking at the closely related solution.

• Replace infinite dimensional spaces with finite ones.

• Infinite processes with Finite processes:

  – Integrals with Sums
  – Derivatives with Finite Differences

• Nonlinear Problems with Linear Ones

• Complicated Functions with Simple Ones (polynomials).

• General Matrices with Simpler Matrices.

With all of this replacement and simplification, the sources of error and approximation need to be accounted for. How good is the approximated solution?

Significant Digits

The significant digits in a computation start with the leftmost nonzero digit, and end with the rightmost correct digit (including final zeros that are correct).

Example: Let's consider calculating the surface area of the Earth.

• The area of a sphere is: A = 4πr^2


• The radius of the Earth (r ≈ 6370 km).

• The value for π ≈ 3.141592653 rounded at some point.

• The numerical computation will be rounded at some point.

• All of these assumptions will come into play at some point.

• How many digits are significant?

Figure 1.0.1: Here the intersection of two nearly parallel lines is compared with an error range of size ε. Note that the closer the two lines are to parallel, the more ill conditioned finding the intersection becomes.

www.math.iup.edu/~jchrispe/MATH_250/eps_error.html

Example: Consider solving the following system of equations.

0.1243x + 0.2345y = 0.8723

0.3237x + 0.5431y = 0.9321

However, you can only keep three significant digits. Keeping only three significant digits in all computations yields an answer of

x ≈ −29.0 and y ≈ 19.0

Solving the problem using Sage:

x ≈ −30.3760666260334 and y ≈ 19.8210877680851

Note that the example in the Cheney text is far more dramatic, and the potential for error when truncating grows dramatically if the two lines of interest are nearly parallel.
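For comparison, a short MATLAB sketch (my own) that solves the same system in full double precision:

% Solve the 2x2 system without truncating to three significant digits.
A = [0.1243 0.2345; 0.3237 0.5431];
b = [0.8723; 0.9321];
xy = A\b;                                  % MATLAB's backslash linear solver
fprintf('x = %.13f  y = %.13f\n', xy(1), xy(2));

Running this reproduces the Sage values above to the digits shown.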


1.1 Errors

If two values are considered, one taken to be true and the other an approximation, then the error is given by:

Error = True − Approximation

The absolute error of using the approximation is

Absolute Error = |True − Approximation|

and we denote

Relative Error = |True − Approximation| / |True|

• The Relative Error is usually more useful than the Absolute Error.

• The Relative Error is not defined if the true value we are looking for is zero.

Example: Consider the case where we are approximating and have:

True = 12.34 and Approximation = 12.35

Here we have the following:

Error = −0.01
Absolute Error = 0.01
Relative Error = 0.0008103727714748612

Note that the approximation has 4 significant digits.

Example: Consider the case where we are approximating and have:

True = 0.001 and Approximation = 0.002

Here we have the following:

Error = −0.001
Absolute Error = 0.001
Relative Error = 1

Here relative error is a much better indicator of how well the approximation fits the true value.


1.1.1 Accurate and Precise

When a computation is accurate to n decimal places, we can trust n digits to the right of the decimal place. Similarly, when a computation is said to be accurate to n significant digits, the computation is meaningful for n places beginning with the leftmost nonzero digit given.

• The classic example here is a meter stick. The user can consider it accurate to the level of graduation on the meter stick.

• A second example would be the mileage on your car. It usually displays in tenth-of-a-mile increments. You could use your car to measure distances accurate to within two tenths of a mile.

Precision is a different game. Consider adding the following values:

3.4 + 5.67 = 9.07

The second digit in 3.4 could be from rounding any of the following:

3.41, 3.4256, 3.44, 3.36, 3.399, 3.38

to two significant digits. So there can only be two significant digits in the answer. The results from multiplication and division can be even more misleading.

Computers will in some cases allow a user to decide if they would like to use rounding or chopping. Note there may be several different schemes for rounding values (especially when it comes to rounding values ending with a 5).

1.1.2 Horner’s Algorithm

In general it is a good idea to complete most computations using a minimum number of floating point operations. Consider evaluating polynomials. For example, given

f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1} + a_n x^n

it would not be wise to compute x^2, then x^3, and so on. Writing the polynomial as:

f(x) = a_0 + x(a_1 + x(a_2 + x(\cdots x(a_{n-1} + x a_n) \cdots )))

will efficiently evaluate the polynomial without ever having to use exponentiation. Note that efficient evaluation of polynomials in this manner is Horner's algorithm and is accomplished using synthetic division.
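For instance, a minimal MATLAB sketch of Horner's algorithm (the function name and coefficient ordering are my own choices):

function y = horner_eval(a, x)
% HORNER_EVAL evaluates a(1) + a(2)*x + ... + a(n+1)*x^n by Horner's algorithm.
% a is a vector of coefficients [a_0, a_1, ..., a_n], lowest degree first.
y = a(end);
for k = (length(a)-1):-1:1
    y = y*x + a(k);        % one multiply and one add per coefficient
end
end

For example, horner_eval([1 2 3], 2) evaluates 1 + 2x + 3x^2 at x = 2 and returns 17, using no exponentiation.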


1.2 Floating Point Representation

Numbers when entered into a computational machine are typically broken into two parts:

• An integer portion.

• A fractional portion.

with these two parts being separated by a decimal point.

123.456, 0.0000123

A second form that is used is normalized scientific notation, or normalized floating-point representation. Here the decimal point is shifted so the number is written as a fraction multiplied by some power of 10, where the leading digit of the fraction is nonzero:

0.0000123 =⇒ 0.123 × 10^{-4}

Any decimal in the floating point system may be written in this manner:

x = ±0.d_1 d_2 d_3 \ldots × 10^n

with d_1 not equal to zero. More generally we write

x = ±r × 10^n with 1/10 ≤ r < 1.

Here r is the mantissa and n is the exponent. If we are looking at numbers in a binary system then

x = ±q × 2^n with 1/2 ≤ q < 1.

Computers work exactly like this; however, on a computer we have the issue of needing to use a finite word length (no more “. . .”).

This means a couple of things:

• No representation for irrational numbers.

• No representation for numbers that do not fit into a finite format.


Activity

Numbers that can be expressed on a computer are called its machine numbers, and they vary depending on the computational system being used. If we consider a binary computational system where numbers must be expressed using 'normalized' scientific notation in the form

x = ±(0.b_1 b_2 b_3)_2 × 2^{±k}

where the values of b_1, b_2, b_3, and k ∈ {0, 1},

what are all the possible numbers in this computational system?

What additional observations can be made about the system?

We shall consider here only the positive numbers:

(0.100)_2 × 2^{-1} = 1/4     (0.100)_2 × 2^0 = 1/2     (0.100)_2 × 2^1 = 1
(0.101)_2 × 2^{-1} = 5/16    (0.101)_2 × 2^0 = 5/8     (0.101)_2 × 2^1 = 5/4
(0.110)_2 × 2^{-1} = 3/8     (0.110)_2 × 2^0 = 3/4     (0.110)_2 × 2^1 = 3/2
(0.111)_2 × 2^{-1} = 7/16    (0.111)_2 × 2^0 = 7/8     (0.111)_2 × 2^1 = 7/4

• Note there is a hole in the number system near zero.

• Note there is also uneven spacing of the numbers we do have.

• Numbers smaller than the smallest representable number are considered underflow and are typically treated as zero.

• Numbers larger than the largest representable number are considered overflow and will typically throw an error.
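A short MATLAB sketch (my own illustration) that enumerates the positive numbers of this toy system:

% Enumerate the positive machine numbers x = (0.1 b2 b3)_2 * 2^(+/-k), k in {0,1}.
vals = [];
for b2 = 0:1
    for b3 = 0:1
        frac = 1/2 + b2/4 + b3/8;        % (0.1 b2 b3)_2, b1 = 1 since normalized
        for e = [-1 0 1]                 % exponents from +/-k with k in {0,1}
            vals(end+1) = frac * 2^e;    %#ok<SAGROW>
        end
    end
end
disp(sort(unique(vals)))                 % the 12 positive machine numbers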


For number representation on computers, the IEEE-754 standard has been adopted.

Precision      Bits   Sign   Exponent   Mantissa
Single          32      1        8         23
Double          64      1       11         52
Long Double     80      1       15         64

Note that

2^{-23} ≈ 1 × 10^{-7}
2^{-52} ≈ 2 × 10^{-16}
2^{-64} ≈ 5 × 10^{-20}

gives us the ballpark for machine precision when a computation is done using a given number of bits.
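These figures can be checked directly in MATLAB, where the built-in eps function returns the spacing of floating point numbers near 1:

eps('single')    % about 1.19e-07, i.e. 2^-23
eps              % about 2.22e-16, i.e. 2^-52 (double precision)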


1.3 Activities

The limit

e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n

defines the number e in calculus. Estimate e by taking the value of this expression for n = 8, 8^2, 8^3, . . . , 8^10. Compare with e obtained from the exponential function on your machine. Interpret the results.


1.4 Taylor’s Theorem

There are several useful forms of Taylor's Theorem, and it can be argued that it is the most important theorem for the study of numerical methods.

Theorem 1.4.1 If the function f possesses continuous derivatives of orders 0, 1, 2, . . . , (n+1) in a closed interval I = [a, b], then for any c and x in I,

f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(c)}{k!} (x - c)^k + E_{n+1}

where the error term E_{n+1} can be given in the form

E_{n+1} = \frac{f^{(n+1)}(\eta)}{(n+1)!} (x - c)^{n+1}.

Here η is a point that lies between c and x and depends on both.

Note we can use Taylor’s Theorem to come up with useful series expansions.

Example: Use Taylor’s Theorem to find a series expansion for ex.

Here we need to evaluate the nth derivative of e^x. We also need to pick a point of expansion, or value for c.

We will choose c to be zero, and recall that the derivative of e^x is such that

\frac{d}{dx} e^x = e^x.

Thus, for Taylor’s Theorem we need:

f(0) = e^0 = 1
f'(0) = e^0 = 1
f''(0) = e^0 = 1

I see a pattern!

So we then have:

e^x = \frac{f(0)}{0!} x^0 + \frac{f'(0)}{1!} x^1 + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots

    = \frac{1}{0!} + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots

    = \sum_{k=0}^{\infty} \frac{x^k}{k!}    for |x| < ∞.

Note we should be a little more careful here, and prove that the series truly does converge to e^x by using the full definition given in Taylor's Theorem.


In this case we have:

e^x = \sum_{k=0}^{n} \frac{x^k}{k!} + \frac{e^{\eta}}{(n+1)!} x^{n+1}    (1.4.1)

which incorporates the error term. We now look at values of x in some interval around the origin; consider −a ≤ x ≤ a. Then |η| ≤ a and we know

e^η ≤ e^a

The remainder or error term is then such that:

\lim_{n \to \infty} \left| \frac{e^{\eta}}{(n+1)!} x^{n+1} \right| ≤ \lim_{n \to \infty} \left| \frac{e^{a}}{(n+1)!} a^{n+1} \right| = 0

Then when the limit is taken of both sides of (1.4.1) it can be seen that:

e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}

Taylor's theorem can be useful for finding approximations to hard-to-compute values:

Example: Use the first five terms in a Taylor series expansion to approximate the value of e.

e ≈ 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} = 2.70833333333

Example: In the special case of n = 0, Taylor's theorem is known as the Mean Value Theorem.

Theorem 1.4.2 If f is a continuous function on the closed interval [a, b] and possesses a derivative at each point in the open interval (a, b), then

f(b) = f(a) + (b − a) f'(η)

for some η in (a, b).

Notice that this can be rearranged so that:

f'(η) = \frac{f(b) - f(a)}{b - a}

The right hand side here is an approximation of the derivative for any x ∈ (a, b).


1.4.1 Taylor’s Theorem using h

There is a more useful form of Taylor's Theorem:

Theorem 1.4.3 If the function f possesses continuous derivatives of orders 0, 1, 2, . . . , (n+1) in a closed interval I = [a, b], then for any x in I,

f(x + h) = f(x) + f'(x)h + \frac{1}{2} f''(x)h^2 + \frac{1}{6} f'''(x)h^3 + \cdots + E_{n+1}

         = \sum_{k=0}^{n} \frac{f^{(k)}(x)}{k!} h^k + E_{n+1}

where h is any value such that x + h is in I and where

E_{n+1} = \frac{f^{(n+1)}(\eta)}{(n+1)!} h^{n+1}

for some η between x and x + h.

Note that the error term E_{n+1} will depend on h in two ways:

• Explicitly, through the factor h^{n+1}.

• The point η generally depends on h.

Note that as h converges to zero, the error term converges to zero at a rate proportional to h^{n+1}. Thus, we typically write:

E_{n+1} = O(h^{n+1})

as h goes to zero. This is shorthand for:

|E_{n+1}| ≤ C|h|^{n+1}

where C is an upper bounding constant.

We additionally note that Taylor's Theorem in terms of h may be written down specifically for any value of n, and thus represents a family of theorems, each with a specific order of h approximation:

f(x + h) = f(x) + O(h)

f(x + h) = f(x) + f'(x)h + O(h^2)

f(x + h) = f(x) + f'(x)h + \frac{1}{2} f''(x)h^2 + O(h^3)


1.5 Gaussian Elimination

In the previous section we considered the numbers that are available for our use on a computer. We made note that there are many numbers (especially near zero) that are not machine numbers; when such a number arises in a computation, the closest available machine number is used instead, resulting in numerical roundoff error.

Let's now look at how this roundoff error can come into play when we are solving the familiar linear equation system:

Ax = b

The normal approach would be to compute A^{-1} and then use it to find x. However, there are other questions that can come into play:

• How do we store a large system of this form on a computer?

• How do we know that the answer we receive is correct?

• Can the algorithm we use fail?

• How long will it take to compute the answer?

• What is the operation count for computing the answer?

• Will the algorithm be unstable for certain systems of equations?

• Can we modify the algorithm to control instabilities?

• What is the best algorithm for the task at hand?

• Will there be matrix conditioning issues?

Let's start by considering the system of equations Ax = b with

A = \begin{pmatrix}
1 & 2 & 4 & \cdots & 2^{n-1} \\
1 & 3 & 9 & \cdots & 3^{n-1} \\
1 & 4 & 16 & \cdots & 4^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & (n+1) & (n+1)^2 & \cdots & (n+1)^{n-1}
\end{pmatrix}

and let the right hand side be such that

b_i = \sum_{j=1}^{n} A_{i,j}

is the sum of row i. Note that the solution to the system will then trivially be a column of ones. Here A is a well-known and 'poorly conditioned' Vandermonde matrix.


It may be useful to use the sum of a geometric series when coding this, so that any row i would look like:

\sum_{j=1}^{n} (1+i)^{j-1} x_j = \frac{(1+i)^n - 1}{i}

The following is pseudocode for a Gaussian elimination procedure, much like you would carry out by hand. Our goal will be to implement and test this in MATLAB.

Listing 1.1: Straight Gaussian Elimination

% Forward elimination.
for k = 1:(n-1)
    for i = (k+1):n
        xmult = A(i,k)/A(k,k);              % multiplier that zeros A(i,k)
        A(i,k) = xmult;                      % store the multiplier in place
        for j = (k+1):n
            A(i,j) = A(i,j) - xmult*A(k,j);
        end
        b(i,1) = b(i,1) - xmult*b(k,1);
    end
end

% Backward substitution.
x(n,1) = b(n,1)/A(n,n);
for i = (n-1):-1:1
    s = b(i,1);                              % use s to avoid shadowing sum()
    for j = (i+1):n
        s = s - A(i,j)*x(j,1);
    end
    x(i,1) = s/A(i,i);
end

Write a piece of code that implements this algorithm.


1.5.1 Assessment of Algorithm

In order to see how well our algorithm performs, the error can be considered. There are several ways of computing the error of a vector solution. The first is to consider a straightforward vector difference between the computed solution and the true solution:

e = x_h − x.

A second method, used when the true solution to a given problem is unknown, is to consider a residual vector:

r = A x_h − b

Note the residual vector will be all zeros when the true solution is obtained. In order to get a handle on the size of either the residual vector or the error vector, norms are often used.

A vector norm is any mapping from R^n to R that satisfies the following properties:

• ‖x‖ > 0 if x ≠ 0.

• ‖αx‖ = |α| ‖x‖.

• ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).

where x and y are vectors in R^n and α ∈ R.

Examples of vector norms include:

• The l_1 vector norm:

‖x‖_1 = \sum_{i=1}^{n} |x_i|

• The Euclidean / l_2 vector norm:

‖x‖_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}

• The l_p vector norm:

‖x‖_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}

Note there are also norms for matrices; more on this when the condition number of a matrix is discussed. Different norms of the residual and error vectors allow a single value to be assessed rather than an entire vector.
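For example, a short sketch of assessing a computed solution (assuming A, b, the true solution x, and a computed xh are in the workspace; norm is built into MATLAB):

% Error and residual norms for a computed solution xh of A*x = b.
e = xh - x;                 % error vector (requires knowing the true solution)
r = A*xh - b;               % residual vector (always computable)
norm(e, 1)                  % l1 norm of the error
norm(e, 2)                  % Euclidean / l2 norm of the error
norm(r, inf)                % max norm of the residual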


1.6 Improving Gaussian Elimination

For notes here we will follow Cheney's presentation. The algorithm that we have implemented will not always work! To see this, consider the following example:

0 · x_1 + x_2 = 1

x_1 + x_2 = 2

The solution to this system is clearly x_1 = 1 and x_2 = 1; however, our Gaussian elimination algorithm will fail! (Division by zero.) When algorithms fail, this tells us to be skeptical of the results for values near the failure.

If we apply the Gaussian elimination algorithm to the following system, what happens?

εx_1 + x_2 = 1

x_1 + x_2 = 2

After step one:

εx_1 + x_2 = 1

(1 − ε^{-1}) x_2 = 2 − ε^{-1}

Doing the back solve yields:

x_2 = \frac{2 - ε^{-1}}{1 - ε^{-1}}

However, we note that the value of ε is very small, and thus ε^{-1} is very large, so

x_2 = \frac{2 - ε^{-1}}{1 - ε^{-1}} ≈ 1

and

x_1 = ε^{-1}(1 − x_2) ≈ 0.

These values are not correct, as in the real world we would expect to obtain values of

x_1 = \frac{1}{1 - ε} ≈ 1    and    x_2 = \frac{1 - 2ε}{1 - ε} ≈ 1

How could we fix the system/algorithm?

• Note that if we had attacked the problem considering the second equation first, there would have been no difficulty with division by zero.

• A second issue comes from the coefficient ε being very small compared with the other coefficients in the row.


• At the kth step of the Gaussian elimination process, the entry a_{kk} is known as the pivot element or pivot. The process of interchanging rows or columns of a matrix is known as pivoting, and it alters the pivot element.

We aim to improve the numerical stability of the algorithm. Many different operations may be algebraically equivalent yet produce different numerical results when implemented in floating point arithmetic.

The idea becomes to swap the rows of the system matrix so that the entry with the largest value is used to zero out the entries in the column associated with that variable during Gaussian elimination. This is known as partial pivoting and is accomplished by interchanging two rows in the system; a sketch of this modification appears at the end of this section.

Gaussian elimination with full pivoting or complete pivoting would select the pivot entry to be the largest entry in the sub-matrix of the system, and would reorder both rows and columns to make that element the pivot element.

Seeking the largest value possible hopes to make the pivot element as numerically stable as possible. This makes the process less susceptible to roundoff errors. However, the large amount of work is usually not seen as worth the extra effort when compared with partial pivoting.

An even more sophisticated method would be scaled partial pivoting. Here the largest entry in each row, s_i, is used when picking the initial pivot equation. The pivot entry is selected by dividing the current column entries (for the current variable) by the scaling value s_i for each row, and taking the largest as the pivot row (see the Cheney text for an example and the pseudocode).

Simulates full pivoting by using an index vector containing information about the relative sizes of elements in each row.

• The idea here is that these changes to the Gaussian elimination algorithm will allow zero pivots and small pivots to be avoided.

• Gaussian elimination is numerically stable for diagonally dominant matrices or matrices that are symmetric positive definite.

• The MATLAB backslash operator attempts to use the best or most numerically stable algorithm available.
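As referenced above, a minimal sketch (my own, not the text's pseudocode) of the forward elimination loop from Listing 1.1 modified for partial pivoting:

% Forward elimination with partial pivoting (assumes A, b, n as in Listing 1.1).
for k = 1:(n-1)
    [~, p] = max(abs(A(k:n, k)));    % largest magnitude entry in column k
    p = p + k - 1;                   % convert to a row index of A
    if p ~= k                        % swap rows k and p of A and b
        A([k p], :) = A([p k], :);
        b([k p])    = b([p k]);
    end
    for i = (k+1):n
        xmult = A(i,k)/A(k,k);       % pivot is now the largest available entry
        A(i, k:n) = A(i, k:n) - xmult*A(k, k:n);
        b(i) = b(i) - xmult*b(k);
    end
end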


Chapter 2

Appendices


Bibliography

[1] R. Burden and J. Faires. Numerical Analysis. Brooks/Cole, Boston, ninth edition, 2011.

[2] W. Cheney and D. Kincaid. Numerical Mathematics and Computing. Brooks/Cole, Boston, seventh edition, 2012.

[3] M.T. Heath. Scientific Computing: An Introductory Survey. McGraw-Hill, New York, second edition, 2002.
