Numerical Computation, Lecture 8: Matrix Algorithm Analysis (United International College)


Page 1: Numerical Computation

Numerical Computation

Lecture 8: Matrix Algorithm Analysis

United International College

Page 2: Numerical Computation

Review Presentation

• 10 minutes for review presentation.

Page 3: Numerical Computation

Review

• During our last class we covered:
  – Gauss-Jordan method for finding inverses

Page 4: Numerical Computation

Today

• We will cover:
  – Operation count for Gaussian Elimination and LU Factorization
  – Accuracy of matrix methods
• Readings:
  – Pav, section 3.4.1
  – Moler, section 2.8

Page 5: Numerical Computation

Operation Count for Gaussian Elimination

• How many floating point operations (+, -, *, /) are used by the Gaussian Elimination algorithm?

• Definition: Flop = floating point operation. We will consider a division to be equivalent to a multiplication, and a subtraction equivalent to an addition.

• Thus, 2/3 = 2*(1/3) will be considered a multiplication.

• 2-3 = 2 + (-3) will be considered an addition.

Page 6: Numerical Computation

Operation Count for Gaussian Elimination

• In Gaussian Elimination we use row operations to reduce the augmented system [A | b], where A is a full n x n matrix, to an upper-triangular system [U | c].

Page 7: Numerical Computation

Operation Count for Gaussian Elimination

• Consider the number of flops needed to zero out the entries below the first pivot a_{11}.

Page 8: Numerical Computation

Operation Count for Gaussian Elimination

• First a multiplier is computed for each of the (n-1) rows below the first row. This requires (n-1) multiplies:

  m = A(i,k)/A(k,k);

• Then in each row below row 1 the algorithm performs n multiplies and n adds:

  A(i,j) = A(i,j) - m*A(k,j);

• Thus, there is a total of (n-1) + (n-1)*2*n flops for this step of Gaussian Elimination.

• For k = 1 the algorithm uses 2n^2 - n - 1 flops.
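As a concrete check of this count, here is a minimal MATLAB sketch of the elimination loop with a flop counter added (the function name gauss_elim_count is ours, and it assumes all pivots are nonzero, so no row swaps are needed):

    function [A, b, flops] = gauss_elim_count(A, b)
    % Gaussian elimination on the augmented system [A | b], counting flops.
    % Assumes A is n-by-n with nonzero pivots (no pivoting performed).
        n = length(b);
        flops = 0;
        for k = 1:n-1
            for i = k+1:n
                m = A(i,k)/A(k,k);                        % 1 flop (division counts as a multiply)
                A(i,k) = 0;                               % eliminated entry, set directly
                A(i,k+1:n) = A(i,k+1:n) - m*A(k,k+1:n);   % (n-k) multiplies + (n-k) adds
                b(i) = b(i) - m*b(k);                     % 1 multiply + 1 add
                flops = flops + 1 + 2*(n-k) + 2;
            end
        end
    end

For k = 1, each of the (n-1) rows costs 1 + 2(n-1) + 2 = 2n + 1 flops, which is exactly the multiplier plus the n multiplies and n adds counted above.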

Page 9: Numerical Computation

Operation Count for Gaussian Elimination

• For k = 2, we zero out the column below a_{22}. There are (n-2) rows below this pivot, so this step takes 2(n-1)^2 - (n-1) - 1 flops.

• For k = 3, we would have 2(n-2)^2 - (n-2) - 1 flops, and so on.

• To complete Gaussian Elimination, it will take I_n flops, where

  I_n = \sum_{j=1}^{n} (2j^2 - j - 1).

Page 10: Numerical Computation

Operation Count for Gaussian Elimination

• Now,

  \sum_{j=1}^{n} j^2 = \frac{n(n+1)(2n+1)}{6}, \qquad \sum_{j=1}^{n} j = \frac{n(n+1)}{2}.

• So,

  I_n = \frac{2}{6} n(n+1)(2n+1) - \frac{1}{2} n(n+1) - n
      = \left[ \frac{1}{3}(2n+1) - \frac{1}{2} \right] n(n+1) - n
      = \left[ \frac{2}{3} n - \frac{1}{6} \right] n(n+1) - n
      = \frac{2}{3} n^3 + (lower-order terms in n).

• Thus, the number of flops for Gaussian Elimination is O(n^3).
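As a quick sanity check on this algebra (a sketch; n = 100 is an arbitrary choice), the exact count can be compared with the leading term:

    n = 100;
    I_n = (2/6)*n*(n+1)*(2*n+1) - (1/2)*n*(n+1) - n   % exact count: 671550
    approx = (2/3)*n^3                                 % leading term: about 666667

Already at n = 100 the leading term is within about 1% of the exact count.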

Page 11: Numerical Computation

Operation Count for LU Factorization

• In the algorithm for LU Factorization, we only do the calculations described above to compute L and U. This is because we save the multipliers (m) and store them to create L.

• So, the number of flops to create L and U is O(n^3).
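A minimal MATLAB sketch of this idea (our own function name; no pivoting, so it assumes nonzero pivots): the multipliers overwrite the zeroed entries, and L and U are read off at the end.

    function [L, U] = lu_nopivot(A)
    % LU factorization without pivoting. The multiplier for each eliminated
    % entry is stored in place below the diagonal; the upper part becomes U.
        n = size(A, 1);
        for k = 1:n-1
            for i = k+1:n
                A(i,k) = A(i,k)/A(k,k);                      % save the multiplier m in place
                A(i,k+1:n) = A(i,k+1:n) - A(i,k)*A(k,k+1:n); % same row operation as before
            end
        end
        L = tril(A, -1) + eye(n);   % unit lower triangular: multipliers plus ones
        U = triu(A);                % upper triangular
    end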

Page 12: Numerical Computation

Operation Count for using LU to solve Ax = b

• Once we have factored A into LU, we do the following to solve Ax = b:

• Solve the two equations:
  – Lz = b
  – Ux = z

• How many flops are needed to do this?

Page 13: Numerical Computation

Operation Count for using LU to solve Ax = b

• To solve Lz = b we use forward substitution:

  z_1 = b_1, so we use 0 flops to find z_1.
  z_2 = b_2 - l_{21} z_1, so we use 2 flops to find z_2.
  z_3 = b_3 - l_{31} z_1 - l_{32} z_2, so we use 4 flops to find z_3, and so on.

Page 14: Numerical Computation

Operation Count for using LU to solve Ax = b

• Continuing the forward substitution count:

  0 + 2 + 4 + ... + 2(n-1) = 2(1 + 2 + ... + (n-1)) = 2 \cdot \frac{1}{2}(n-1)n = n^2 - n.

• So, the number of flops for forward substitution is O(n^2).
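A minimal MATLAB sketch of forward substitution (assuming L is unit lower triangular, i.e. has ones on its diagonal, as the stored multipliers produce); step i performs (i-1) multiplies and (i-1) adds, matching the count above:

    function z = forward_sub(L, b)
    % Solve L*z = b for unit lower-triangular L by forward substitution.
        n = length(b);
        z = zeros(n, 1);
        for i = 1:n
            z(i) = b(i) - L(i, 1:i-1)*z(1:i-1);   % 2*(i-1) flops
        end
    end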

Page 15: Numerical Computation

Operation Count for using LU to solve Ax = b

• To solve Ux = z we use backward substitution.

• A similar analysis to that of forward substitution shows that the number of flops for backward substitution is also O(n^2).

• Thus, once A has been factored, using L and U to solve Ax = b requires O(n^2) flops.
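The matching backward-substitution sketch (U upper triangular with nonzero diagonal entries); step i also costs one division, but the total is still n^2 flops:

    function x = back_sub(U, z)
    % Solve U*x = z for upper-triangular U by backward substitution.
        n = length(z);
        x = zeros(n, 1);
        for i = n:-1:1
            x(i) = (z(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);   % 2*(n-i) + 1 flops
        end
    end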

Page 16: Numerical Computation

Summary of Two Methods

• Gaussian Elimination requires O(n^3) flops to solve the linear system Ax = b.

• Factoring A = LU requires O(n^3) flops.

• Once we have factored A = LU, using L and U to solve Ax = b requires O(n^2) flops.

• Suppose we have to solve Ax = b for a given matrix A, but for many different b vectors. What is the most efficient way to do this? Factor A once, then reuse L and U for every b, as in the sketch below.
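A sketch of that approach using MATLAB's built-in lu with partial pivoting (the matrix size and loop count are illustrative): the O(n^3) factorization is paid once, and each new b costs only two O(n^2) triangular solves.

    A = randn(500);                  % one fixed matrix
    [L, U, P] = lu(A);               % O(n^3) factorization, done once
    for trial = 1:1000
        b = randn(500, 1);           % a new right-hand side
        x = U \ (L \ (P*b));         % two O(n^2) triangular solves per b
    end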

Page 17: Numerical Computation

Accuracy of Matrix Methods

• Our algorithms are used to find the solution x to the system Ax=b.

• But, how close to the exact solution is the computed solution?

• Let x* be the computed solution and x be the exact solution.

Page 18: Numerical Computation

Accuracy of Matrix Methods

• Definition: The error e is defined to be e = x - x*.

• Definition: The residual r is defined to be r = b - Ax*.

• Note: These two quantities may be quite different!
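In MATLAB terms (x_star is a hypothetical name for the computed solution): the residual can always be computed from the data, while the error requires knowing the exact solution.

    e = x - x_star;      % error: needs the (usually unknown) exact solution x
    r = b - A*x_star;    % residual: uses only the data A and b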

Page 19: Numerical Computation

Accuracy of Matrix Methods

• Consider the system:

  \begin{pmatrix} 0.780 & 0.563 \\ 0.913 & 0.659 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0.217 \\ 0.254 \end{pmatrix}

• Using Gaussian Elimination with partial pivoting, we swap rows 1 and 2 (0.913 is the larger candidate pivot):

  \begin{pmatrix} 0.913 & 0.659 \\ 0.780 & 0.563 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0.254 \\ 0.217 \end{pmatrix}

Page 20: Numerical Computation

Accuracy of Matrix Methods

• Suppose we had a computer with just 3-digit accuracy. The first multiplier would be

  m = 0.780/0.913 = 0.854 (to three digits).

• Subtracting 0.854 * row 1 from row 2, we get (carrying 3 digits throughout)

  \begin{pmatrix} 0.913 & 0.659 \\ 0 & 0.001 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0.254 \\ 0.001 \end{pmatrix}
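One way to mimic the 3-digit machine in MATLAB is a small chopping helper (chop3 is our hypothetical name; it chops its argument to 3 significant digits, assumes a nonzero argument, and models a machine that chops rather than rounds):

    chop3 = @(x) fix(x ./ 10.^(floor(log10(abs(x))) - 2)) .* 10.^(floor(log10(abs(x))) - 2);
    chop3(0.780/0.913)    % 0.854, the multiplier
    chop3(0.854*0.659)    % 0.562, so the new (2,2) entry is 0.563 - 0.562 = 0.001
    chop3(0.854*0.254)    % 0.216, so the new right-hand side is 0.217 - 0.216 = 0.001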

Page 21: Numerical Computation

Accuracy of Matrix Methods

• Solving this using back substitution gives

  x_2 = 0.001/0.001 = 1.000
  x_1 = (0.254 - 0.659 \cdot 1.000)/0.913 = -0.443

• So, x* = (-0.443, 1.000)^T.

Page 22: Numerical Computation

Accuracy of Matrix Methods

• The exact solution is x = (1.000, -1.000)^T.

• Thus, the error is

  e = x - x* = \begin{pmatrix} 1.000 - (-0.443) \\ -1.000 - 1.000 \end{pmatrix} = \begin{pmatrix} 1.443 \\ -2.000 \end{pmatrix}

• The error is bigger than the solution!!

Page 23: Numerical Computation

Accuracy of Matrix Methods

• The residual is

  r = b - Ax* = \begin{pmatrix} 0.217 - [0.780(-0.443) + 0.563(1.000)] \\ 0.254 - [0.913(-0.443) + 0.659(1.000)] \end{pmatrix} = \begin{pmatrix} -0.000460 \\ -0.000541 \end{pmatrix}

• The residual is very small, but the error is very large!
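The whole example is easy to check in double precision (x_star is the 3-digit computed solution from the slides):

    A = [0.780 0.563; 0.913 0.659];
    b = [0.217; 0.254];
    x = [1; -1];                  % exact solution
    x_star = [-0.443; 1.000];     % 3-digit Gaussian elimination result
    e = x - x_star                % [ 1.443; -2.000 ]: larger than x itself
    r = b - A*x_star              % [ -4.60e-04; -5.41e-04 ]: tiny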

Page 24: Numerical Computation

Accuracy of Matrix Methods

• Theorem (sort of): Gaussian elimination with partial pivoting is guaranteed to produce small residuals.

• Why is the error in our example so large?

Page 25: Numerical Computation

Accuracy of Matrix Methods

• If we did Gaussian Elimination with much higher accuracy (more than 3 digits), we would see that the row reduction produces

  \begin{pmatrix} 0.913 & 0.659 \\ 0 & -0.0000011 \end{pmatrix}

• This matrix is very close to being singular (why?)

• The relationship between the size of the residual and the size of the error is determined in part by a quantity known as the condition number of the matrix, which is the subject of our next lecture.
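A quick double-precision check makes the near-singularity visible (the cond value is approximate and previews next lecture's topic):

    A = [0.913 0.659; 0.780 0.563];          % the pivoted matrix
    u22 = A(2,2) - (A(2,1)/A(1,1))*A(1,2)    % about -1.1e-6: the second pivot is nearly zero
    det(A)                                    % -1.0e-6: the determinant is nearly zero
    cond(A)                                   % roughly 2.2e6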