

NASA Technical Memorandum 101466 ICOMP-89-2

On the Equivalence of Gaussian Elimination and Gauss-Jordan Reduction in Solving Linear Equations


Nai-kuan Tsao, Wayne State University, Detroit, Michigan

and Institute for Computational Mechanics in Propulsion Lewis Research Center Cleveland, Ohio

February 1989



On the Equivalence of Gaussian Elimination and Gauss-Jordan Reduction in Solving Linear Equations

Nai-kuan Tsao* Wayne State University Detroit, Michigan 48202

and Institute for Computational Mechanics in Propulsion Lewis Research Center Cleveland, Ohio 44135

Abstract

A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and the Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.

*This work was supported in part by Space Act Agreement C99066G while the author was visiting ICOMP, NASA Lewis Research Center.


1. Introduction

In past decades, the backward error analysis of Wilkinson [3] has enabled us to gain much insight into the behavior of round-off error propagation in numerical algebraic algorithms. One is required, naturally, to possess varying degrees of mathematical sophistication in order to apply his approach to the analysis of the numerical algorithms at hand.

The algorithms of Gaussian Elimination (GE) and Gauss-Jordan reduction (GJ) are well known in solving linear equations. The numerical stability of Gaussian Elimination with partial pivoting is shown in [3], and the stability of Gauss-Jordan reduction is shown in [4], both using Wilkinson's approach. The results show that in Gaussian Elimination the computed solution x of a given system Ax = b is the exact solution of some neighboring system (A + E)x = b with a reasonable bound for E, whereas in Gauss-Jordan reduction the computed solution x is such that each component of x belongs to the exact solution of a different neighboring system.

In this paper we extend the error complexity concepts of [2] to the division operation and apply them to the analysis of the Gauss-Jordan and Gaussian Elimination algorithms, to show from an alternative point of view that the two algorithms are indeed equivalent in terms of our error complexity measures. Thus one can implement the inherently parallel Gauss-Jordan reduction algorithm with confidence on parallel computers. Some preliminary results are presented in Section 2. They are applied in Section 3 to the error complexity analysis of the two algorithms for solving linear systems of equations.

2. Some Preliminary Results

Consider the simple problem of evaluating the 3 by 3 determinant

(2.1)    det(A) = det [ a_11  a_12  a_13
                        a_21  a_22  a_23
                        a_31  a_32  a_33 ].

A straightforward algorithm would evaluate (2.1) as

det(A) = a_11 a_22 a_33 - a_11 a_23 a_32 - a_12 a_21 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31.


On the other hand, one can also express (2.1) as

(2.2)    det(A) = a_11 a'_22 a''_33,

where

a'_22 = a_22 - (a_21/a_11) a_12,    a'_23 = a_23 - (a_21/a_11) a_13,
a'_32 = a_32 - (a_31/a_11) a_12,    a'_33 = a_33 - (a_31/a_11) a_13,
a''_33 = a'_33 - (a'_32/a'_22) a'_23.

Expanding (2.2), we have

det(A) = a_11 a_22 a_33 - a_11 a_23 a_32 - a_12 a_21 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31
         + a_21 a_31 a_12 a_13 / a_11 - a_21 a_31 a_12 a_13 / a_11,

which is equivalent to (2.1) once the term

a_21 a_31 a_12 a_13 / a_11

is cancelled out. This is of course unlikely in actual computation due to round-off errors. Thus the latter approach is more likely to incur round-off errors during the computational process.
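As a rough numerical illustration of this point (my own sketch, not part of the original memorandum; the function names are hypothetical), the following Python fragment evaluates a random 3 by 3 determinant by the direct expansion (2.1) and by the pivot product (2.2), comparing each against an exact rational reference. The division-based form typically shows the larger round-off error.

    from fractions import Fraction
    import random

    def det_direct(a):
        # six-term expansion of the 3 by 3 determinant, as in (2.1)
        return (a[0][0]*a[1][1]*a[2][2] - a[0][0]*a[1][2]*a[2][1]
                - a[0][1]*a[1][0]*a[2][2] + a[0][1]*a[1][2]*a[2][0]
                + a[0][2]*a[1][0]*a[2][1] - a[0][2]*a[1][1]*a[2][0])

    def det_pivots(a):
        # product of pivots a_11 * a'_22 * a''_33, as in (2.2)
        m21, m31 = a[1][0]/a[0][0], a[2][0]/a[0][0]
        a22p, a23p = a[1][1] - m21*a[0][1], a[1][2] - m21*a[0][2]
        a32p, a33p = a[2][1] - m31*a[0][1], a[2][2] - m31*a[0][2]
        return a[0][0] * a22p * (a33p - (a32p/a22p)*a23p)

    random.seed(0)
    a_float = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    a_exact = [[Fraction(x) for x in row] for row in a_float]

    exact = float(det_direct(a_exact))   # error-free rational reference
    print("direct expansion error:", abs(det_direct(a_float) - exact))
    print("pivot product error:  ", abs(det_pivots(a_float) - exact))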

Our approach to the error analysis of different algorithms designed for the same problem strives to provide such information as the number of round-off error occurrences as well as the extra terms created during the computational process, for easy comparison, and at the same time to enable us to gain more insight into the details of how each algorithm works.

Given a normalized floating-point system with a t-digit, base-β mantissa, the following equations can be assumed to facilitate the error analysis of general arithmetic expressions using only +, -, x, or / operations [3]:

(2.3)    fl(x ∘ y) = (x ∘ y) Δ,    ∘ ∈ {+, -, x, /},

where


|Δ| ≤ 1 + u,    with u = (1/2) β^(1-t) for rounded operations and u = β^(1-t) for chopped operations,

and x and y are given machine floating-point numbers and fl(·) is used to denote the computed floating-point result of the given argument. We shall call Δ the unit Δ-factor.
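A small Python sketch of this model (my own illustration, not from the memorandum; the helper name fl and its defaults are hypothetical) rounds or chops the exact result of each operation to t significant base-β digits:

    import math

    def fl(value, t=4, beta=10, mode="round"):
        # simulate one floating-point operation in a t-digit, base-beta system
        if value == 0.0:
            return 0.0
        e = math.floor(math.log(abs(value), beta)) + 1   # exponent of the result
        scale = float(beta) ** (t - e)
        scaled = value * scale
        scaled = round(scaled) if mode == "round" else math.trunc(scaled)
        return scaled / scale

    x, y = 1.234, 5.678
    print(fl(x * y))                  # 7.007 in 4-digit rounded decimal arithmetic
    print(fl(x * y, mode="chop"))     # 7.006 with chopping

Each call to fl introduces at most one unit Δ-factor, which is what the error complexity measures below count.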

In general one can apply (2.3) repeatedly to a sequence of arithmetic steps, and the computed result z can be expressed as

(2.4)    z = z_n / z_d = ( Σ_{i=1..λ(z_n)} z_{n,i} Δ^{σ(z_{n,i})} ) / ( Σ_{j=1..λ(z_d)} z_{d,j} Δ^{σ(z_{d,j})} ),

where each z_{n,i} or z_{d,j} is an exact product of error-free data, and Δ^k stands for the product of k possibly different Δ-factors. We should emphasize that all common factors between the numerator and denominator should have been factored out before z can be expressed in its final rational form of (2.4). Following [1], we shall henceforth call such an exact product of error-free data a basic term, or simply a term. Thus λ(z_n) or λ(z_d) is the total number of such terms whose sum constitutes z_n or z_d, respectively, and σ(z_{n,i}) or σ(z_{d,j}) gives the possible number of round-off occurrences during the computational process. We define the following two measures:

maximum error complexity:

(2.5)    σ(z_n) = max_{1≤i≤λ(z_n)} σ(z_{n,i}),    σ(z_d) = max_{1≤j≤λ(z_d)} σ(z_{d,j});

cumulative error complexity:

(2.6)    τ(z_n) = Σ_{i=1..λ(z_n)} σ(z_{n,i}),    τ(z_d) = Σ_{j=1..λ(z_d)} σ(z_{d,j}).
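For example, in the division-free computation z = fl(fl(x × y) + w) with error-free data x, y and w, one has z = x y Δ^2 + w Δ under the model (2.3), so that λ(z_n) = 2, σ(z_n) = 2 and τ(z_n) = 3.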


Different algorithms used to compute the same z can then be compared using the above error complexity measures and the number of basic terms created by each algorithm.

For convenience we will use ch_d(z) and ch_n(z) to represent the 3-tuples {λ(z_d), σ(z_d), τ(z_d)} and {λ(z_n), σ(z_n), τ(z_n)}, respectively, so that the computed z of (2.4) is fully characterized by

ch(z) = ch_n(z) / ch_d(z).

The unit Δ-factor is then characterized by

(2.7a)    ch(Δ) = {1, 1, 1} ≡ {Δ}.

One can also obtain easily using (2.5) and (2.6) that

(2.7b)    ch(Δ^i) = {Δ}^i = {1, i, i}.

following lemma is useful in dealing with intcrmediatc computed results:

Lxmma 2.1 Given x and y with their associatcd ch,,(x) and ch,,b),

(ij if z = xy, then

(ii) if z = x y , then

Proof. The results can be obtained easily by expressing x and y as

x = Σ_{i=1..λ(x)} x_i Δ^{σ(x_i)},    y = Σ_{j=1..λ(y)} y_j Δ^{σ(y_j)},

and applying (2.5), (2.6) and the definition of λ(z) to find ch_n(z). Q.E.D.

For general floating-point computations, we have the following lemma:

Lemma 2.2 Given x and y with their associated ch_n and ch_d characterizations,

(i) if z = fl(x + y) and there are no common factors between x_d and y_d, then

ch_n(z) = ( ch_n(x) ch_d(y) + ch_d(x) ch_n(y) ) {Δ},    ch_d(z) = ch_d(x) ch_d(y);

(ii) if z = fl(x × y) and there are no common factors between x_n and y_d or between y_n and x_d, then

ch_n(z) = ch_n(x) ch_n(y) {Δ},    ch_d(z) = ch_d(x) ch_d(y);

(iii) if z = fl(x / y) and there are no common factors between x_n and y_n or between x_d and y_d, then

ch_n(z) = ch_n(x) ch_d(y) {Δ},    ch_d(z) = ch_d(x) ch_n(y).

Proof. First we apply (2.3) to each case and obtain

fl(x + y) = (x_n y_d + x_d y_n) Δ / (x_d y_d),    fl(x × y) = x_n y_n Δ / (x_d y_d),    fl(x / y) = x_n y_d Δ / (x_d y_n).

The results can then be obtained easily by using Lemma 2.1. Q.E.D.
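The term-counting calculus of Lemmas 2.1 and 2.2 is easy to mechanize. The following Python sketch (my own illustration, not part of the memorandum; the class name Ch is hypothetical) implements the 3-tuple arithmetic and reproduces the characterization {2, 3, 4} obtained for a'_22 in Section 3:

    from dataclasses import dataclass

    @dataclass
    class Ch:
        lam: int   # number of basic terms
        sig: int   # maximum error complexity
        tau: int   # cumulative error complexity

        def __add__(self, other):   # terms of a sum are concatenated (Lemma 2.1(ii))
            return Ch(self.lam + other.lam, max(self.sig, other.sig), self.tau + other.tau)

        def __mul__(self, other):   # every term meets every term in a product (Lemma 2.1(i))
            return Ch(self.lam * other.lam, self.sig + other.sig,
                      other.lam * self.tau + self.lam * other.tau)

    D = Ch(1, 1, 1)        # the unit Delta-factor, eq. (2.7a)
    c1 = Ch(1, 0, 0)       # an error-free datum

    # numerator of a'_22 = fl(a_22 - fl(a_21/a_11) a_12):  c1*c1*{D} + c1*c1*{D}^3
    C2 = c1 * c1 * D + c1 * c1 * (D * D * D)
    print(C2)              # Ch(lam=2, sig=3, tau=4)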

3. Error Complexity Analysis of the Two Algorithms

Given

Ax = b,

where


A = (a_ij), 1 ≤ i, j ≤ N,    b = (b_1, ..., b_N)^T,

it is desired to find x. We shall assume that A and b are error-free with λ(a_ij) = λ(b_i) = 1 and that pivoting is not necessary. For simplicity, the b vector is appended to A as the (N + 1)-st column of A. Thus initially the augmented matrix is such that

ch(a_ij) = C_1 if i = j = 1, and ch(a_ij) = c_1 otherwise,

where

C_1 = c_1 = {1, 0, 0}.

The use of C_1 to distinguish ch(a_11) from all others is to facilitate the task of identifying common factors, as we shall see later. The two algorithms are as follows:

Algorithm GE.

1. {begin Reduction to Triangular Form}
   for i = 1 to N - 1 do
      for k = i + 1 to N do
         a_ki = fl(a_ki / a_ii)
         for j = i + 1 to N + 1 do
            a_kj = fl(a_kj - a_ki x a_ij)

2. {begin Back-Substitution}
   x_N = fl(a_{N,N+1} / a_NN)
   for i = N - 1 downto 1 do
      for j = N downto i + 1 do
         a_{i,N+1} = fl(a_{i,N+1} - a_ij x x_j)
      x_i = fl(a_{i,N+1} / a_ii)

Algorithm GJ.

1. {begin Reduction to Diagonal Form}
   for i = 1 to N do
      for k = 1 to N (except i) do
         a_ki = fl(a_ki / a_ii)
         for j = i + 1 to N + 1 do
            a_kj = fl(a_kj - a_ki x a_ij)

2. {begin Solving Diagonal System}
   for i = 1 to N do
      x_i = fl(a_{i,N+1} / a_ii)
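The two algorithms can be sketched in Python as follows (my own illustration, not part of the memorandum; variable names are hypothetical, and no pivoting is performed, as assumed in the text). Note that in solve_gj every off-diagonal row update within one column sweep is independent of the others, which is the parallelism referred to above.

    import numpy as np

    def solve_ge(A, b):
        # Algorithm GE: reduction to triangular form, then back-substitution
        a = np.hstack([np.array(A, dtype=float), np.array(b, dtype=float).reshape(-1, 1)])
        n = len(b)
        for i in range(n - 1):
            for k in range(i + 1, n):
                a[k, i] = a[k, i] / a[i, i]              # multiplier stored in place
                a[k, i + 1:] -= a[k, i] * a[i, i + 1:]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:]) / a[i, i]
        return x

    def solve_gj(A, b):
        # Algorithm GJ: reduction to diagonal form, then a componentwise division
        a = np.hstack([np.array(A, dtype=float), np.array(b, dtype=float).reshape(-1, 1)])
        n = len(b)
        for i in range(n):
            for k in range(n):
                if k == i:
                    continue
                a[k, i] = a[k, i] / a[i, i]
                a[k, i + 1:] -= a[k, i] * a[i, i + 1:]
        return a[:, n] / np.diag(a)

    A = [[4.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 6.0]]
    b = [7.0, 10.0, 10.0]
    print(solve_ge(A, b))
    print(solve_gj(A, b))    # agrees with solve_ge to within round-off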


A closer examination of the two algorithms reveals that the computed lower triangular parts of the matrix A are identical. The only difference between the two lies in the solution of the upper triangular system: in Algorithm GE the back-substitution scheme is used, whereas in Algorithm GJ a forward-elimination scheme is used to reduce the upper triangular form to diagonal form for the final solution. Thus error-wise Algorithm GJ is equivalent to the following modified one:

Algorithm GJm.

1. {begin Reduction to Upper Triangular Form}
   same as in Algorithm GE.

2. {begin Reduction of Upper Triangular to Diagonal Form}
   for i = 1 to N - 1 do
      for k = 1 to i do
         a_{k,i+1} = fl(a_{k,i+1} / a_{i+1,i+1})
         for j = i + 2 to N + 1 do
            a_kj = fl(a_kj - a_{k,i+1} x a_{i+1,j})

3. {begin Solving Diagonal System}
   same as in Step 2 of Algorithm GJ.
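A corresponding Python sketch of Algorithm GJm (again my own illustration under the same no-pivoting assumption; names are hypothetical):

    import numpy as np

    def solve_gjm(A, b):
        a = np.hstack([np.array(A, dtype=float), np.array(b, dtype=float).reshape(-1, 1)])
        n = len(b)
        for i in range(n - 1):                    # Step 1: same as Algorithm GE
            for k in range(i + 1, n):
                a[k, i] = a[k, i] / a[i, i]
                a[k, i + 1:] -= a[k, i] * a[i, i + 1:]
        for i in range(n - 1):                    # Step 2: upper triangular -> diagonal
            for k in range(i + 1):
                a[k, i + 1] = a[k, i + 1] / a[i + 1, i + 1]
                a[k, i + 2:] -= a[k, i + 1] * a[i + 1, i + 2:]
        return a[:, n] / np.diag(a)               # Step 3: solve the diagonal system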

We first give a detailed analysis of Step 1 of both Algorithm GE and Algorithm GJm for N = 3. Applying (2.3) to Step 1, we obtain, after the first iteration for i = 1 is completed, the following computed results:

a_11    a_12    a_13    a_14
a'_21   a'_22   a'_23   a'_24
a'_31   a'_32   a'_33   a'_34

where

a'_k1 = fl(a_k1 / a_11) = a_k1 Δ / a_11,    k = 2, 3,
a'_kj = fl(a_kj - a'_k1 x a_1j) = (a_kj a_11 Δ - a_k1 a_1j Δ^3) / a_11,    k = 2, 3,  j = 2, 3, 4.

Applying Lemmas 2.1 and 2.2 to the above equation, we obtain the following matrix for ch(a_ij):

C_1         c_1         c_1         c_1
{Δ}/C_1     C_2/C_1     c_2/C_1     c_2/C_1
{Δ}/C_1     c_2/C_1     c_2/C_1     c_2/C_1

where

c_2 = {λ_2, σ_2, τ_2} = C_2 = c_1 C_1 {Δ} + c_1 c_1 {Δ}^3 = {1,1,1} + {1,3,3} = {2,3,4}.

Note that since a_11 is the common denominator of all newly computed results, C_1 is also present as the denominator of all computed items. Again, C_2 is used to distinguish ch(a'_22) from the rest of the ch(a'_kj).

Similarly, after the completion of the second iteration for i = 2 of Step 1 in both algorithms, we have the following newly computed results:

a''_32 = fl(a'_32 / a'_22) = a'_32 Δ / a'_22,
a''_3j = fl(a'_3j - a''_32 x a'_2j) = (a'_3j a'_22 Δ - a'_32 a'_2j Δ^3) / a'_22,    j = 3, 4,

with the a'_kj given above in terms of the original data.

Note that when the above expression is expanded in terms of the original data, the term a_21 a_31 a_12 a_1j is not likely to be cancelled out in actual computation, as it would be in ideal computation. The ch-matrix can be obtained from the above expression in the same manner as before.


In general we have the following theorem:

Theorem 3.1 The ch-matrix of the newly computed part of the augmented matrix, after the i-th iteration of Step 1 of Algorithm GE is completed, is given as follows:

ch(a_kj) = C_{i+1} / Π_{r=1..i} C_r    for j = k = i + 1,
ch(a_kj) = c_{i+1} / Π_{r=1..i} C_r    otherwise,

where

c_{i+1} = C_{i+1} = c_i C_i {Δ} + c_i c_i {Δ}^3.

Proof. See Appendix I.

Once we have the original system reduced to an upper triangular form, the rest of the steps in Algorithms GE and GJm can be applied. We have the following theorems:

Theorem 3.2 The ch-vector of the computed solution x using Algorithm GE is given as

ch(x_i) = ( C_i C_{i+1} ... C_N / C_i C_{i+1} ... C_N ) ({Δ} + {Δ}^3)^{N-i} {Δ},    1 ≤ i ≤ N.

Proof. See Appendix II.

Theorem 3.3 The ch-matrix of the newly computed results, after the completion of the i-th iteration of Step 2 of Algorithm GJm, is given as:


ch(a_kj) = ( Π_{r=k..i+1} C_r ) ({Δ} + {Δ}^3)^{i-k+1} / Π_{r=1..i+1, r≠k} C_r,    1 ≤ k ≤ i,    i + 2 ≤ j ≤ N + 1.

The ch-vector of the computed solution x is the same as that given in Theorem 3.2.

Proof. See Appendix III.

From Theorem 3.3 we conclude that Algorithm GE and Algorithm GJ are equivalent to each other in terms of our error complexity measures.


References

[1] V.B. Aggarwal and J.W. Burgmeier, A round-off error model with applications to arithmetic expressions, SIAM J. Computing, 8 (1979), pp. 60-72.

[2] N.K. Tsao, A simple approach to the error analysis of division-free numerical algorithms, IEEE Trans. Computers, C-32 (1983), pp. 343-351.

[3] J.H. Wilkinson, Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1963.

[4] G. Peters and J.H. Wilkinson, On the stability of Gauss-Jordan elimination with pivoting, Comm. ACM, 18 (1975), pp. 20-24.


Appendix I.

Proof of Theorem 3.1. We prove by induction on i. For i = 1 it is true by the results demonstrated for N = 3. Assume the truth for i - 1. For i, applying Lemma 2.2 to the i-th elimination step and cancelling the common factors, the newly computed entries satisfy

ch(a_kj) = C_{i+1} / Π_{r=1..i} C_r    if k = j = i + 1,    and    ch(a_kj) = c_{i+1} / Π_{r=1..i} C_r    otherwise.

This proves the theorem. Q.E.D.


Appendix II.

Proof of Theorem 3.2. We prove by backward induction on i. For i = N,

x_N = fl(a_{N,N+1} / a_NN),

so that, by Theorem 3.1 and Lemma 2.2, after the common factor Π_{r=1..N-1} C_r is cancelled,

ch(x_N) = ( C_N / C_N ) {Δ},

which is true. Now assume the theorem is true for x_N, x_{N-1}, ..., x_{i+1}. To obtain x_i we first compute

y_N = a_{i,N+1},
for j = N downto i + 1 do    y_{j-1} = fl(y_j - a_ij x x_j),

and then compute x_i = fl(y_i / a_ii). By (2.3) and Lemma 2.2, using the induction assumption and Theorem 3.1, the common factors can be cancelled and we obtain finally

ch(x_i) = ( C_i C_{i+1} ... C_N / C_i C_{i+1} ... C_N ) ({Δ} + {Δ}^3)^{N-i} {Δ}.

This proves the theorem. Q.E.D.


Appendix III.

Proof of Theorem 3.3. We prove by induction on i. For i = 1 the result follows from Theorem 3.1 and Lemma 2.2, and the theorem is true. Assume now the theorem is true for all iterations up to i - 1. For i, applying Lemma 2.2 to the updates of Step 2 of Algorithm GJm and cancelling the common factors, we obtain, since c_{i+1} = C_{i+1},

ch(a_kj) = ( Π_{r=k..i+1} C_r ) ({Δ} + {Δ}^3)^{i-k+1} / Π_{r=1..i+1, r≠k} C_r,    1 ≤ k ≤ i,    i + 2 ≤ j ≤ N + 1.

This proves the first part of the theorem. For the computed solution x we have x_i = fl(a_{i,N+1} / a_ii); hence, cancelling the common factors, ch(x_i) is the same as that given in Theorem 3.2. This proves the theorem. Q.E.D.


Report Documentation Page

1. Report No.: NASA TM-101466; ICOMP-89-2
4. Title and Subtitle: On the Equivalence of Gaussian Elimination and Gauss-Jordan Reduction in Solving Linear Equations
5. Report Date: February 1989
7. Author(s): Nai-kuan Tsao
8. Performing Organization Report No.: E-4577
9. Performing Organization Name and Address: National Aeronautics and Space Administration, Lewis Research Center, Cleveland, Ohio 44135-3191
10. Work Unit No.: 505-62-21
12. Sponsoring Agency Name and Address: National Aeronautics and Space Administration, Washington, D.C. 20546-0001
13. Type of Report and Period Covered: Technical Memorandum
15. Supplementary Notes: Nai-kuan Tsao, Wayne State University, Detroit, Michigan 48202. This work was supported in part by the Institute for Computational Mechanics in Propulsion, NASA Lewis Research Center (work funded under Space Act Agreement C99066G).
16. Abstract: A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and the Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
17. Key Words (Suggested by Author(s)): Error analysis; Parallel algorithms; Gaussian elimination; Gauss-Jordan reduction; Linear equations
18. Distribution Statement: Unclassified - Unlimited; Subject Category 64
19. Security Classif. (of this report): Unclassified
20. Security Classif. (of this page): Unclassified
21. No. of pages: 22
22. Price: A03