




Journal of Scientific Computing, Vol. 2, No. 1, 1987

An Efficient Method for Computing Leading Eigenvalues and Eigenvectors of Large Asymmetric Matrices

I. Goldhirsch,¹,² Steven A. Orszag,¹ and B. K. Maulik¹

Received September 25, 1986

An efficient method for computing a given number of leading eigenvalues (i.e., having largest real parts) and the corresponding eigenvectors of a large asymmetric matrix M is presented. The method consists of three main steps. The first is a filtering process in which the equation ẋ = Mx is solved for an arbitrary initial condition x(0), yielding x(t) = e^{Mt} x(0). The second step is the construction of (n+1) linearly independent vectors v_m = M^m x, 0 ≤ m ≤ n, or v_m = e^{mMτ} x (τ being a "short" time interval). By construction, the vectors v_m are combinations of only a small number of leading eigenvectors of M. The third step consists of an analysis of the vectors {v_m} that yields the eigenvalues and eigenvectors. The proposed method has been successfully tested on several systems. Here we present results pertaining to the Orr-Sommerfeld equation. The method should be useful for many computations in which present methods are too slow or necessitate excessive memory. In particular, we believe it is well suited for hydrodynamic and mechanical stability investigations.

KEY WORDS: Iterative methods; asymmetric matrices; eigenvalues.

1. INTRODUCTION

Numerous problems in science and technology involve the computation of leading eigenvalues (and sometimes of the corresponding eigenvectors) of large asymmetric matrices. In some cases, such as that of the calculation of eigenvalues of transfer matrices in lattice problems (Stanley, 1971) or in hydrodynamic stability analyses (Drazin and Reid, 1981), the matrix of interest is infinite, in principle.

¹ Applied & Computational Mathematics, Princeton University, Princeton, New Jersey 08544.

² Permanent address: Department of Fluid Mechanics and Heat Transfer, Faculty of Engineering, Tel-Aviv University, Ramat Aviv, Tel-Aviv 69978, Israel.


0885-7474/87/0300-0033$05.00/0 © 1987 Plenum Publishing Corporation


34 Goldhirsch, Orszag, and Maulik

While rather efficient methods exist for the diagonalization of symmetric matrices (see, e.g., Parlett, 1980 or Lewis, 1977), no satisfactory algorithm for the asymmetric case is known to us. The main obstacle is, of course, the nonorthogonality of the eigenvectors. The Arnoldi (1951) method and its variants enable the calculation of some eigenvalues, but not necessarily those with the largest real parts (which are important in stability investigations; see, however, Jennings and Stewart, 1975, and Saad, 1980, and references therein). At best this method produces eigenvalues of largest moduli. Another disadvantage of the Arnoldi method is the necessity to increase the dimension of the Krylov (Wilkinson, 1965) subspace considered in order to improve accuracy. This may create memory problems.

Another method (Benettin et al., 1980; Shimada and Nagashima, 1979), which was designed to compute Liapunov exponents for dynamical systems, is capable of producing the real parts of the leading eigenvalues. Its advantage is that its implementation does not require a large memory. However, the method converges slowly and produces neither the imaginary parts of the eigenvalues nor the eigenvectors. (However, see Goldhirsch et al., 1987, for an accelerated convergence method and computation of eigenvectors. The latter can be computed efficiently for relatively small systems.)

It thus seems that many important problems involving large asymmetric matrices are very difficult to analyze using available methods. The algorithm proposed in this paper is designed for such problems. It is fast and simple and can therefore be easily implemented.

The proposed method has three essential steps. The first step is designed to filter out nonleading eigenvectors. Let x be an "initial" vector and M the matrix whose leading eigenvalues are sought. We first solve the equation

dx/dt = Mx (1.1)

for 0 ≤ t ≤ t₀. The resulting vector x(t₀) = e^{Mt₀} x is obviously a combination of eigenvectors corresponding to leading eigenvalues (henceforth called leading eigenvectors); the nonleading eigenvectors are damped by exponential factors. Moreover, if it so happens that the chosen initial vector x is independent of a leading eigenvector, then this eigenvector will be introduced into x(t) by roundoff errors in the process of numerical integration of Eq. (1.1). The next step involves the creation of n linearly independent vectors. This can be done either by computing {M^m x(t₀); 0 ≤ m ≤ n−1} or by computing {e^{mMτ} x(t₀); 0 ≤ m ≤ n−1}. It is obvious (and shown below) that the n vectors created this way are independent for nondegenerate spectra. A method for degenerate spectra will be described below as well. Once we obtain n independent vectors that are essentially linear combinations of leading eigenvectors only, it is a matter of straightforward algebra to produce the leading eigenvalues and eigenvectors.
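The filtering step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the forward-Euler integrator, step size, and the 2×2 diagonal test matrix are our own illustrative choices. Components along subdominant eigenvectors decay like e^{(Re λ_i − Re λ₁)t}, so the normalized iterate converges to the leading eigenvector.

```python
# Sketch of the filtering process of Eq. (1.1): integrate xdot = M x
# with forward Euler and renormalize after each step.
def filter_leading(matvec, x, dt, steps):
    for _ in range(steps):
        y = [xi + dt * mi for xi, mi in zip(x, matvec(x))]
        norm = max(abs(v) for v in y) or 1.0  # max-norm keeps numbers O(1)
        x = [v / norm for v in y]
    return x

# Toy example: M = diag(1, -2); the leading eigenvector is (1, 0).
M = [[1.0, 0.0], [0.0, -2.0]]
matvec = lambda v: [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
x = filter_leading(matvec, [1.0, 1.0], dt=0.01, steps=2000)
```

After integrating to t = 20 the subdominant component is damped by roughly e^{-3t}, far below roundoff relevance for this toy case.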

The structure of the paper is as follows. Section 2 presents a description of the method we propose. Section 3 offers error estimates and analyses of the algebraic structure of the vectors produced in the second step of the method. Section 4 presents two algorithms based on the proposed method. Section 5 presents results obtained from an implementation of the method in the case of the Orr-Sommerfeld equation. Section 6 offers a brief summary. Those readers not interested in the details of the mathematical analysis of the method may skip Section 3.

2. DESCRIPTION OF THE METHOD

Consider a matrix M, which is, in general, asymmetric and large (say of rank R). Assume that the eigenvalues of M: λ₁, λ₂, λ₃,... are arranged so that Re λ₁ > Re λ₂ > Re λ₃ > ..., i.e., it is assumed that the real part of the spectrum of M is nondegenerate. Let e₁, e₂, e₃,... be the right eigenvectors of M corresponding to the eigenvalues λ₁, λ₂, λ₃,..., respectively.

Any vector x can be expanded in terms of the right eigenvectors of M:

x = Σ_{i=1}^R α_i e_i (2.1)

Upon applying the operator e^{Mt}, t > 0, to x we obtain

e^{Mt} x = Σ_{i=1}^R α_i e^{λ_i t} e_i (2.2)

Defining the resulting vector in (2.2) as x(t), we observe that the vector

x̂(t) = x(t)/|x(t)| (2.3)

can be approximated, for large enough t, by

x̂(t) ≈ (1/|x(t)|) Σ_{i=1}^n α_i e^{λ_i t} e_i (2.4a)

or

x̂(t) ≈ Σ_{i=1}^n β_i e_i (2.4b)



where

β_i = α_i e^{λ_i t} / |x(t)| (2.4c)

and n is a suitably chosen (truncation) integer. The error involved in the approximation (2.4) is of the order |e^{(λ_{n+1} − λ₁)t}|. Thus, for a given desired accuracy there are values of n and of t that fulfill these requirements. The way t and n are to be chosen in a practical algorithm is explained below. In the rest of this section it is assumed that (2.4) is an actual equality in order to simplify the presentation.

Consider the following vectors:

v_m = M^{m−1} x̂(t) (2.5)

where 1 ≤ m ≤ n+1. Using (2.4b), we obtain

v_m = Σ_{i=1}^n λ_i^{m−1} β_i e_i, 1 ≤ m ≤ n+1 (2.6)

Consider now the matrix W_{ij} defined as

W_{ij} = λ_j^{i−1}, 1 ≤ i, j ≤ n (2.7)

and denote β_i e_i = ê_i. The vectors ê_i, 1 ≤ i ≤ n, are n linearly independent vectors since the e_i's obviously are. Equation (2.6) can be rewritten as

v_m = Σ_{k=1}^n W_{m,k} ê_k (2.8)

The matrix W is a Vandermonde matrix whose determinant is ∏_{i<j} (λ_i − λ_j) and is, by assumption, nonvanishing. Consequently the n vectors v₁,..., v_n are independent and spanned by e₁, e₂,..., e_n. Hence v_{n+1}, which is also spanned by e₁,..., e_n, can be written as

v_{n+1} = γ₁ v₁ + γ₂ v₂ + ... + γ_n v_n (2.9)

Consider now the following linear transformation, defined by

T v_m = v_{m+1}, 1 ≤ m ≤ n (2.10)

This is a linear transformation from the subspace spanned by v₁,..., v_n or e₁,..., e_n to itself. Using (2.6) and the linearity of T, we obtain

T Σ_{i=1}^n λ_i^{m−1} β_i e_i = Σ_{i=1}^n λ_i^m β_i e_i (2.11)



Hence

T e_i = λ_i e_i (2.12)

i.e., the λ_i's are also eigenvalues of the operator T, which acts in a finite (and small) dimensional space. Let

e_i = Σ_{j=1}^n g_{ij} v_j, 1 ≤ i ≤ n (2.13)

be an expansion of the e_i's in terms of the v_j's. Then

T e_i = Σ_{j=1}^n g_{ij} T v_j (2.14)

Define now the matrix D by

T v_i = D_{ij} v_j, 1 ≤ i, j ≤ n (2.15)

where the (Einstein) summation convention over repeated indices is assumed. It follows from (2.10) that D_{ij} = δ_{j,i+1} for 1 ≤ i ≤ n−1 and from (2.9) that D_{n,j} = γ_j. D is just a representation of the T transformation.

Upon substituting (2.12) and (2.13) in the left-hand side of (2.14), and (2.15) in the right-hand side of (2.14), we obtain

λ_i g_{ik} v_k = g_{ij} D_{jk} v_k (2.16)

Hence

λ_i g_{ik} = g_{ij} D_{jk} (2.17)

i.e., the row g_{i,•} (i fixed) is a left eigenvector of the matrix D corresponding to the eigenvalue λ_i. Using the definition of the matrix D we can now proceed to find λ_i and g_{i,•}. Inspection of the matrix D for small n leads to the following result, which can be easily checked by substitution into (2.17):

g_{i,k} = Σ_{m=1}^k γ_m / λ_i^{k+1−m} (2.18)

and

Σ_{m=1}^n γ_m / λ_i^{n+1−m} = 1 (2.19)

Equation (2.19) can also be rewritten as

Σ_{m=1}^n γ_m λ^{m−1} − λ^n = 0 (2.20)
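The combination of Eqs. (2.5), (2.9), and (2.20) can be checked on a toy case. The following sketch (our own illustrative example, not the authors' code) uses n = 2 and the hypothetical matrix M = diag(2, 1): the expansion coefficients γ of v₃ in terms of v₁, v₂ are found from a 2×2 linear system, and the roots of λ² − γ₂λ − γ₁ = 0 recover the spectrum.

```python
# Sketch: eigenvalues from the recursion coefficients of Eq. (2.9)/(2.20),
# for n = 2 and M = diag(2, 1) acting on x = (1, 1).
M = [[2.0, 0.0], [0.0, 1.0]]
mat = lambda v: [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
v1 = [1.0, 1.0]
v2 = mat(v1)                       # v_m = M^{m-1} x, Eq. (2.5)
v3 = mat(v2)
# Solve v3 = g1*v1 + g2*v2 componentwise (Cramer's rule on a 2x2 system).
det = v1[0]*v2[1] - v1[1]*v2[0]
g1 = (v3[0]*v2[1] - v3[1]*v2[0]) / det
g2 = (v1[0]*v3[1] - v1[1]*v3[0]) / det
# Roots of l^2 - g2*l - g1 = 0, cf. Eq. (2.20).
disc = (g2*g2 + 4*g1) ** 0.5
lams = sorted(((g2 + disc)/2, (g2 - disc)/2), reverse=True)
```

For this example γ₁ = −2 and γ₂ = 3, so the quadratic is λ² − 3λ + 2 = 0 with roots 2 and 1, the eigenvalues of M.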

Page 6: An Efficient Method for Computing Leading

38 Goldhirsch, Orszag, and Maulik

Consequently, given the values of γ_m, one can use (2.20) to compute the spectrum and (2.18) and (2.13) to compute the eigenvectors. It only remains to compute the values of γ_m. To this end we perform an orthogonalization of the vectors v₁,..., v_n leading to the orthonormal set w₁,..., w_n such that

w_i = Σ_{j=1}^i c_{ij} v_j (2.21)

where the definition of the matrix c is obvious. Hence

v_{n+1} = Σ_{k=1}^n (v_{n+1}·w_k) w_k (2.22)

and using (2.21)

v_{n+1} = Σ_{m=1}^n Σ_{k=m}^n (v_{n+1}·w_k) c_{k,m} v_m (2.23)

Hence

γ_m = Σ_{k=m}^n (v_{n+1}·w_k) c_{k,m} (2.24)

Equations (2.18), (2.20), and (2.24) constitute the sought solution for the eigenvalues and the eigenvectors.

All of the above formalism can be easily modified when the vectors v_m are defined by

v_m = e^{(m−1)Mτ} x̂(t) (2.25)

where τ is a chosen time interval. In this case, λ_i in Eqs. (2.6) and (2.7) is to be replaced by e^{λ_i τ}. This choice is especially convenient when the matrix M actually represents a differential operator (such as the Orr-Sommerfeld operator). In such a case the vectors v_m can be obtained by solving ẋ = Mx for 0 ≤ t ≤ τ with v_{m−1} as the initial condition, i.e., solving an initial value problem for the differential operator. In this way one avoids storing a large matrix M representing the differential operator.
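The "exponential" variant of Eq. (2.25) can be sketched as follows. This is our own toy illustration, not the authors' code: the system is a hypothetical decoupled pair ẋ_i = λ_i x_i with λ = (0.5, −0.5), each v_m is produced by time-stepping over an interval τ rather than by applying a stored matrix, and the recovered roots are e^{λτ}, from which λ follows by a logarithm.

```python
# Sketch of Eq. (2.25): v_m obtained by integrating xdot = M x over tau,
# so that eigenvalues appear as exp(lambda * tau).
import math

def step_tau(x, lams, tau, nsub=1000):
    # Forward-Euler integration of the decoupled toy system xdot_i = lams_i * x_i.
    h = tau / nsub
    for _ in range(nsub):
        x = [xi + h * li * xi for xi, li in zip(x, lams)]
    return x

lams_true = [0.5, -0.5]
tau = 0.1
v1 = [1.0, 1.0]
v2 = step_tau(v1, lams_true, tau)
v3 = step_tau(v2, lams_true, tau)
# v3 = g1*v1 + g2*v2; roots of mu^2 - g2*mu - g1 = 0 approximate exp(l*tau).
det = v1[0]*v2[1] - v1[1]*v2[0]
g1 = (v3[0]*v2[1] - v3[1]*v2[0]) / det
g2 = (v1[0]*v3[1] - v1[1]*v3[0]) / det
disc = (g2*g2 + 4*g1) ** 0.5
mu1 = (g2 + disc) / 2              # leading root, ~ exp(0.5 * tau)
lam1 = math.log(mu1) / tau         # leading eigenvalue estimate
```

The error in lam1 here is set by the Euler discretization, not by the method itself; a more accurate integrator would tighten it.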

Finally, we note that in the above procedure the coefficients of e_i for increasing values of i are increasingly damped by exponential factors. Moreover, the above method should work, strictly speaking, only for cases of nondegenerate spectra. A remedy to both of these problems is provided by a direct construction of the orthogonal vectors w_i [cf. (2.3) and (2.21)].



This is done by defining

w₁ = x̂(t) (2.26)

and

T w_i = Σ_{k=1}^i a_{ik} w_k + a_{i,i+1} w_{i+1}, 1 ≤ i ≤ n (2.27)

where T w_i = M w_i or T w_i = e^{Mτ} w_i, corresponding to T e_i = λ_i e_i or T e_i = e^{λ_i τ} e_i, respectively (which are the two alternatives presented above). The advantage of computing the vectors w_i by the use of (2.27) is that it enhances the weight of nonleading eigenvectors (see Section 3 for details).

The net result of the formalism presented in this section is the construction of n linearly independent vectors which are spanned (to a good approximation) by the first n eigenvectors of M. Then either Equation (2.20) or a representation of the operator T in the basis defined by {w_i; 1 ≤ i ≤ n} can be used for the computation of the eigenvalues and eigenvectors. The resulting method is economical and simple to implement.
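The direct construction (2.26)-(2.27) amounts to orthonormalizing T w_i against the previously built w's and then diagonalizing the small projected matrix U_{ij} = w_i·(T w_j). A minimal sketch (our own toy example, not the authors' code) with T = M, a hypothetical 2×2 upper-triangular matrix of spectrum {2, 1}:

```python
# Sketch of (2.26)-(2.27): build an orthonormal basis by applying T = M
# and orthogonalizing, then diagonalize the projected matrix U.
M = [[2.0, 1.0], [0.0, 1.0]]
mat = lambda v: [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
dot = lambda a, b: sum(p*q for p, q in zip(a, b))
norm = lambda a: dot(a, a) ** 0.5

x = [1.0, 1.0]
w = [[xi / norm(x) for xi in x]]           # w1 = normalized initial vector
for i in range(1):                          # build w2 from T w1
    y = mat(w[i])
    for wk in w:                            # Gram-Schmidt against earlier w's
        c = dot(y, wk)
        y = [yi - c*wki for yi, wki in zip(y, wk)]
    w.append([yi / norm(y) for yi in y])

U = [[dot(wi, mat(wj)) for wj in w] for wi in w]   # U_ij = w_i . (T w_j)
tr = U[0][0] + U[1][1]
det = U[0][0]*U[1][1] - U[0][1]*U[1][0]
disc = (tr*tr - 4*det) ** 0.5
eigs = sorted(((tr + disc)/2, (tr - disc)/2), reverse=True)
```

Since the two w's span the full space in this 2×2 toy, U is an orthogonal change of basis of M and its eigenvalues are exactly {2, 1}; in the large-matrix setting U is only a projection and the eigenvalues are exponentially accurate, as shown in Section 3.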

3. ANALYSIS OF THE METHOD

The present section is devoted to an analysis of the formalism developed in Section 2. More specifically, we investigate the error involved in using a finite number of vectors e_i (or w_i). In what follows we shall use the "exponential" version for the operator T: T e_i = e^{λ_i τ} e_i.

Firstly, we wish to investigate the extent to which the (right) eigenvectors, {e_i}, of M are spanned by the vectors {w_i} or by the vectors {v_i}. Then we propose to estimate the error involved in the computation of the eigenvalues.

We wish to show that the eigenvectors e_i can be written as follows:

e_i = Σ_{j=1}^i a_{ij} w_j + Σ_{j=i+1}^R a_{ij} e^{λ_{ji} t} e_j (3.1)

where R is the rank of the matrix M, λ_{ji} ≡ λ_j − λ_i, t is the time of filtering or of initial integration, and the coefficients a_{ij} are of order unity. The proof proceeds by induction. By construction we have

x(t) = Σ_{i=1}^R e^{λ_i t} α_i e_i (3.2)

where the α_i are coefficients [see Eq. (2.1)]. Consequently, w₁, being a normalized vector in the direction of x(t), can be written as

w₁ = N₁ Σ_{i=1}^R e^{λ_{i1} t} α_i e_i (3.3)



where the right-hand side of Eq. (3.2) was divided by e^{λ₁ t} and normalized; N₁ is a normalization factor. Hence

e₁ = (1/(N₁α₁)) w₁ − (1/α₁) Σ_{i=2}^R e^{λ_{i1} t} α_i e_i (3.4)

which shows that e₁ indeed has the expected form (with a₁₁ = 1/(N₁α₁) and a_{1,i} = −α_i/α₁ for i ≥ 2). Next we prove that Eq. (3.1) is valid for i = 2. By definition

w₂ = N₂[v₂ − (v₂·w₁) w₁] (3.5)

where N₂ is a normalization factor. Hence

(1/N₂) w₂ + (v₂·w₁) w₁ = α₁ e^{λ₁(t+τ)} e₁ + α₂ e^{λ₂(t+τ)} e₂ + Σ_{i=3}^R α_i e^{λ_i(t+τ)} e_i (3.6)

Substituting e₁ from Eq. (3.4) in the right-hand side of Eq. (3.6) we obtain

(1/N₂) w₂ + (v₂·w₁) w₁ = (1/N₁) e^{λ₁(t+τ)} w₁ + α₂ e^{λ₂ t} (e^{λ₂ τ} − e^{λ₁ τ}) e₂ − e^{λ₁ τ} Σ_{i=3}^R e^{λ_i t} α_i e_i + Σ_{i=3}^R α_i e^{λ_i(t+τ)} e_i (3.7)

By solving for e₂ in Eq. (3.7) it is easy to see that formula (3.1) is correct for i = 2. Assume next that Eq. (3.1) has been proven for 1 ≤ i ≤ m. Define

A_{ij} = a_{ij} for 1 ≤ j ≤ i ≤ m; A_{ij} = 0 for m ≥ j > i ≥ 1 (3.8)

B_{ij} = a_{ij} e^{λ_{ji} t} for m ≥ j > i ≥ 1; B_{ij} = 0 for 1 ≤ j ≤ i ≤ m (3.9)

Both A and B are square matrices of rank m. Define also

C_{ij} = a_{ij} e^{λ_{ji} t} for j ≥ m+1, 1 ≤ i ≤ m; C_{ij} = 0 otherwise (3.10)

By the induction assumption [and using definitions (3.8)-(3.10) to rewrite Eq. (3.1)] we have

e_i = Σ_j A_{ij} w_j + Σ_j B_{ij} e_j + Σ_j C_{ij} e_j (3.11)



for 1 ≤ i ≤ m. Hence

e_i = (I − B)⁻¹_{ik} (A_{kj} w_j + C_{kj} e_j) (3.12)

where the summation convention is assumed. For every integer r

(B^r)_{ik} = Σ a_{i,i₁} a_{i₁,i₂} ... a_{i_{r−1},k} e^{(λ_{i₁,i} + λ_{i₂,i₁} + ... + λ_{k,i_{r−1}}) t} (3.13)

or, since the exponents telescope to λ_k − λ_i,

(B^r)_{ik} ∝ e^{λ_{ki} t} (3.14)

Consequently

(I − B)⁻¹_{ik} = δ_{ik} + h_{ik} e^{λ_{ki} t} (3.15)

where h_{ik} is O(1) and h_{ik} = 0 for i ≥ k or k > m [see Eq. (3.9)]. Substituting Eq. (3.15) in (3.12) we obtain

e_i = A_{ij} w_j + C_{ij} e_j + h_{ik} e^{λ_{ki} t} A_{kj} w_j + h_{ik} e^{λ_{ki} t} C_{kj} e_j, 1 ≤ i ≤ m (3.16)

The first and third terms in the right-hand side of Eq. (3.16) are linear combinations of {w_k; 1 ≤ k ≤ m}. The second term is, using Eq. (3.10),

Σ_{j=m+1}^R a_{ij} e^{λ_{ji} t} e_j (3.17)

The fourth term is

Σ_{k=i+1}^m h_{ik} e^{λ_{ki} t} Σ_{j=m+1}^R a_{kj} e^{λ_{jk} t} e_j = Σ_{j=m+1}^R (Σ_{k=i+1}^m h_{ik} a_{kj}) e^{λ_{ji} t} e_j (3.18)

Consequently, by separating the terms containing {w_j; 1 ≤ j ≤ i} and those that are superpositions of {w_j; i+1 ≤ j ≤ m} in Eq. (3.16), we obtain

e_i = Σ_{j=1}^i γ_{ij} w_j + Σ_{k=i+1}^m Σ_{j=i+1}^m e^{λ_{ki} t} h_{ik} A_{kj} w_j + Σ_{j=m+1}^R r_{ij} e^{λ_{ji} t} e_j (3.19)

where the γ_{ij} and r_{ij} are O(1) quantities. Note that we used the fact that h_{ik} = 0 for i ≥ k or k > m. It follows from Eq. (3.19) that

e_i = Σ_{j=1}^i γ_{ij} w_j + Σ_{j=i+1}^m σ_{ij} e^{λ_{ji} t} w_j + Σ_{j=m+1}^R r_{ij} e^{λ_{ji} t} e_j (3.20)

for 1 ≤ i ≤ m, where the definition of σ_{ij} is obvious. Next we use Eq. (3.20) to complete the induction process. Equation (3.20) itself follows from the induction assumption for 1 ≤ i ≤ m.



By its definition

w_{m+1} = N_{m+1} [v_{m+1} − Σ_{i=1}^m (v_{m+1}·w_i) w_i] (3.21)

where N_{m+1} is a normalization factor. Consequently

v_{m+1} = Σ_{i=1}^{m+1} η⁽¹⁾_{m+1,i} w_i (3.22)

where the η⁽¹⁾_{m+1,i} are O(1) numbers. Using (2.4b) and (2.25) we obtain

v_{m+1} = Σ_{i=1}^m β_i e^{λ_i mτ} e_i + Σ_{i=m+1}^R β_i e^{λ_i mτ} e_i (3.23)

By assumption, the vectors e_i, 1 ≤ i ≤ m, can be expressed by Eq. (3.20). Substituting Eq. (3.20) in Eq. (3.23) we obtain, after a rearrangement of terms,

Σ_{i=1}^{m+1} η⁽²⁾_{m+1,i} w_i = Σ_{i=1}^m β_i e^{λ_i mτ} Σ_{j=m+1}^R r_{ij} e^{λ_{ji} t} e_j + Σ_{i=m+1}^R β_i e^{λ_i mτ} e_i (3.24)

where the η⁽²⁾_{m+1,i} are the new coefficients of the w_i terms. Solving for e_{m+1} in Eq. (3.24) we obtain [using Eq. (2.4c) for β_i]

e_{m+1} = Σ_{i=1}^{m+1} q_{m+1,i} w_i + Σ_{k=m+2}^R s_{m+1,k} e^{λ_{k,m+1} t} e_k (3.25)

where q_{m+1,i} and s_{m+1,k} are O(1) quantities [assuming e^{λ_i τ} is O(1)]. This completes the induction. Consequently Eq. (3.1) and Eq. (3.20) are correct for all i ≥ 1.

It follows from Eq. (3.20) that the error involved in the assumption that e_i is a combination of w₁, w₂,..., w_m is O(e^{λ_{m+1,i} t}), which means the choice of the size of the subspace {w_i; 1 ≤ i ≤ m} should be such as to have |Re(λ_{m+1} − λ_i)t| >> 1 for 1 ≤ i ≤ r if r correct eigenvectors are wanted. Notice that no gap in the spectrum is necessary for this estimate or the proposed method to be valid. Some of the eigenvectors (i.e., those for which |Re(λ_{m+1} − λ_i)t| is not large enough) will not be well approximated by the procedure. In such a case an appropriate increase of the value of m will ensure a good approximation for e_i. A similar statement will be shown below to be true for the corresponding eigenvalues. Before we do that, we present a second result approximating the error involved in expressing the eigenvectors e_i in terms of the v_m's [see Eqs. (2.8) and (2.9)].



We wish to show that for each 1 ≤ i ≤ R there is a linear combination of {v_j; 1 ≤ j ≤ R}, which we denote by u_i and which satisfies

u_i = e_i + Σ_{j=i+1}^R K_{ij} e^{λ_{ji} t} e_j (3.26)

where the K_{ij} are O(1). As before, this statement may be proven by straightforward induction. Here we shall only sketch a proof. For i = 1 define

u₁ = (1/(α₁ e^{λ₁ t})) Σ_{j=1}^R α_j e^{λ_j t} e_j (3.27)

Hence

u₁ = e₁ + Σ_{j=2}^R (α_j/α₁) e^{λ_{j1} t} e_j (3.28)

Next define v_k⁽¹⁾ as v_k divided by α₁ e^{λ₁[t+(k−1)τ]}:

v_k⁽¹⁾ = e₁ + Σ_{j=2}^R (α_j/α₁) e^{λ_{j1}[t+(k−1)τ]} e_j (3.29)

Hence

v_k⁽¹⁾ − u₁ = Σ_{j=2}^R (α_j/α₁) e^{λ_{j1} t} (e^{λ_{j1}(k−1)τ} − 1) e_j

Dividing v_k⁽¹⁾ − u₁ by (α₂/α₁) e^{λ_{21} t} we obtain

v_k⁽²⁾ = Σ_{j=2}^R (α_j/α₂) e^{λ_{j2} t} (e^{λ_{j1}(k−1)τ} − 1) e_j (3.30)

Next define

u₂ ≡ v₂⁽²⁾/(e^{λ_{21} τ} − 1) (3.31)

or

u₂ = e₂ + Σ_{j=3}^R (α_j/α₂) e^{λ_{j2} t} [(e^{λ_{j1} τ} − 1)/(e^{λ_{21} τ} − 1)] e_j (3.32)

A similar procedure of Gauss elimination steps leads to

u₃ = e₃ + Σ_{j=4}^R (α_j/α₃) e^{λ_{j3} t} [(e^{2λ_{j1}τ} − 1)/(e^{2λ_{21}τ} − 1) − (e^{λ_{j1}τ} − 1)/(e^{λ_{21}τ} − 1)] / [(e^{2λ_{31}τ} − 1)/(e^{2λ_{21}τ} − 1) − (e^{λ_{31}τ} − 1)/(e^{λ_{21}τ} − 1)] e_j (3.33)



The continuation of this process obviously leads to the desired equation (3.26). It is now easy to see that the e_i can be expressed in terms of {v₁,..., v_m} with an error of O(e^{λ_{m+1,i} t}). Notice that when τ itself is large the coefficient of e_j in Eq. (3.33) tends to zero, which shows that one may obtain adequate approximations for large values of τ and (even) short values of t.

Finally we turn to estimating the error in the eigenvalues, when computed using a finite number (m) of vectors. To this end define a matrix U by

U_{ij} = w_i·T w_j, 1 ≤ i, j ≤ m (3.34)

U is the reduction (or projection) of the operator T to the finite-dimensional subspace spanned by the v's (or w's). Let φ be an eigenvector of U, with a corresponding eigenvalue Λ:

Σ_{j=1}^m U_{ij} φ_j = Λ φ_i (3.35)

Let

e_i = Σ_{j=1}^R b_{ij} w_j, 1 ≤ i ≤ R (3.36)

Then, since T e_i = e^{λ_i τ} e_i, we have

Σ_{j=1}^R b_{ij} T w_j = e^{λ_i τ} Σ_{j=1}^R b_{ij} w_j (3.37)

Hence

T w_i = Σ_{j,k=1}^R b⁻¹_{ik} e^{λ_k τ} b_{kj} w_j (3.38)

and, using Eq. (3.34),

U_{ij} = Σ_{k=1}^R b⁻¹_{ik} e^{λ_k τ} b_{kj} (3.39)

Hence, from Eqs. (3.35) and (3.39),

Σ_{j=1}^m Σ_{k=1}^R b⁻¹_{ik} e^{λ_k τ} b_{kj} φ_j = Λ φ_i (3.40)

Define

ψ_k = Σ_{j=1}^m b_{kj} φ_j (3.41)



Hence

Σ_{i=1}^m Σ_{k=1}^R b_{ri} b⁻¹_{ik} e^{λ_k τ} ψ_k = Λ ψ_r (3.42)

Since Σ_{i=1}^R b_{ri} b⁻¹_{ik} = δ_{rk}, we have

e^{λ_r τ} ψ_r − Σ_{k=1}^R Σ_{i=m+1}^R b_{ri} b⁻¹_{ik} e^{λ_k τ} ψ_k = Λ ψ_r (3.43)

By comparing Eq. (3.36) and Eq. (3.1) or (3.20) we observe that

b_{ij} = a_{ij} for j ≤ i; b_{ij} = ã_{ij} e^{λ_{ji} t} for j > i (3.44)

where the definition of ã_{ij} is obvious. Using Eqs. (3.8) and (3.9) with m = R and a_{ij} replaced by ã_{ij} in Eq. (3.9), we see that

b = A + B (3.45)

Hence

b⁻¹ = A⁻¹ − A⁻¹BA⁻¹ + A⁻¹BA⁻¹BA⁻¹ − ... (3.46)

Since A⁻¹_{ki} = 0 for k < i, by definition, we have to estimate only terms containing elements of the matrix B in (3.46). In this case

(A⁻¹BA⁻¹)_{ik} = Σ_{i₁,i₂} A⁻¹_{i,i₁} B_{i₁i₂} A⁻¹_{i₂k} (3.47)

(A⁻¹BA⁻¹)_{ik} = Σ_{i₁ ≤ i} Σ_{i₁ < i₂} A⁻¹_{i,i₁} ã_{i₁i₂} e^{λ_{i₂i₁} t} A⁻¹_{i₂k} (3.48)

The largest contribution of e^{λ_{i₂i₁} t} that is possible under the constraints k ≤ i₂, i₂ > i₁, i₁ ≤ i is for i₂ = k and i₁ = i. A similar analysis holds for all terms in the sum Eq. (3.46). Hence

b⁻¹_{ik} = e^{λ_{ki} t} d_{ik} for i < k (3.49)

where the d_{ik} are O(1) numbers. When i ≥ k, A⁻¹_{ik} is O(1) itself and we denote it by d_{ik}. Hence, from (3.43),

e^{λ_r τ} ψ_r − Σ_{i=m+1}^R Σ_{k=1}^i b_{ri} d_{ik} e^{λ_k τ} ψ_k − Σ_{i=m+1}^R Σ_{k=i+1}^R b_{ri} d_{ik} e^{λ_{ki} t} e^{λ_k τ} ψ_k = Λ ψ_r (3.50)



Using (3.44) again (to substitute for b_{ri})

e^{λ_r τ} ψ_r − Σ_{i=m+1}^R Σ_{k=1}^i ã_{ri} e^{λ_{ir} t} d_{ik} e^{λ_k τ} ψ_k − Σ_{i=m+1}^R Σ_{k=i+1}^R ã_{ri} e^{λ_{ir} t} d_{ik} e^{λ_{ki} t} e^{λ_k τ} ψ_k = Λ ψ_r (3.51)

For |Re(λ_{m+1} − λ_r)t| >> 1 the leading-order terms in the double sums of Eq. (3.51) are e^{λ_{m+1,r} t} ã_{r,m+1} Σ_{k=1}^{m+1} d_{m+1,k} e^{λ_k τ} ψ_k and e^{λ_{m+2,r} t} ã_{r,m+1} d_{m+1,m+2} e^{λ_{m+2} τ} ψ_{m+2}, respectively. Hence

(e^{λ_r τ} − Λ) ψ_r − e^{λ_{m+1,r} t} ã_{r,m+1} [Σ_{k=1}^{m+1} d_{m+1,k} e^{λ_k τ} ψ_k + d_{m+1,m+2} e^{λ_{m+2,m+1} t} e^{λ_{m+2} τ} ψ_{m+2}] ≈ 0 (3.52)

To lowest order (i.e., neglecting e^{λ_{m+1,r} t}) an eigensolution satisfies Λ = e^{λ_r τ} and ψ_j = δ_{jr}. The next perturbative correction yields

Λ = e^{λ_r τ} − e^{λ_{m+1,r} t} ã_{r,m+1} d_{m+1,r} e^{λ_r τ} (3.53)

which indeed shows that for a large enough filtration time, the error in the eigenvalues is exponentially small provided Re λ_{m+1}·t is well separated from (or much smaller than) Re λ_r·t. Thus no gap in the spectrum is necessary to obtain excellent values for the leading eigenvalues. Obviously an increase in m will make the term e^{λ_{m+1,r} t} as small as needed.

4. THE ALGORITHMS

In this section we present detailed algorithms for the computation of λ₁,..., λ_m and e₁,..., e_m. The algorithm is described in steps:

(1) Choose an initial vector x₀. It does not matter if x₀ is independent of e₁ or other relevant eigenvectors. They will be introduced in step (2) by round-off errors during the filtration process.

(2) Solve the equation ẋ = Mx, with x₀ as initial condition, up to a time t. Normalize the resulting x(t): x₁ = x(t)/|x(t)|. Use now x₁ as an initial condition and solve for a time t, obtaining x₁(t). Repeat this process r times to obtain

x_r = x(rt)/|x(rt)| (4.1)

The reason one normalizes after each time t is to avoid dealing with large numbers [since |x(t)| ∝ e^{Re λ₁ t} for large times]. The choice of r is explained below. Since it is not clear a priori whether x₀ is independent of e₁, e₂, etc., it is advisable to use low accuracy computations in the first few iterations. In this way the weight of e₁, e₂, etc., will be amplified.

(3) Compute v_k = M^{k−1} x_r, 1 ≤ k ≤ m+1. Orthonormalize v₁, v₂,..., v_m [using Householder transformations (Wilkinson, 1965), for example]. The resulting orthonormal vectors are denoted by w₁, w₂,..., w_m. Following each step of the orthonormalization procedure use the test of step (4) to make sure v₁,..., v_m are independent. The matrix c, defined by w_i = Σ_{j=1}^i c_{ij} v_j, that results from the orthonormalization procedure should be kept.

(4) Test whether v_{m+1} is spanned by v₁,..., v_m. To do this compute

E = |v_{m+1} − Σ_{i=1}^m (v_{m+1}·w_i) w_i| / |v_{m+1}| (4.2)

If the error E is larger than desired, increase m. If this process results in too large a value for m, go back to step (2) and do several additional iterations. Then repeat steps (3) and (4) until E is smaller than the desired accuracy.

(5) Once the matrix c and the vectors v_k and w_k are known, use (2.24) to find the γ_k's. Alternatively, solve the least-squares problem minimizing the expression ||v_{m+1} − Σ_{i=1}^m γ_i v_i||². Standard least-squares routines may be used to solve for the γ_i's directly.

(6) Use (2.20) to find the spectrum.

(7) Use (2.18) to find the eigenvectors.

It should be stressed that in step (4) the test should be performed for m = 2, 3, etc., up to a desired m so as to make sure that v₁,..., v_m are indeed independent. It may happen that the test in step (4) results in a value of m that is smaller than its desired value. In this case, either use a shorter integration time t or the procedure described below. If spurious eigenvalues appear, they will be m dependent. A comparison of the eigenvalues for different values of m enables the identification of the nonspurious eigenvalues.
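The independence test of step (4), Eq. (4.2), can be sketched as follows (a minimal illustration with our own toy vectors, not the authors' code): the relative residual of v_{m+1} after projection onto the orthonormal basis w₁,..., w_m is essentially zero when v_{m+1} is spanned by the w's and O(1) when it is not.

```python
# Sketch of the test of Eq. (4.2): relative residual of v_{m+1}
# against an orthonormal set w_1..w_m.
dot = lambda a, b: sum(p*q for p, q in zip(a, b))
norm = lambda a: dot(a, a) ** 0.5

def residual(v_next, ws):
    r = list(v_next)
    for wk in ws:                       # subtract the projection onto each w_k
        c = dot(v_next, wk)
        r = [ri - c*wki for ri, wki in zip(r, wk)]
    return norm(r) / norm(v_next)

w1, w2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
E_in  = residual([3.0, -2.0, 0.0], [w1, w2])   # lies in span{w1, w2}
E_out = residual([0.0, 0.0, 1.0], [w1, w2])    # independent direction
```

Here E_in vanishes while E_out equals 1, so the test cleanly distinguishes the two cases.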

The reason the procedure described above may converge for a value of m (say m₁) that is smaller than the desired (number of eigenvalues) m can be a gap in the spectrum at m₁, namely,

Re λ₁ > Re λ₂ > ... > Re λ_{m₁} >> Re λ_{m₁+1} > ... (4.3)

In this case, λ₁, λ₂,..., λ_{m₁} and the corresponding eigenvectors are obtained to an exponential accuracy, which is demonstrated in the next section. Therefore it is possible to obtain the m₁ left eigenvectors e_i^L corresponding to e_i, 1 ≤ i ≤ m₁. These vectors satisfy

e_i^L · e_j = δ_{ij} (4.4)



Expanding e_j in the orthonormal set w [see Eq. (2.21)] we obtain

Σ_k (e_i^L · w_k)(w_k · e_j) = δ_{ij} (4.5)

Thus (e_i^L · w_k) is the inverse of the (known) matrix Q (of rank m₁):

Q_{kj} = (w_k · e_j) (4.6)

Consequently

e_i^L = Σ_{j=1}^{m₁} (Q⁻¹)_{ij} w_j (4.7)

Consider now a vector x₀ that is chosen to be orthogonal to e_i^L, 1 ≤ i ≤ m₁. Now perform step (2) with the modification that after each time interval, x(t) is orthogonalized to w₁,..., w_{m₁}. The resulting vector, after a time t, is

y(t) = ρ [e^{Mt} x₀ − Σ_{i=1}^{m₁} ((e^{Mt} x₀)·w_i) w_i] (4.8)

where ρ is a normalization factor introduced in order to ensure |y| = 1. Since x has been orthogonalized to the w_i it follows that

x = Σ_{i=1}^{m₁} ε_i e_i + Σ_{i>m₁} α_i e_i (4.9)

where the ε_i are exponentially small quantities. Substituting (4.9) into (4.8) results in

y(t) = ρ [Σ_{i=1}^{m₁} e^{λ_i t} ε_i (e_i − (e_i·w_j) w_j) + Σ_{i>m₁} e^{λ_i t} α_i (e_i − (e_i·w_j) w_j)] (4.10)

The difference [e_i − (e_i·w_j) w_j] is exponentially small for the exact e_i's. So is (e_i·w_j) for i > m₁. Thus, for t not too large, y(t) ≈ Σ_{i>m₁} e^{λ_i t} α_i e_i and the regular procedure can be applied to y(t) to find λ_{m₁+1}, λ_{m₁+2}, etc. If λ_{m₁} is close to λ_{m₁+1} a relatively large time t in (4.10) may be necessary. In this case a better accuracy for e_i, 1 ≤ i ≤ m₁, should be obtained first. Methods to do so will be given in a future publication.
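The deflation idea of Eqs. (4.8)-(4.10) can be sketched on a toy case (our own illustrative choices throughout: the 3×3 diagonal matrix, the Euler integrator, and the step counts are hypothetical, not from the paper). Once the leading eigenvector is known, the iterate is re-orthogonalized against it after every short integration, so the filtered vector converges to the next eigenvector instead.

```python
# Sketch of deflation: project out the converged leading eigenvector w1
# after each integration step, so that the iterate converges to e2.
M = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
mat = lambda v: [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
dot = lambda a, b: sum(p*q for p, q in zip(a, b))

w1 = [1.0, 0.0, 0.0]                 # converged leading eigenvector e1
x = [1.0, 1.0, 1.0]
dt = 0.01
for _ in range(3000):
    x = [xi + dt*mi for xi, mi in zip(x, mat(x))]   # Euler step of xdot = Mx
    c = dot(x, w1)
    x = [xi - c*w1i for xi, w1i in zip(x, w1)]      # orthogonalize against w1
    n = dot(x, x) ** 0.5
    x = [xi / n for xi in x]                        # renormalize
```

The iterate converges to (0, 1, 0), the eigenvector of the second eigenvalue λ₂ = 2, because the λ₁-component is removed at every step and the λ₃-component is exponentially damped relative to the λ₂-component.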

Another version of the suggested algorithm, which is highly suitable when no gap in the spectrum exists, is based on Eqs. (2.25)-(2.26). Step (2) then involves an orthogonalization after each integration over a time τ. The projection of the operator T (in the subspace spanned by the w's) can then be expressed as shown in Section 3 [Eq. (3.34)]. The small resulting m × m matrix can be diagonalized by LR/QR methods (Wilkinson, 1965) (we have used EIGCG1, an EISPACK routine), yielding exponentially accurate results.

We summarize this section by mentioning that the above method can be efficiently used both when the available computer has a small memory (then increase T_f) and when it has a large one (then more vectors and eigenvalues can be computed).

5. IMPLEMENTATION OF THE METHOD: THE ORR-SOMMERFELD EQUATION

In this section we present an application of the methods described above to a problem in hydrodynamical stability: the spectrum of the Orr-Sommerfeld equation for channel flow. This equation has an infinite number of degrees of freedom and there is no "gap" in its spectrum. Thus it is a good test case for our methods. Moreover, the existence of previously computed, highly accurate values of the spectrum enables us to perform a comparison of our results with the known spectrum (Orszag, 1971).

The Orr-Sommerfeld equation (Drazin and Reid, 1981) reads

( U - c ) (D 2 - ~2)th - U"O = .1 (D 2 _ ~2)2 to~ K (5.1)

~b(x, y, t) = O(y) exp[ie(x - c t ) ]

where U ( y ) for - 1 ~< y ~< 1 is the basic velocity profile, ~b(x, y, t) is the per- turbation stream function, R is the Reynolds number, and D is the cross- stream derivative d/dy. The real number ~ represents the wave number of the streamwise periodic perturbation, and c is the sought complex eigen- value. The real part of the eigenvalue Cr is the phase speed of the pertur- bation. The growth rate of the perturbation is exp(~cit), where c i= Im c. Thus, a perturbation ~, is stable if ci < 0 and unstable if ci > 0. The boun- dary conditions on Eq. (5.1) are

~,(_+ 1)= D~,(_+ 1) =o (5.1b)

The above is an eigenvalue problem. It is transformed to a time-dependent problem (from which it is actually derived) by redefining the disturbance stream function ψ as

ψ(x, y, t) = ψ̂(y, t) e^{iαx}

even though the equation is separable in time. We thus solve the following initial value problem:

∂_t (D² − α²)ψ̂ + iαU(D² − α²)ψ̂ − iαU″ψ̂ = R⁻¹ (D² − α²)² ψ̂ (5.2a)

ψ̂(±1) = Dψ̂(±1) = 0 (5.2b)



It is convenient to define

$\omega = (D^2 - \alpha^2)\phi$  (5.3)

$\omega = \rho + i\lambda$  (5.4a)

$\phi = \theta + i\chi$  (5.4b)

to obtain the coupled real system

$\partial_t \rho - \alpha U \lambda + \alpha U'' \chi = R^{-1}(D^2 - \alpha^2)\rho$  (5.5a)

$\partial_t \lambda + \alpha U \rho - \alpha U'' \theta = R^{-1}(D^2 - \alpha^2)\lambda$  (5.5b)

$(D^2 - \alpha^2)\theta = \rho$  (5.5c)

$(D^2 - \alpha^2)\chi = \lambda$  (5.5d)

with boundary conditions

$\theta(\pm 1) = \chi(\pm 1) = D\theta(\pm 1) = D\chi(\pm 1) = 0$  (5.6)

The real system may be formally written as $\dot\psi = M\psi$, $M$ being an integrodifferential operator.

These equations have been solved by the Chebyshev pseudospectral technique (see the Appendix for details). An initial filtering time $T_f$ was used to produce a function $\phi(y, t)$ that is spanned by a relatively small number of eigenvectors corresponding to leading eigenvalues $\operatorname{Re}(-i\alpha c)$, i.e., the fastest growing modes, as explained previously.
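The Chebyshev pseudospectral machinery itself is standard; a minimal sketch of the collocation differentiation matrix on Gauss-Lobatto points (in the style popularized by Trefethen, not taken from this paper) is:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the Gauss-Lobatto points
    # x_j = cos(pi j / N), j = 0..N (so x[0] = 1, x[N] = -1).
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))     # negative-sum trick for the diagonal
    return D, x

# Spectral accuracy check: differentiate a smooth test function.
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
print(err)   # spectrally small already at N = 16
```

Powers of `D` give the higher derivatives, so the operator $D^2 - \alpha^2$ of Eqs. (5.5) is just `D @ D - alpha**2 * np.eye(N + 1)` on this grid.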

In all computations, results are presented for the parameter values $\alpha = 1.0$ and $R = 10{,}000$. The base velocity profile is $U = 1 - y^2$.

The results of our computations using the methods developed are presented in five tables. A sixth table of eigenvalues from Orszag (1971) is taken as the standard and used for assessing the accuracy of our results. In all cases we obtain several leading eigenvalues to very high accuracy.

In our computations we vary the filtration time $T_f$, the sampling interval $\tau$, and the accuracy of the time integration, as expressed by the discrete time-step size during filtration, $\delta t_f$, and during sampling, $\delta t_s$, in order to elucidate their effect on the accuracy of the computed spectra and the possible generation of spurious modes. We have also investigated the influence of the initial stream function $\psi(y, 0)$ and of the number of eigenvalues determined on the accuracy of the results.

Three methods have been used for obtaining the eigenvalues, as explained below. Table I presents results of two runs with a sampling (delay) interval of $\tau = 15$ and an initial filtration time of $T_f = 50$. The number of correct eigenvalues obtained using Eq. (2.20) is 5 when the number of vectors is $n = 8$ and 4 when $n = 16$. The reason is that the filtration time is too large to produce more eigenvalues accurately. For the case $n = 16$ we also obtain two unstable spurious modes. This is due to the sensitivity of


Table I^a

$T_f = 50$, $\tau = 15$, $\delta t_f = 0.05$, $\delta t_s = 0.005$ ($n = 8$):

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
2    -3.51600679E-02
3    -3.52076045E-02
4    -5.08987828E-02
5    -6.31504249E-02
*    -8.72158010E-02
*    -1.24740808E-01
*    -1.32275383E-01

$T_f = 50$, $\tau = 15$, $\delta t_f = 0.05$, $\delta t_s = 0.005$ ($n = 16$):

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
*     4.58986340E-04
*     3.40898714E-04
*    -3.02658282E-03
*    -1.49838951E-02
*    -2.36492604E-02
*    -2.93611665E-02
2    -3.51423731E-02
3    -3.52793842E-02
4    -5.08971644E-02
*    -1.09428509E-01
*    -1.23518682E-01
*    -1.48108621E-01
*    -1.84825899E-01

^a Here * indicates a mode not identified by the present method.

the roots of the characteristic polynomial to small inaccuracies in the coefficients. The dominant spurious eigenvalues may easily be identified by their appearance as $n$, the number of vectors, is increased. The initial condition for Table I is $(1 - y^2)^2 + \sum_{i=1}^{8} \phi_i$, where the $\phi_i$ are eigenvectors corresponding to the eight fastest growing eigenmodes.
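The sensitivity invoked here is easy to demonstrate numerically. In the hypothetical example below (our own construction, not data from the text), a dozen closely spaced decay rates are encoded as polynomial roots; a relative coefficient perturbation of $10^{-8}$ moves the recovered roots by far more than that.

```python
import numpy as np

# Hypothetical cluster of decay rates, standing in for nearby eigenvalues.
rng = np.random.default_rng(1)
true_roots = -np.linspace(0.01, 0.20, 12)
coeff = np.poly(true_roots)                 # "characteristic polynomial"

# Perturb each coefficient by a relative 1e-8 and re-extract the roots.
perturbed = coeff * (1.0 + 1e-8 * rng.standard_normal(coeff.size))
moved = np.roots(perturbed)

shift = np.max(np.abs(np.sort(moved.real) - np.sort(true_roots)))
print(shift)   # much larger than the 1e-8 coefficient perturbation
```

This ill-conditioning of clustered polynomial roots is why root-finding on the characteristic polynomial of Eq. (2.20) produces spurious modes, while the reduced-matrix routes described next are more forgiving.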

Table II was obtained with an initial condition of $\psi = U^2(1 + y)$. The

Table II

$T_f = 50$, $\tau = 20$, $\delta t_f = 0.05$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967054E-03
2    -3.51921344E-02
4    -5.09056673E-02
*    -5.52082215E-02
*    -7.13017236E-02
*    -1.32670178E-01
*    -1.47519731E-01
*    -1.64393611E-01

$T_f = 75$, $\tau = 20$, $\delta t_f = 0.075$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967055E-03
2    -3.51630043E-02
3    -3.67725906E-02
4    -5.08973087E-02
5    -6.18479421E-02
*    -8.98244690E-02
*    -1.25561448E-01
*    -1.90016840E-01



factor $(1 + y)$ is introduced to ensure that the initial condition has both an even and an odd part, as $U^2$ is even in $y$. As the filtration time is increased from $T_f = 50$ to $T_f = 75$ the number of accurate eigenvalues rises to 5. The longer filtration time allows better damping of the decaying modes not of interest. It is possible that the cruder time step in the second case, $\delta t_f = 0.075$ as opposed to $\delta t_f = 0.05$, may introduce modes independent of the initial condition by way of numerical noise.

In Table III the vectors $v_m$ were generated from the initial condition $U^2(1 + 0.71y)$ after a filtration of $T_f = 75$ and a delay interval of $\tau = 20$. However, instead of using Eq. (2.20) for finding the eigenvalues, we have employed the LR algorithm, as implemented in the EISPACK routine EIGCG1, to calculate the eigenvalues of the matrix directly. Four eigenvalues are obtained accurately both for $n = 8$ and $n = 16$ vectors.

Tables IV and V were produced using the method of orthogonalization described at the end of Section 2 [see Eqs. (2.25) and (2.26)]. The eigenvalues of the reduced matrix obtained were again computed using EIGCG1. In Table IV an increase of $T_f$ from 50 to 150 leads to an increase in the number of accurate eigenvalues from 3 to 5 for $n = 8$ vectors. The improved accuracy of this last method is obtained, as explained in Section 3, by the amplification of the weight of the nonleading

Table III

$T_f = 75$, $\tau = 20$, $\delta t_f = 0.075$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967061E-03
2    -3.51313310E-02
3    -3.77382799E-02
4    -5.08939102E-02
5    -6.08654660E-02
*    -8.94164908E-02
*    -1.23537573E-01
*    -1.83295676E-01

$T_f = 50$, $\tau = 20$, $\delta t_f = 0.05$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
*     5.04217689E-03
1     3.73967060E-03
*    -9.48657987E-04
*    -1.15472857E-02
*    -1.55870014E-02
*    -2.30304183E-02
2    -3.40328560E-02
3    -3.51905276E-02
4    -5.09188577E-02
*    -5.12119074E-02
5    -6.27014559E-02
7    -9.93438231E-02
9    -1.11875376E-01
10   -1.11941416E-01
*    -1.17783465E-01
*    -1.70313262E-01


Table IV

$T_f = 50$, $\tau = 20$, $\delta t_f = 0.05$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
2    -3.51932116E-02
4    -5.09053371E-02
*    -5.48854885E-02
*    -7.36915731E-02
*    -1.38156654E-01
*    -1.41371117E-01
*    -1.58274564E-01

$T_f = 75$, $\tau = 20$, $\delta t_f = 0.075$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
2    -3.51137133E-02
3    -3.74768214E-02
4    -5.08944600E-02
5    -6.07586848E-02
7    -9.17031242E-02
*    -1.23567838E-01
*    -1.82561894E-01

$T_f = 150$, $\tau = 20$, $\delta t_f = 0.075$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
2    -3.51723718E-02
3    -3.51870632E-02
4    -5.08987622E-02
5    -6.30009355E-02
*    -1.37863162E-01
*    -1.43432356E-01
*    -1.74728812E-01

eigenvectors in the orthogonalization procedure. Indeed, as seen in Table V, the number of accurate eigenvalues when $n = 32$ vectors is used is 13. The method of orthogonalization produces 12 accurate eigenvalues for $n = 32$ even when no filtration is invoked. In all cases we have obtained excellent accuracy for the leading eigenvalue. The subdominant eigenvalues corresponding to antisymmetric modes tend to be somewhat more accurate than eigenvalues corresponding to symmetric modes.

6. SUMMARY AND CONCLUSIONS

We have shown how one can obtain leading eigenvalues and eigenvectors for large asymmetric matrices using a relatively simple and economical numerical scheme. We have also shown how such a method can be applied when the matrix to be diagonalized is a differential operator. In the latter case our method does not require storage of an effective matrix, which represents the differential operator on a complete basis of expansion functions.

Three variants of the method were tested: (a) a direct application of the formalism presented in Section 2; (b) a construction of a set of vectors as explained in Section 2 and the associated reduced matrix, followed by a diagonalization using standard eigenvalue routines for general matrices; (c) direct construction of a set of orthogonal vectors as at the end of Section 2, followed by a diagonalization of the reduced matrix.

The third variant seems to be the superior method. The method was found to be very robust and did not require fine tuning to improve the


accuracy of the calculated eigenvalues. Nor does it produce spurious dominant eigenvalues. The accuracy of the leading eigenvalues may be further increased by considering larger reduced matrices, that is, by increasing the number of orthogonal eigenvectors. The errors involved in the proposed algorithm are analyzed in detail in Section 3 and are in good agreement with our numerical results.

Table V

$T_f = 75$, $\tau = 20$, $\delta t_f = 0.075$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73982608E-03
2    -3.51127796E-02
3    -3.51320836E-02
4    -5.08984842E-02
5    -6.31517698E-02
6    -6.32018420E-02
7    -9.11772541E-02
8    -9.12687323E-02
9    -1.19161983E-01
10   -1.19342042E-01
*    -1.21170936E-01
11   -1.24500425E-01
12   -1.38223897E-01
*    -1.44467829E-01
13   -1.45609239E-01
14   -1.46448075E-01
15   -1.75078430E-01
17   -1.81719355E-01
*    -1.97966129E-01
18   -2.04658289E-01
21   -2.07844699E-01
*    -2.20742079E-01
*    -2.22041721E-01
*    -2.67948177E-01

$T_f = 50$, $\tau = 20$, $\delta t_f = 0.05$, $\delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967060E-03
2    -3.51672225E-02
3    -3.51865287E-02
4    -5.08987305E-02
5    -6.32014442E-02
6    -6.32515219E-02
7    -9.12220344E-02
8    -9.13134832E-02
9    -1.19223288E-01
10   -1.19380224E-01
11   -1.24500810E-01
12   -1.38224635E-01
13   -1.45450671E-01
*    -1.51063411E-01
*    -1.58158173E-01
*    -1.60717285E-01
*    -1.61124155E-01
*    -1.64875002E-01
*    -1.67412781E-01
*    -1.72649311E-01
15   -1.75188825E-01
*    -1.78192328E-01
*    -1.81309666E-01
*    -1.82017191E-01
17   -1.82776874E-01
*    -1.92666139E-01
*    -1.98326255E-01
*    -2.02233301E-01
*    -2.20500814E-01
*    -2.36270762E-01
*    -3.62463018E-01
*    -3.88173344E-01

$T_f = 0$, $\tau = 20$, $\delta t_f = \delta t_s = 0.005$:

Identified mode number    Eigenvalue $c_i$
1     3.73967061E-03
2    -3.51672225E-02
3    -3.51865287E-02
4    -5.08987305E-02
5    -6.32014442E-02
6    -6.32515219E-02
7    -9.12220344E-02
8    -9.13134832E-02
9    -1.19223290E-01
10   -1.19380226E-01
11   -1.24500810E-01
12   -1.38224635E-01
13   -1.45453318E-01
*    -1.51161943E-01
*    -1.58054756E-01
*    -1.60711699E-01
*    -1.61259736E-01
*    -1.64733124E-01
*    -1.67404319E-01
*    -1.72218483E-01
*    -1.74695007E-01
*    -1.80676443E-01
17   -1.82319221E-01
*    -1.83159305E-01
*    -1.89780943E-01
*    -1.95481111E-01
*    -2.01652740E-01
*    -2.02249437E-01
*    -2.06222881E-01
*    -2.34084842E-01
*    -2.65098028E-01
*    -3.11342750E-01


When a large matrix of rank $R$ is considered, the number of operations necessary to find its eigenvalues using standard methods scales like $R^3$. In our method, the number of operations is proportional to the rank $R$, the number of vectors used, $m$, and the time of filtration $t_f$; i.e., it is $O(R m t_f)$. The time of filtration depends on the spectrum, as explained in Section 3. If one wishes to obtain $k$ correct eigenvalues one needs

$|e^{(\lambda_{m+1} - \lambda_k) t_f}| \ll 1$

Thus the value of $t_f$ is of the order of $1/|\lambda_{m+1} - \lambda_k|$.
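As an illustrative order-of-magnitude comparison (our numbers, not values given in the text): for $R = 10^4$ unknowns, $m = 32$ vectors, and $t_f/\delta t \approx 10^3$ time steps, one has

```latex
\underbrace{R^3}_{\text{standard}} = 10^{12},
\qquad
\underbrace{R\, m\, (t_f/\delta t)}_{\text{present method}}
\approx 10^4 \times 32 \times 10^3 \approx 3 \times 10^{8},
```

a saving of over three orders of magnitude in operation count, in addition to the reduction in storage.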

Table VI. Least Stable Eigenvalues for $\alpha = 1$, $R = 10{,}000$

Mode number    Eigenvalue $c_i$
1    +0.00373967
2    -0.03516728
3    -0.03518658
4    -0.05089873
5    -0.06320150
6    -0.06325157
7    -0.09122274
8    -0.09131286
9    -0.11923285
10   -0.11937073
11   -0.12450198
12   -0.13822652
13   -0.1472339
14   -0.1474256
15   -0.1752287
16   -0.1754781
17   -0.1828219
18   -0.203221
19   -0.203529
20   -0.206465
21   -0.208731
22   -0.23119
23   -0.23159
24   -0.23882
25   -0.25872
26   -0.25969
27   -0.25988
28   -0.26511
29   -0.26716
30   -0.28551
31   -0.28663
32   -0.28765


Future applications of this method could include complex and non- Newtonian hydrodynamical stability problems, lattice eigenvalue problems, and other systems leading to large asymmetric matrices for which the dominant eigenvalues are of interest and standard methods are either slowly converging or the memory requirements are prohibitively large. We believe that this method coupled to acceleration techniques may enable one to tackle many interesting eigenvalue problems.

ACKNOWLEDGMENTS

This work was partially supported by the Israel U.S. Binational Science Foundation and the Office of Naval Research under contracts Nos. N00014-82-C-0451 and N00014-85-K-0201 and the Air Force Office of Scientific Research under contract No. F49620-85-C-0026.

APPENDIX

In this Appendix we present some details concerning the numerical solution of Eqs. (5.5). We have employed a pseudospectral technique based on the expansion of $\phi$ in Chebyshev polynomials, which gives good resolution of boundary and critical layers. A third-order Adams-Bashforth time-stepping scheme was used for explicit evaluation of the advection (variable-coefficient) terms and a second-order Crank-Nicolson scheme for the linear diffusion terms. The time-discretized equations read

$\frac{\rho^{(n+1)} - \rho^{(n)}}{\Delta t} - \alpha U\left[\frac{23}{12}\lambda^{(n)} - \frac{16}{12}\lambda^{(n-1)} + \frac{5}{12}\lambda^{(n-2)}\right] + \alpha U''\left[\frac{23}{12}\chi^{(n)} - \frac{16}{12}\chi^{(n-1)} + \frac{5}{12}\chi^{(n-2)}\right] = \frac{1}{2R}(D^2 - \alpha^2)\left(\rho^{(n+1)} + \rho^{(n)}\right)$  (A.1)

and

$\frac{\lambda^{(n+1)} - \lambda^{(n)}}{\Delta t} + \alpha U\left[\frac{23}{12}\rho^{(n)} - \frac{16}{12}\rho^{(n-1)} + \frac{5}{12}\rho^{(n-2)}\right] - \alpha U''\left[\frac{23}{12}\theta^{(n)} - \frac{16}{12}\theta^{(n-1)} + \frac{5}{12}\theta^{(n-2)}\right] = \frac{1}{2R}(D^2 - \alpha^2)\left(\lambda^{(n+1)} + \lambda^{(n)}\right)$  (A.2)
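The flavor of this mixed explicit/implicit splitting can be checked on a scalar model equation $u' = au + du$, treating the $a$-term with third-order Adams-Bashforth (like the advection terms above) and the $d$-term with Crank-Nicolson (like the diffusion terms). The toy problem and its parameters below are ours, not the paper's.

```python
import numpy as np

# Model problem u' = a*u + d*u: AB3 for the "a" term, CN for the "d" term.
a, d = -1.0, -5.0
dt, nsteps = 1.0e-3, 1000
u = np.zeros(nsteps + 1)
u[0] = 1.0
hist = [a * u[0]]                           # history of explicit terms
for n in range(nsteps):
    if n == 0:
        ab = hist[-1]                       # Euler start-up
    elif n == 1:
        ab = 1.5 * hist[-1] - 0.5 * hist[-2]    # AB2 start-up
    else:
        ab = (23 * hist[-1] - 16 * hist[-2] + 5 * hist[-3]) / 12.0
    # Crank-Nicolson on the implicit term: (1 - dt*d/2) u^{n+1} = rhs
    u[n + 1] = (u[n] + dt * ab + 0.5 * dt * d * u[n]) / (1.0 - 0.5 * dt * d)
    hist.append(a * u[n + 1])

exact = np.exp((a + d) * dt * nsteps)       # exact solution at t = 1
print(abs(u[-1] - exact))
```

In the actual system the Crank-Nicolson term is the operator $(1/2R)(D^2 - \alpha^2)$ rather than a scalar, which is what produces the Helmholtz solves of Eq. (A.3).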


Equations (A.1) and (A.2) can be rearranged as follows:

$\left(D^2 - \alpha^2 - \frac{2R}{\Delta t}\right)\rho^{(n+1)} = F_1^{(n,\, n-1,\, n-2)}$

$\left(D^2 - \alpha^2 - \frac{2R}{\Delta t}\right)\lambda^{(n+1)} = F_2^{(n,\, n-1,\, n-2)}$  (A.3)

$F_1$ and $F_2$ being found from (A.1) and (A.2). The functions $\theta$ and $\chi$ satisfy [see Eqs. (5.3) and (5.5)]

$(D^2 - \alpha^2)\theta^{(n+1)} = \rho^{(n+1)}$

$(D^2 - \alpha^2)\chi^{(n+1)} = \lambda^{(n+1)}$  (A.4)

Since the boundary conditions are on $\phi = \theta + i\chi$ alone, we have used the following Green's function technique to satisfy these conditions. First, we find solutions to (A.3) satisfying $\rho^{(n+1)}(\pm 1) = \lambda^{(n+1)}(\pm 1) = 0$. We call these solutions $\rho_{hom}^{(n+1)}$ and $\lambda_{hom}^{(n+1)}$. These solutions are then substituted in the right-hand side of Eqs. (A.4), which are solved using the (given) boundary conditions $\theta(\pm 1) = \chi(\pm 1) = 0$. We call these solutions $\theta_{hom}^{(n+1)}$ and $\chi_{hom}^{(n+1)}$, respectively. They do not necessarily satisfy the Neumann boundary conditions $D\theta(\pm 1) = D\chi(\pm 1) = 0$. Next we solve the homogeneous equations

$\left(D^2 - \alpha^2 - \frac{2R}{\Delta t}\right)\rho = 0, \qquad \left(D^2 - \alpha^2 - \frac{2R}{\Delta t}\right)\lambda = 0$  (A.5)

using boundary conditions $\rho(1) = 1$, $\rho(-1) = 0$ and $\lambda(1) = 1$, $\lambda(-1) = 0$, respectively. These solutions are called $\rho_+$ and $\lambda_+$. Similarly we find $\rho_-$ and $\lambda_-$, which satisfy (A.5) with $\rho_-(1) = 0$, $\rho_-(-1) = 1$ and $\lambda_-(1) = 0$, $\lambda_-(-1) = 1$. Subsequently Eqs. (A.4) are solved with $\rho_+$, $\rho_-$ and $\lambda_+$, $\lambda_-$ on the right-hand side, yielding $\theta_\pm$ and $\chi_\pm$, respectively [with boundary conditions $\theta_\pm(\pm 1) = \chi_\pm(\pm 1) = 0$ as required]. The general solution for $\theta^{(n+1)}$ and $\chi^{(n+1)}$ can now be written

$\theta^{(n+1)} = \theta_{hom}^{(n+1)} + a_+\theta_+ + a_-\theta_-$

$\chi^{(n+1)} = \chi_{hom}^{(n+1)} + b_+\chi_+ + b_-\chi_-$  (A.6)

The constants $a_\pm$ and $b_\pm$ are determined by imposing the Neumann boundary conditions $D\theta^{(n+1)}(\pm 1) = D\chi^{(n+1)}(\pm 1) = 0$, thus yielding a solution $\phi^{(n+1)} = \theta^{(n+1)} + i\chi^{(n+1)}$ that satisfies all four boundary conditions. The solutions $\theta_\pm$ and $\chi_\pm$ need only be computed once, in a preprocessing step, thus necessitating only two Poisson solvers per time step. The above leads to a very efficient time integration for the initial value problem.
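The two-stage influence-function construction can be sketched for a single real component on a steady model problem: solve $(D^2 - k^2)w = f$, $(D^2 - k^2)u = w$ with all four conditions $u(\pm 1) = Du(\pm 1) = 0$ imposed on $u$ alone. This is a minimal stand-alone sketch of the idea; the forcing, wavenumber, and function names are our illustrative choices, not the paper's.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on Gauss-Lobatto points, x[0]=1, x[N]=-1
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N, k = 32, 1.0
D, x = cheb(N)
L = D @ D - k**2 * np.eye(N + 1)            # Helmholtz operator D^2 - k^2

def helmholtz(rhs, top, bot):
    # Dirichlet solve of (D^2 - k^2) v = rhs with v(1)=top, v(-1)=bot
    A = L.copy()
    b = rhs.astype(float).copy()
    A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = top
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = bot
    return np.linalg.solve(A, b)

f = np.cos(np.pi * x)                       # arbitrary smooth forcing
w0 = helmholtz(f, 0.0, 0.0)                 # stage 1, homogeneous Dirichlet
u0 = helmholtz(w0, 0.0, 0.0)                # stage 2: u0(+-1)=0, Du0(+-1) != 0

wp = helmholtz(np.zeros(N + 1), 1.0, 0.0)   # "+" influence solution
wm = helmholtz(np.zeros(N + 1), 0.0, 1.0)   # "-" influence solution
up = helmholtz(wp, 0.0, 0.0)
um = helmholtz(wm, 0.0, 0.0)

# Pick a+, a- so that u = u0 + a+ up + a- um also satisfies Du(+-1) = 0.
G = np.array([[(D @ up)[0], (D @ um)[0]],
              [(D @ up)[-1], (D @ um)[-1]]])
ap, am = np.linalg.solve(G, -np.array([(D @ u0)[0], (D @ u0)[-1]]))
u = u0 + ap * up + am * um
w = w0 + ap * wp + am * wm

bc_err = max(abs(u[0]), abs(u[-1]), abs((D @ u)[0]), abs((D @ u)[-1]))
res = np.max(np.abs((L @ u - w)[1:-1]))     # interior consistency of u and w
print(bc_err, res)
```

The influence solutions `up`, `um` play the role of $\theta_\pm$ above: they are computed once, and each time step then costs only the two Dirichlet solves plus a 2x2 system for the superposition coefficients.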


REFERENCES

Arnoldi, W. E. (1951). Q. Appl. Math. 9, 17.
Bennetin, G., Galgani, L., Giorgilli, A., and Strelcyn, J. M. (1980). Meccanica 15(9), 21.
Drazin, P. G., and Reid, W. H. (1981). The Theory of Hydrodynamic Stability, Cambridge University Press, New York.
Goldhirsch, I., Sulem, P. L., and Orszag, S. A. (1987). Stability and Lyapunov stability of dynamical systems: A differential approach and a numerical method, Physica D (in press).
Jenning, A., and Stewart, J. (1975). J. Inst. Math. Appl. 15, 351.
Lewis, J. G. (1977). Algorithms for sparse matrix eigenvalue problems, Ph.D. thesis, Stanford University report No. 77-595.
Orszag, S. A. (1971). J. Fluid Mech. 50, 689-703.
Parlett, B. N. (1980). The Symmetric Eigenvalue Problem, Prentice-Hall, Englewood Cliffs, New Jersey.
Saad, Y. (1980). Linear Algebra Appl. 34, 269 (and references therein).
Shimada, I., and Nagashima, T. (1979). Prog. Theor. Phys. 61, 1605.
Stanley, H. E. (1971). Introduction to Phase Transitions and Critical Phenomena, Oxford University Press, New York.
Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem, Clarendon Press, Oxford.