
On Accelerating Monte Carlo Techniques for Solving Large Systems of Equations

TR96-041, 1996

John H. Halton

Department of Computer Science
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3175

UNC is an Equal Opportunity/Affirmative Action Institution.


ON ACCELERATING MONTE CARLO TECHNIQUES FOR SOLVING LARGE SYSTEMS OF EQUATIONS

John H. Halton, The University of North Carolina at Chapel Hill

1. INTRODUCTION

This paper is concerned with ways of incorporating current Monte Carlo techniques for solving large linear systems [hereinafter referred to as "plain Monte Carlo", PMC] into accelerative schemes and other numerical techniques, for more rapidly solving both linear and nonlinear systems.

Let $\mathcal{S}$ be a system of equations, linear or nonlinear, whose solution is an $m$-dimensional vector $x$. Write

$x = y + z$,   (1.1)

where $y$ is an estimate of $x$ and $z$ is the corresponding correction. Call

$\delta = \|x - y\|_\infty = \|z\|_\infty$   (1.2)

the error¹ in the estimate $y$. Let

$\mathbb{L} = \mathbb{L}(\mathcal{S}, y)$   (1.3)

be a linear problem, based on $\mathcal{S}$ and the estimate $y$, whose solution is $z$. Let

$\mathcal{A} = \mathcal{A}(\mathbb{L}, Z)$   (1.4)

be an algorithm which generates and solves $\mathbb{L}$ to yield an estimate $Z$ of $z$. We can then take $y + Z$ as an improved estimate of $x$. While the algorithm $\mathcal{A}$ may be deterministic or stochastic, we shall think particularly of the case in which

$\mathcal{A}$ is a Monte Carlo algorithm.   (1.5)

Write

$\Delta = \|z - Z\|_\infty$   (1.6)

for the error in $Z$.

¹ Here we choose the norm of any $m$-dimensional vector $v = (v_1, v_2, \ldots, v_m)$ to be the maximum (or $L_\infty$) norm, $\|v\|_\infty = \max_{1 \le j \le m} |v_j|$.

Now suppose that we begin with the problem $\mathcal{S}$ and an initial


estimate $y^{(0)}$ of its solution $x$, and we set out to solve $\mathcal{S}$ iteratively, by successively using the family of algorithms

$\mathcal{A}_r = \mathcal{A}(\mathbb{L}_r, Z^{(r)})$   (1.7)

to generate and solve the corresponding linear problems

$\mathbb{L}_r = \mathbb{L}(\mathcal{S}, y^{(r)})$.   (1.8)

At each iteration,²

$x = y^{(r)} + z^{(r)}$,   (1.9)

with³

$\delta_r = \|x - y^{(r)}\|_\infty = \|z^{(r)}\|_\infty$,   (1.10)

and we put

$y^{(r+1)} = y^{(r)} + Z^{(r)}$,   (1.11)

with⁴

$\Delta_r = \|z^{(r)} - Z^{(r)}\|_\infty$,   (1.12)

and cycle through (1.7) and (1.8) until the error in the estimate $y^{(r)}$ is sufficiently small:

$\delta_r < \varepsilon$.   (1.13)

Clearly, by (1.9) and (1.11),

$z^{(r+1)} = x - y^{(r+1)} = x - y^{(r)} - Z^{(r)} = z^{(r)} - Z^{(r)}$,   (1.14)

whence, by (1.10) and (1.12),

$\delta_{r+1} = \|z^{(r+1)}\|_\infty = \|z^{(r)} - Z^{(r)}\|_\infty = \Delta_r$.   (1.15)

² Compare (1.1).
³ See (1.2).
⁴ See (1.6).
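The scheme (1.7)-(1.15) amounts to iterative refinement with an arbitrary inner solver. The following Python fragment is a minimal sketch of that loop, not Halton's own code; the callables `make_problem` and `solve_problem`, standing for $\mathbb{L}(\mathcal{S}, y)$ and $\mathcal{A}(\mathbb{L}, Z)$, are assumptions of the sketch.

```python
import numpy as np

def refine(make_problem, solve_problem, y0, eps=1e-10, max_iter=100):
    """Sequential refinement, cf. (1.7)-(1.15): set up the linear problem
    L_r = L(S, y^(r)) for the correction z^(r) = x - y^(r), solve it
    (exactly or stochastically) for an estimate Z^(r), and update
    y^(r+1) = y^(r) + Z^(r), cf. (1.11)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        L = make_problem(y)            # (1.8)
        Z = solve_problem(L)           # (1.7): estimate of the correction
        y = y + Z                      # (1.11)
        if np.max(np.abs(Z)) < eps:    # stop once corrections are negligible
            break
    return y

# Example: S is x = a + Hx; L(S, y) is the residual problem (cf. (2.7)),
# and here solve_problem solves it exactly (a deterministic choice of A).
rng = np.random.default_rng(1)
m = 4
H = rng.random((m, m)) / (2 * m)       # rho(H) < 1
a = rng.random(m)
x = refine(lambda y: a + H @ y - y,
           lambda d: np.linalg.solve(np.eye(m) - H, d),
           np.zeros(m))
print(np.max(np.abs(x - (a + H @ x))))  # residual ~ 0
```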


2. SEQUENTIAL MONTE CARLO FOR LINEAR SYSTEMS

One example of this kind of process is the sequential Monte Carlo (SMC) method, in which the problem $\mathcal{S}$ is itself linear, of the form

$x = a + Hx$,   (2.1)

the initial iterate is

$y^{(0)} = 0$,   (2.2)

so that, by (1.9),

$z^{(0)} = x$,   (2.3)

and the initial linear problem $\mathbb{L}_0 = \mathbb{L}(\mathcal{S}, y^{(0)})$ is cast in the form

$z^{(0)} = d^{(0)} + H z^{(0)}$,   (2.4)

with

$d^{(0)} = a$.   (2.5)

Now PMC (i.e., Algorithm $\mathcal{A}_0 = \mathcal{A}(\mathbb{L}_0, Z^{(0)})$)⁵ is applied to (2.4), with a Markov probability matrix $P$ and a scoring scheme $\Sigma$ (these are generally chosen once and for all), to yield a stochastic estimate $Z^{(0)}$ of $z^{(0)}$, with error $\Delta_0 = \delta_1$, which is usually estimated by the sample standard deviation [ssd] $\sigma_0$ of the scores whose average is $Z^{(0)}$. We then obtain the new iterate

$y^{(1)} = y^{(0)} + Z^{(0)} = Z^{(0)}$.   (2.6)

For each $r \ge 1$, in the same way, Algorithm $\mathcal{A}_r = \mathcal{A}(\mathbb{L}_r, Z^{(r)})$ then computes

$d^{(r)} = a + H y^{(r)} - y^{(r)}$,   (2.7)

whence, by (1.9) and (2.1),

$d^{(r)} = a + (Hx - H z^{(r)}) - (x - z^{(r)}) = z^{(r)} - H z^{(r)}$,

or⁶

$z^{(r)} = d^{(r)} + H z^{(r)}$.   (2.8)

⁵ See (1.4), (1.5), and (1.8).
⁶ Compare (2.4).


Algorithm $\mathcal{A}_r$ solves this problem by PMC, yielding an estimate $Z^{(r)}$, and we accumulate

$y^{(r+1)} = \sum_{q=0}^{r} Z^{(q)}$.   (2.9)

It is then known that, if the problem is intrinsically convergent, a condition ensured by making

$\rho(H) < 1$, $\quad \rho(H^+) < 1$, $\quad \rho(K) < 1$,   (2.10)

where $\rho(A)$ denotes the spectral radius of a matrix $A$,⁷

$(H^+)_{ij} = |H_{ij}|$ and $(K)_{ij} = H_{ij}^2 / P_{ij}$,   (2.11)

and, further, if the number of scores computed in each sequential iteration is sufficiently large,⁸ then the error $\Delta_r$ in the stochastic estimate $Z^{(r)}$, measured by its ssd, $\sigma_r$, decreases geometrically [linearly, exponentially]; i.e., there are constants $S$ and $\zeta$, such that $|\zeta| < 1$ and

$\sigma_r \le S \zeta^r$.   (2.12)

⁷ This is defined as the supremum of all absolute values [moduli] of eigenvalues: $\rho(H) = \max\{|\lambda| : (\exists x \ne 0)\ Hx = \lambda x\}$.
⁸ This number can be the same for all sequential iterations (stages).
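To make the construction concrete, here is a minimal Python sketch of PMC and SMC for $x = a + Hx$. The particular choices below — transition probabilities $P_{ij} = |H_{ij}|$, termination probability $1 - \sum_j P_{ij}$ at each state, and a collision-type scoring scheme — are illustrative assumptions, not the paper's prescription of $P$ and $\Sigma$; they require $\rho(H^+) < 1$, cf. (2.10).

```python
import numpy as np

rng = np.random.default_rng(42)

def pmc_solve(H, d, n_walks=2000):
    """Plain Monte Carlo (PMC) estimate of the solution of z = d + Hz.

    A von Neumann-Ulam random-walk scheme with the illustrative choices
    P_ij = |H_ij| (so the weight ratios H_ij / P_ij are just signs) and
    termination probability 1 - sum_j P_ij at each state.  Returns the
    estimate Z and the ssd of each component's mean score."""
    m = len(d)
    P = np.abs(H)
    stop = 1.0 - P.sum(axis=1)              # termination probabilities
    assert np.all(stop > 0), "row sums of |H| must be below 1"
    Z, ssd = np.empty(m), np.empty(m)
    for i0 in range(m):
        scores = np.empty(n_walks)
        for n in range(n_walks):
            i, w, score = i0, 1.0, 0.0
            while True:
                score += w * d[i]           # collision estimator: score w*d_i
                j = rng.choice(m + 1, p=np.append(P[i], stop[i]))
                if j == m:                  # absorbed: the walk ends
                    break
                w *= H[i, j] / P[i, j]      # carry the weight H_ij / P_ij
                i = j
            scores[n] = score
        Z[i0] = scores.mean()
        ssd[i0] = scores.std(ddof=1) / np.sqrt(n_walks)
    return Z, ssd

def smc_solve(H, a, n_stages=6, n_walks=2000):
    """Sequential Monte Carlo, cf. (2.1)-(2.9): at each stage, PMC is applied
    to the residual problem z = d + Hz, with d^(r) = a + H y^(r) - y^(r),
    cf. (2.7), and the estimates accumulate: y^(r+1) = y^(r) + Z^(r)."""
    y = np.zeros_like(a)
    for _ in range(n_stages):
        d = a + H @ y - y
        Z, _ = pmc_solve(H, d, n_walks)
        y = y + Z
    return y

m = 4
H = (rng.random((m, m)) - 0.5) / m          # row sums of |H| at most 1/2
a = rng.random(m)
y = smc_solve(H, a)
print(np.max(np.abs(y - (a + H @ y))))      # residual of x = a + Hx; small
```

The point of the sequential wrapper is visible in the code: the source term $d^{(r)}$ shrinks as $y^{(r)} \to x$, so each stage's Monte Carlo scores (and hence the ssd) shrink with it, which is the mechanism behind (2.12).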

3. SEQUENTIAL MONTE CARLO FOR NONLINEAR SYSTEMS

Another example is a more complicated application of SMC, for a nonlinear problem,

$F(v) = 0$,   (3.1)

using Newton's method. By Taylor's Theorem, if we write

$v = w^{(n)} + x^{(n)}$,   (3.2)


then

$F(v) = F(w^{(n)} + x^{(n)}) = F(w^{(n)}) + (x^{(n)} \cdot \nabla) F(w^{(n)}) + \tfrac{1}{2} (x^{(n)} \cdot \nabla)^2 F(w^{(n)}) + \tfrac{1}{6} (x^{(n)} \cdot \nabla)^3 F(w^{(n)}) + \cdots = 0.$   (3.3)

We linearize this problem to yield the approximation

$F(w^{(n)}) + (x^{(n)} \cdot \nabla) F(w^{(n)}) = 0$,   (3.4)

i.e.,

$J(w^{(n)})\, x^{(n)} = -F(w^{(n)})$,   (3.5)

where $J(w^{(n)})$ is the value of the Jacobian matrix of $F$ at $w^{(n)}$:

$(J(w))_{ij} = \partial F_i / \partial w_j$.   (3.6)

Now, we select an invertible (regular, non-singular) matrix $G^{(n)}$ and put

$a^{(n)} = -G^{(n)} F(w^{(n)})$   (3.7)

and

$H^{(n)} = I - G^{(n)} J(w^{(n)})$,   (3.8)

yielding, by (3.5), that

$H^{(n)} x^{(n)} = x^{(n)} - G^{(n)} J(w^{(n)}) x^{(n)} = x^{(n)} + G^{(n)} F(w^{(n)}) = x^{(n)} - a^{(n)}$,

i.e.,

$x^{(n)} = a^{(n)} + H^{(n)} x^{(n)}$.   (3.9)

The analogy to (2.1) is immediate. We now apply SMC, as in §2,⁹ to the solution of (3.9). This entails successive sequential stages, yielding stochastic estimates $Z^{(n,r)}$ of $x^{(n)}$, with ssd $\sigma^{(n,r)}$. As in §2, it is then known that the SMC method is intrinsically convergent¹⁰ if, for all $n$,

$\rho(H^{(n)}) < 1$, $\quad \rho(H^{(n)+}) < 1$, $\quad \rho(K^{(n)}) < 1$,   (3.10)

and if the number of scores computed in each sequential iteration is sufficiently large.

⁹ This is, in essence, (2.2)-(2.9), with the added superscript $(n)$. Note: Algorithm $\mathcal{A}_r$ is now the entire SMC algorithm of §2, not PMC.
¹⁰ See (2.12).


We then know that, if $0 < \alpha < 1$, and we continue the $n$-th Newtonian iteration's SMC until

$\|Z^{(n)}\|_\infty,\ \sigma^{(n,r)} \le \alpha\, \|Z^{(n-1)}\|_\infty^2$,   (3.11)

we can find constants $S$ and $\zeta$, such that $|\zeta| < 1$ and

$\delta^{(n)} \le S\, \zeta^{2^n}$;   (3.12)

i.e., the convergence is quadratic, as in Newton's method.
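A compact Python sketch of this outer/inner structure follows. It is illustrative only: the inner fixed-point iteration on $x = a^{(n)} + H^{(n)} x$ stands in for the SMC solver of §2, and the choice of $G$ (here frozen at the initial Jacobian inverse, so that $H^{(n)} \ne 0$) is an assumption of the sketch, not the paper's prescription.

```python
import numpy as np

def newton_via_linear_systems(F, J, w0, n_newton=10, n_inner=60):
    """Section 3, sketched: each Newton step is recast as the linear
    system x = a + Hx, cf. (3.7)-(3.9), and solved by an inner iteration
    (here deterministic; in the paper, by SMC)."""
    w = np.asarray(w0, dtype=float)
    G = np.linalg.inv(J(w))              # frozen approximate inverse Jacobian
    for n in range(n_newton):
        a = -G @ F(w)                    # (3.7)
        H = np.eye(len(w)) - G @ J(w)    # (3.8); small if G ~ J(w)^{-1}
        x = np.zeros_like(w)
        for _ in range(n_inner):         # solve x = a + Hx, cf. (3.9)
            x = a + H @ x
        w = w + x                        # w^(n+1) = w^(n) + x^(n)
    return w

# Example: F(v)_i = v_i^2 - c_i, whose solution is sqrt(c) componentwise.
c = np.array([1.2, 1.5, 0.8])
F = lambda v: v * v - c
J = lambda v: np.diag(2.0 * v)
print(newton_via_linear_systems(F, J, np.ones(3)))  # ~ sqrt(c)
```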

4. THE EIGENVALUE PROBLEM

The eigenvalue equation is

$Hx = \lambda x$,   (4.1)

with

$x \ne 0$.   (4.2)

If $x$ is a solution (called an eigenvector), so is any multiple $\kappa x$, so long as $\kappa$ is not zero. Thus, $x$ really identifies an eigendirection. The eigenvector $x$ and the eigenvalue $\lambda$ are then said to belong to each other. If eigenvalues $\lambda$ and $\mu$ both belong to the same eigenvector $x$, we see by (4.1) and (4.2) that

$Hx = \lambda x = \mu x$ and $x \ne 0$, so that $\lambda = \mu$;   (4.3)

i.e., each eigenvector belongs to only one eigenvalue. Similarly, if eigenvectors $x$ and $y$ both belong to the same eigenvalue $\lambda$, then

$Hx = \lambda x$ and $Hy = \lambda y$,

whence, for any $\alpha$ and $\beta$,

$H(\alpha x + \beta y) = \lambda(\alpha x + \beta y)$.   (4.4)

More generally, we see that there is always an entire eigensubspace belonging to any given eigenvalue, and all such eigensubspaces are disjoint [as is usual in vector space theory, we ignore the null vector 0, which is in every subspace, but is not an eigenvector].

As is well known, the eigenvalues of a given (m x m) matrix H are solutions of the polynomial equation of degree m,


$\det(H - \lambda I) = 0$,   (4.5)

which is called the characteristic equation. It has just $m$ solutions $\lambda_1, \lambda_2, \ldots, \lambda_m$ (if we take multiplicity into account), which can always be ordered so that

$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_m| \ge 0$.   (4.6)

In the general case, the set of all eigenvectors can (if necessary) be augmented by further vectors, to form a base of the $m$-dimensional vector space. In this paper, we shall assume¹¹ that

$|\lambda_1| > |\lambda_2| > |\lambda_3| > \cdots > |\lambda_m| \ge 0$,   (4.7)

and then the corresponding eigenvectors $x_1, x_2, \ldots, x_m$ are linearly independent, so that any vector $v$ can be written uniquely as

$v = \xi_1 x_1 + \xi_2 x_2 + \cdots + \xi_m x_m$.   (4.8)

Suppose that we have a $v$ such that

$\xi_1 \ne 0$ and $\xi_2 \ne 0$.   (4.9)

[Statistically, this is almost surely the case.] Then

$Hv = \xi_1 \lambda_1 x_1 + \xi_2 \lambda_2 x_2 + \cdots + \xi_m \lambda_m x_m$.   (4.10)

¹¹ The complications arising if eigenvalues are not distinct are well understood, and their treatment, while not easy, is described in the literature; appropriate techniques are available. In a statistical sense, it is highly unlikely that any two eigenvalues should be equal, or should even have the same magnitude, unless the particular class of problems considered constrains such equality.


5. THE POWER METHOD

Write $O(f(n))$ for an asymptotic characterization of a vector whose norm increases, as $n \to \infty$, no faster than a given (usually, a simple) function $f(n)$.¹² Then we see, by (4.8) and (4.9), that we can write

$u^{(0)} = v$   (5.1)

and, as $r \to \infty$,

$u^{(r)} = H^r v = \xi_1 \lambda_1^r x_1 + \xi_2 \lambda_2^r x_2 + O(|\lambda_3|^r)$.   (5.2)

If we put

$\kappa = \xi_2 / \xi_1$, $\quad \alpha = \lambda_2 / \lambda_1$, $\quad \beta = \lambda_3 / \lambda_1$,   (5.3)

so that, by (4.7),

$|\beta| < |\alpha| < 1$,   (5.4)

we see, by (5.2), that, as $r \to \infty$,

$u^{(r)} = \xi_1 \lambda_1^r [x_1 + \kappa \alpha^r x_2 + O(|\beta|^r)]$   (5.5)

or

$u^{(r)} = \xi_1 \lambda_1^r [x_1 + O(|\alpha|^r)]$.   (5.6)

In particular,

$u_i^{(r)} = \xi_1 \lambda_1^r [x_{1i} + O(|\alpha|^r)]$;   (5.7)

i.e., for all $i$,

$u_i^{(r+1)} / u_i^{(r)} = \lambda_1 [1 + O(|\alpha|^r)]$.   (5.8)

Since an eigenvector really identifies an eigendirection,¹³ we can always assume, without loss of generality, that the base vectors are all normalized, each having at least one maximal component having value 1, i.e.,

$\|x_j\|_\infty = 1$,   (5.9)

with

$x_{j h_j} = 1$.   (5.10)

Of course, this defines the indices $h_1, h_2, \ldots, h_m$, to within a possible (unimportant) amount of ambiguity [if several components $x_{j h_j}$ have magnitude $|x_{j h_j}| = 1$]. By (4.7), (4.9), (5.8), and (5.10), we observe that, for all sufficiently large $r$,

$\|u^{(r)}\|_\infty = |u_{h_1}^{(r)}|$   (5.11)

and

$u_{h_1}^{(r)} = \xi_1 \lambda_1^r [1 + O(|\alpha|^r)]$.   (5.12)

¹² This is an extension of the well-known "big-Oh" notation for asymptotics: the boldface $O$ indicates a vector.
¹³ See the explanation just after (4.2).

Now let us scale the vectors $u^{(r)}$ defined above, by the relations

$z^{(0)} = (1/\sigma_0) u^{(0)} = (1/\sigma_0) v$, $\quad z^{(r)} = (1/\sigma_r) u^{(r)}$.   (5.13)

If we put

$z^{(r)} = (1/\tau_r) H z^{(r-1)}$,   (5.14)

then, by (5.1), (5.13), and (5.14),

$z^{(r)} = (1/\sigma_r) u^{(r)} = (1/\tau_r) H z^{(r-1)} = (1/\tau_r \tau_{r-1}) H^2 z^{(r-2)} = \cdots = (1/\tau_r \tau_{r-1} \cdots \tau_1) H^r z^{(0)} = (1/\tau_r \tau_{r-1} \cdots \tau_1 \sigma_0) H^r v = (1/\tau_r \tau_{r-1} \cdots \tau_1 \sigma_0) u^{(r)}$,   (5.15)

whence

$\tau_r = \sigma_r / \sigma_{r-1}$.   (5.16)

If we now specify that the vectors $z^{(r)}$ be normalized, too,

$\|z^{(r)}\|_\infty = 1$,   (5.17)

then, by (5.13) and (5.17),

$|\sigma_r| = \|u^{(r)}\|_\infty$.   (5.18)

We note that the $\sigma_r$ are not yet known beyond their absolute values [moduli], so that we can choose their sign (or phase angle, if they are complex) arbitrarily. As for the base vectors, we assume that at least one maximal component of $z^{(r)}$ has value 1.¹⁴ Let this component be $z_{k_r}^{(r)}$. Then, by (5.4), (5.5), (5.11), and (5.8),¹⁵

$k_r = h_1$ for all sufficiently large $r$,   (5.19)

¹⁴ This is modulo the ambiguity referred to just after (5.10).
¹⁵ In the case of any ambiguity, we try to keep the index $k_r$ constant, rather than allow it to fluctuate unnecessarily.


and, for all sufficiently large $r$, we can put

$z_{h_1}^{(r)} = 1$;   (5.20)

i.e.,

$\sigma_r = u_{h_1}^{(r)}$.   (5.21)

Furthermore, for these values of $r$, by (5.14),

$\tau_r = (H z^{(r-1)})_{h_1}$.   (5.22)

Also,

$H z^{(r-1)} = (1/\sigma_{r-1}) H u^{(r-1)} = (1/\sigma_{r-1}) u^{(r)}$,   (5.23)

so

$\tau_r = u_{h_1}^{(r)} / \sigma_{r-1}$.   (5.24)

Therefore, by (4.9), (5.4), (5.13), (5.16), and (5.23), as $r \to \infty$,¹⁶

$\tau_r = (H z^{(r-1)})_{h_1} = \dfrac{(H u^{(r-1)})_{h_1}}{\sigma_{r-1}} = \dfrac{\xi_1 \lambda_1^r [1 + \kappa \alpha^r x_{2 h_1} + O(|\beta|^r)]}{\xi_1 \lambda_1^{r-1} [1 + \kappa \alpha^{r-1} x_{2 h_1} + O(|\beta|^{r-1})]} = \lambda_1 \{1 - \kappa (1 - \alpha) \alpha^{r-1} x_{2 h_1} + O(|\beta|^r)\} = \lambda_1 \{1 + O(|\alpha|^r)\}.$   (5.25)

Thus

$\tau_r \to \lambda_1$ as $r \to \infty$.   (5.26)

Furthermore, the convergence is geometric, since the error is given by (5.4) and (5.25) as $O(|\alpha|^r)$. This is called the Power Method.

¹⁶ Here, we use the well-known properties of asymptotic expressions, that (i) $O(|\alpha|^{r-1}) = O(|\alpha|^r)$, and (ii) $\{1 + O(|\alpha|^r)\}/\{1 + O(|\alpha|^r)\} = 1 + O(|\alpha|^r)$.
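In code, the normalization (5.20)-(5.22) is just a division by the $h_1$-th component. A minimal Python sketch (assuming, as in (4.7)-(4.9), a dominant eigenvalue and $\xi_1 \ne 0$; the random test matrix is an arbitrary choice):

```python
import numpy as np

def power_method(H, v, n_iter=200):
    """Power Method, cf. (5.13)-(5.26): iterate z^(r) = (1/tau_r) H z^(r-1),
    holding the component at the index h1 of a maximal entry equal to 1,
    cf. (5.20); then tau_r -> lambda_1 geometrically, cf. (5.25)-(5.26)."""
    z = np.asarray(v, dtype=float)
    h1 = np.argmax(np.abs(z))       # index of a maximal component
    z = z / z[h1]                   # z_{h1} = 1
    for _ in range(n_iter):
        w = H @ z
        tau = w[h1]                 # tau_r = (H z^(r-1))_{h1}, cf. (5.22)
        z = w / tau                 # renormalize so that z_{h1} = 1 again
    return tau, z                   # estimates of lambda_1 and x_1

rng = np.random.default_rng(0)
H = rng.random((5, 5))              # positive matrix: dominant eigenvalue exists
lam, x = power_method(H, rng.random(5))
print(lam, np.max(np.abs(H @ x - lam * x)))   # residual ~ 0
```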


6. RAYLEIGH QUOTIENTS

If the matrix $H$ has additional properties, even faster methods are available. For example, if $H$ is Hermitian, i.e.,¹⁷

$H = H^*$ or $(\forall i, j)\ H_{ij} = H_{ji}^*$,   (6.1)

then

$x_i^* H x_i = x_i^* (H x_i) = x_i^* (\lambda_i x_i) = \lambda_i\, x_i^* x_i = (x_i^* H) x_i = (x_i^* H^*) x_i = (H x_i)^* x_i = (\lambda_i x_i)^* x_i = \lambda_i^*\, x_i^* x_i$,

whence

$\lambda_i^* = \lambda_i$;   (6.2)

that is, all the eigenvalues are real; and if $i \ne j$ and $\lambda_i \ne \lambda_j$,

$x_i^* H x_j = x_i^* (\lambda_j x_j) = \lambda_j\, x_i^* x_j = (H x_i)^* x_j = (\lambda_i x_i)^* x_j = \lambda_i\, x_i^* x_j$,

so that

$x_i^* x_j = 0$,   (6.3)

i.e., the eigenvectors belonging to distinct eigenvalues are orthogonal. Hence, by (5.5), (5.8), and (6.3), with a little simplification, we get that

$\mathcal{R}_r = \dfrac{(H z^{(r)})^* z^{(r)}}{z^{(r)*} z^{(r)}} = \dfrac{(H u^{(r)})^* u^{(r)}}{u^{(r)*} u^{(r)}} = \lambda_1\, \dfrac{x_1^* x_1 + \kappa^* \kappa\, \alpha\, |\alpha|^{2r} x_2^* x_2 + O(|\beta|^{2r})}{x_1^* x_1 + \kappa^* \kappa\, |\alpha|^{2r} x_2^* x_2 + O(|\beta|^{2r})} = \lambda_1\, \dfrac{1 + O(|\alpha|^{2r})}{1 + O(|\alpha|^{2r})} = \lambda_1 [1 + O(|\alpha|^{2r})]$.   (6.4)

Thus,

$\mathcal{R}_r \to \lambda_1$ as $r \to \infty$;   (6.5)

¹⁷ For a possibly complex number $z = x + iy$, where $x$ and $y$ are real numbers, $z^*$ denotes the complex conjugate number, $z^* = x - iy$. For a possibly complex matrix $H = L + iM$, again with matrices $L$ and $M$ real, $H^*$ denotes the Hermitian transpose, $H^* = L^T - iM^T$, as indicated in (6.1).


and, furthermore, the convergence is twice as fast as for the regular Power Method. $\mathcal{R}_r$ is called the Rayleigh quotient.
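As a small illustration (a sketch, with an arbitrarily chosen symmetric matrix), the Rayleigh quotient of the power-method iterates converges with the squared error factor of (6.4):

```python
import numpy as np

def rayleigh_quotient(H, z):
    """R_r = (H z)^* z / (z^* z), cf. (6.4); for Hermitian H, the error
    is O(|alpha|^{2r}), i.e., twice as fast (in the exponent) as tau_r."""
    return np.vdot(H @ z, z) / np.vdot(z, z)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # symmetric, hence Hermitian
z = np.array([1.0, 1.0])
for r in range(6):
    z = A @ z
    z = z / np.abs(z).max()                # keep the iterate normalized
    print(r, rayleigh_quotient(A, z))      # -> (5 + sqrt(5))/2 ~ 3.618
```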

7. THE ITERATE DIFFERENCE

Now consider the difference $z^{(r)} - z^{(r-1)}$. By (5.4), (5.13), (5.16), (5.14), and (6.1),

$z^{(r)} - z^{(r-1)} = (1/\sigma_r) u^{(r)} - (1/\sigma_{r-1}) u^{(r-1)}$
$= (1/\sigma_r)\{\xi_1 \lambda_1^r [x_1 + \kappa \alpha^r x_2 + O(|\beta|^r)]\} - (1/\sigma_{r-1})\{\xi_1 \lambda_1^{r-1} [x_1 + \kappa \alpha^{r-1} x_2 + O(|\beta|^{r-1})]\}$
$= (\xi_1 \lambda_1^r / \sigma_r)\{(1 - \tau_r/\lambda_1) x_1 + \kappa (1 - \tau_r/\lambda_2) \alpha^r x_2 + O(|\beta|^r)\}$
$= \{1 + O(|\alpha|^r)\}\{(1 - \tau_r/\lambda_1) x_1 + \kappa (1 - \tau_r/\lambda_2) \alpha^r x_2 + O(|\beta|^r)\}$.

We observe, by (5.25), that

$1 - \tau_r/\lambda_1 = \kappa (1 - \alpha) \alpha^{r-1} x_{2 h_1} + O(|\beta|^r) = \alpha^r O(1)$,   (7.1)

while

$1 - \tau_r/\lambda_2 = O(1)$;   (7.2)

so that

$z^{(r)} - z^{(r-1)} = \alpha^r \{1 + O(|\alpha|^r)\}\{A x_1 + B x_2 + O(|\beta/\alpha|^r)\}$,   (7.3)

where $A$ and $B$ are constants independent of $r$. Thus, we can estimate both $\lambda_2$ (through $\alpha$) and $x_2$ (since we presumably have estimates of $\lambda_1$ and $x_1$) from the sequence of differences, much as we estimate the principal eigenvalue $\lambda_1$ and eigenvector $x_1$ from the $z^{(r)}$. Here, again, convergence is geometric.
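A sketch of how (7.3) might be used in practice: since the differences shrink like $\alpha^r$ times a fixed vector, the componentwise ratio of successive differences estimates $\alpha$, whence $\lambda_2 = \alpha \lambda_1$. The matrix and iteration count below are arbitrary choices for illustration.

```python
import numpy as np

def second_eigen_estimates(H, v, n_iter=40):
    """Section 7, sketched: after the power method settles, the differences
    z^(r) - z^(r-1) shrink like alpha^r, cf. (7.3), so the componentwise
    ratio of successive differences estimates alpha = lambda_2 / lambda_1."""
    z = np.asarray(v, dtype=float)
    h1 = np.argmax(np.abs(z))
    z = z / z[h1]
    prev_diff, alpha, tau = None, None, None
    for _ in range(n_iter):
        w = H @ z
        tau = w[h1]                             # -> lambda_1, cf. (5.26)
        z_new = w / tau
        diff = z_new - z
        if prev_diff is not None:
            k = np.argmax(np.abs(prev_diff))    # use a large component
            alpha = diff[k] / prev_diff[k]      # -> lambda_2 / lambda_1
        prev_diff, z = diff, z_new
    return tau, alpha * tau                     # lambda_1, lambda_2 estimates

H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
l1, l2 = second_eigen_estimates(H, np.array([1.0, 0.8, 0.6]))
print(l1, l2, np.sort(np.linalg.eigvalsh(H)))   # compare the top two
```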


8. THE INVERSE POWER METHOD

We return to the eigenvalue problem, equations (4.1) and (4.2), discussed in §4-§7. We shall assume that all eigenvalues of the matrix $H$ are of different magnitudes,¹⁸ and also that the matrix $H$ is Hermitian,¹⁹ so that its eigenvalues $\lambda_i$ are real and its eigenvectors $x_i$ form a base of $m$-space and are orthogonal, as in (6.3); indeed, we can assume that they are orthonormal:²⁰

$x_i^* x_j = \delta_{ij}$.   (8.1)

The theory presented earlier still applies, slightly modified by the new normalization. We see that (5.9) and (5.10) are replaced by

$\|x_j\|_2 = 1$,   (8.2)

and (5.17) and (5.18) are replaced by

$\|z^{(r)}\|_2 = 1$   (8.3)

and

$|\sigma_r| = \|u^{(r)}\|_2$.   (8.4)

Thus, by (5.14) and (5.19) (which still holds), in the limit

$x_1 = (1/\tau_r) H x_1 = (1/\tau_r) \lambda_1 x_1$,

which implies (5.26), as before. Furthermore, the Rayleigh quotient method of §6 is entirely unchanged.

¹⁸ See (4.7).
¹⁹ See (6.1).
²⁰ Here, $\delta_{ij}$ is "Kronecker's delta": $\delta_{ij} = 1$ if $i = j$, $\delta_{ij} = 0$ if $i \ne j$. This assumption replaces the normalization implied in (5.9) and (5.10). Instead of the $L_\infty$ norm, $\|v\|_\infty = \max_{1 \le j \le m} |v_j|$, we use the $L_2$ norm, $\|v\|_2 = (v^* v)^{1/2} = (\sum_{j=1}^m |v_j|^2)^{1/2}$.


Now let $\mu$ be any real number and consider the matrix

$M = H - \mu I$.   (8.5)

Clearly, for any eigenvector $x_i$ of $H$,

$M x_i = (H - \mu I) x_i = (\lambda_i - \mu) x_i$.   (8.6)

If we assume that $M$ is invertible,²¹ and write

$N = M^{-1} = (H - \mu I)^{-1}$,   (8.7)

so that

$M N = N M = I$,   (8.8)

then²²

$N x_i = (\lambda_i - \mu)^{-1} x_i$.   (8.9)

Therefore, by (4.8) and (4.10),

$M v = \xi_1 (\lambda_1 - \mu) x_1 + \xi_2 (\lambda_2 - \mu) x_2 + \cdots + \xi_m (\lambda_m - \mu) x_m$   (8.10)

and

$N v = \xi_1 (\lambda_1 - \mu)^{-1} x_1 + \xi_2 (\lambda_2 - \mu)^{-1} x_2 + \cdots + \xi_m (\lambda_m - \mu)^{-1} x_m$.   (8.11)

That is to say, the vectors $x_i$ are also eigenvectors of the matrices $M$ and $N$.

Now consider carrying out the power method with the matrix $N$ instead of $H$. Suppose that $\mu$ is nearest to the eigenvalue $\lambda_s$, and nearer to it than to any other eigenvalue, and write

$\lambda_i = \mu + \nu_i$.   (8.12)

Then

$\min_{1 \le i \le m} |\lambda_i - \mu| = |\lambda_s - \mu|$,   (8.13)

or

$\min_{1 \le i \le m} |\nu_i| = |\nu_s|$.   (8.14)

If we define the parameter $\pi$ by

$0 < |\nu_s| < \pi = \min_{i \ne s} |\lambda_i - \lambda_s|$,   (8.15)

²¹ This assumption is equivalent to assuming that $\mu$ is itself not an eigenvalue of $H$.
²² To obtain (8.9), premultiply (8.6) by $N$ and divide by the (presumed non-zero) scalar $\lambda_i - \mu$.

then

$\min_{i \ne s} |\nu_i| \ge \pi + |\nu_s|$,   (8.16)

whence

$\max_{i \ne s} \{|\nu_i|^{-1}\} \le (\pi + |\nu_s|)^{-1} < \pi^{-1}$.   (8.17)

Let us also write

$0 < a = \dfrac{|\nu_s|}{\pi + |\nu_s|} < 1$.   (8.18)

Arguing just as in §5, we see that, if²³

$u^{(0)} = v$ and $u^{(r)} = N^r v$,   (8.19)

then, as $r \to \infty$,²⁴

$u^{(r)} = \xi_s \nu_s^{-r} x_s + O((\pi + |\nu_s|)^{-r}) = \xi_s \nu_s^{-r} [x_s + O(a^r)]$.   (8.20)

As before, we can normalize the $u^{(r)}$ as $z^{(r)}$, with (8.3),²⁵ and then

$|\sigma_r| = \|u^{(r)}\|_2 = |\xi_s|\, |\nu_s|^{-r} [1 + O(a^{2r})]^{1/2}$,   (8.21)

whence²⁶

$z^{(r)} = (1/\sigma_r)\, u^{(r)} = [1 + O(a^{2r})]\, x_s + O(a^r)$,   (8.22)

and so²⁷

$z^{(r)} \to x_s$ as $r \to \infty$.   (8.23)

²³ See (5.1) and (5.2).
²⁴ Compare (5.6).
²⁵ See footnote 20.
²⁶ Observe that, by the Binomial Theorem, since $|a| < 1$, as $r \to \infty$, $[1 + O(a^{2r})]^{1/2} = 1 + \tfrac{1}{2} O(a^{2r}) - \tfrac{1}{8} O(a^{4r}) + \cdots = 1 + O(a^{2r})$.
²⁷ See footnote 16. Furthermore, the sign of $x_s$ is ambiguous. By keeping the sign of some component of $z^{(r)}$, say $z_j^{(r)}$, constant, as $r \to \infty$, we can always make $z^{(r)} \to x_s$, rather than $z^{(r)} \to -x_s$.

Hence, as $r \to \infty$, if we put

$y^{(r)} = N z^{(r-1)} = (u^{(r-1)*} u^{(r-1)})^{-1/2}\, u^{(r)}$,   (8.24)

then²⁸

$\mathfrak{W}_{r-1} = (y^{(r)*} z^{(r-1)})^{-1} = \dfrac{u^{(r-1)*} u^{(r-1)}}{u^{(r)*} u^{(r-1)}} = \dfrac{\xi_s^2 \nu_s^{-2r+2} [1 + O(a^{2r})]}{\xi_s^2 \nu_s^{-2r+1} [1 + O(a^{2r})]} = \nu_s [1 + O(a^{2r})]$.   (8.25)

Thus,

$\mathfrak{W}_{r-1} \to \nu_s$,   (8.26)

or, by (8.12),

$\mathfrak{W}_{r-1} + \mu \to \lambda_s$.   (8.27)

This shows that this inverse power method converges to $\lambda_s$, the nearest eigenvalue to $\mu$, as $r \to \infty$, and the convergence is again geometric.

The computational procedure is thus as follows.

0. Begin with an arbitrary vector $v = u^{(0)}$ and an arbitrary real number $\mu$. Define $z^{(0)} = (v^* v)^{-1/2} v$, so that $\|z^{(0)}\|_2 = 1$. Take $r = 1$.

1. i. Define the matrix

$M = H - \mu I$.   (8.28)

ii. Solve the equation

$M y^{(r)} = z^{(r-1)}$   (8.29)

and put

$z^{(r)} = (y^{(r)*} y^{(r)})^{-1/2}\, y^{(r)}$.   (8.30)

2. Compute

$\mathfrak{W}_{r-1} = (y^{(r)*} z^{(r-1)})^{-1}$.   (8.31)

3. Increment $r$. Repeat [1] and [2] until

$\|z^{(r)} - z^{(r-1)}\|_2 < \varepsilon$.   (8.32)

²⁸ Compare (6.4), given (8.3).
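A direct transcription of (8.28)-(8.32) into Python follows, as a sketch; the dense solve in step [1][ii] stands in for whatever solver (in this paper, PMC) is actually used, and the test matrix and shift are arbitrary.

```python
import numpy as np

def inverse_power(H, mu, v, eps=1e-12, max_iter=200):
    """The procedure (8.28)-(8.32): with a fixed shift mu, solve
    M y^(r) = z^(r-1) with M = H - mu*I, normalize, and read off
    W_{r-1} = (y^(r)* z^(r-1))^{-1} -> nu_s, so mu + W -> lambda_s."""
    M = H - mu * np.eye(len(v))                  # (8.28)
    z = v / np.linalg.norm(v)
    for _ in range(max_iter):
        y = np.linalg.solve(M, z)                # (8.29)
        z_new = y / np.linalg.norm(y)            # (8.30)
        W = 1.0 / np.vdot(y, z)                  # (8.31)
        # sign convention of footnote 27: keep z^(r) aligned with z^(r-1)
        if np.vdot(z_new, z).real < 0:
            z_new = -z_new
        if np.linalg.norm(z_new - z) < eps:      # (8.32)
            break
        z = z_new
    return mu + W.real, z_new

H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, x = inverse_power(H, mu=4.4, v=np.ones(3))
print(lam, np.sort(np.linalg.eigvalsh(H)))  # picks the eigenvalue nearest 4.4
```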


9. ACCELERATING THE INVERSE POWER METHOD

We know that the inverse power method (described above) converges geometrically to $\lambda_s$. It is also clear that the rate of convergence of the method is governed by the constant $a$ defined in (8.12)-(8.18). As $\nu_s$ decreases, so does $a$, and the convergence becomes faster. Thus, if we change $\mu$ so as to decrease $\nu_s$ in the course of the computation, we can only accelerate the convergence of the method to $\lambda_s$.

By (8.24) and (8.25), we can use

$\mathfrak{W}_{r-1} = ((N z^{(r-1)})^* z^{(r-1)})^{-1} = \left(\dfrac{u^{(r)*} u^{(r-1)}}{u^{(r-1)*} u^{(r-1)}}\right)^{-1}$

to approximate $\nu_s$. Suppose, therefore, that we replace the Rayleigh quotient method outlined in §8 above by a two-step sequential method, as follows.²⁹

0. Begin with an arbitrary vector $v = u^{(0)}$ and an arbitrary real number $\mu = \mu^{(0)}$. Define $z^{(0)} = (v^* v)^{-1/2} v$, so that $\|z^{(0)}\|_2 = 1$. Take $r = 1$.

1. i. Define the matrix

$M^{(r-1)} = H - \mu^{(r-1)} I$.   (9.1)

ii. Solve the equation

$M^{(r-1)} y^{(r)} = z^{(r-1)}$   (9.2)

and put

$z^{(r)} = (y^{(r)*} y^{(r)})^{-1/2}\, y^{(r)}$.   (9.3)

2. i. Compute

$\mathfrak{W}_{r-1} = (y^{(r)*} z^{(r-1)})^{-1}$.   (9.4)

ii. Put

$\mu^{(r)} = \mu^{(r-1)} + \mathfrak{W}_{r-1}$.   (9.5)

3. Increment $r$. Repeat [1] and [2] until

$\|z^{(r)} - z^{(r-1)}\|_2 < \varepsilon$.   (9.6)

²⁹ Compare the closely similar procedure in (8.28)-(8.32). The initialization is [0]; the two major steps are [1] and [2]; the conditional looping command is [3]. The crucial change is the iterative improvement of $\mu^{(r)}$ through (9.5), in [2][ii].
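In Python, the change from the sketch after (8.32) is essentially one line, (9.5). Again this is a sketch, with an exact solve standing in for PMC in step [1][ii]; the shift update makes the scheme behave like a Rayleigh-quotient-style iteration, with the order-of-convergence analysis given below.

```python
import numpy as np

def accelerated_inverse_power(H, mu0, v, eps=1e-12, max_iter=50):
    """The two-step sequential method (9.1)-(9.6): as in (8.28)-(8.32),
    but the shift is improved each pass, mu^(r) = mu^(r-1) + W_{r-1},
    cf. (9.5), raising the order of convergence from linear to
    better than cubic (Section 9)."""
    I = np.eye(len(v))
    z = v / np.linalg.norm(v)
    mu = mu0
    for _ in range(max_iter):
        y = np.linalg.solve(H - mu * I, z)       # (9.1)-(9.2)
        z_new = y / np.linalg.norm(y)            # (9.3)
        W = 1.0 / np.vdot(y, z)                  # (9.4)
        mu = mu + W.real                         # (9.5): the crucial change
        if np.vdot(z_new, z).real < 0:           # keep the sign consistent
            z_new = -z_new
        if np.linalg.norm(z_new - z) < eps:      # (9.6)
            break
        z = z_new
    return mu, z_new

H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, x = accelerated_inverse_power(H, mu0=3.5, v=np.ones(3))
print(lam, np.max(np.abs(H @ x - lam * x)))      # residual ~ 0
```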

The change from the inverse Rayleigh quotient method is solely in the line [2][ii]; in the original method, all $\mu^{(r)} = \mu^{(0)}$. Note, too, that the application of the Monte Carlo method is to perform [1][ii] by PMC.³⁰

For the original method, we get, by (8.22), that

$z^{(r)} = [1 + O(a^{2r})]\, x_s + O(a^r)$,   (9.7)

whence

$z^{(r)} - x_s = O(a^r)$;   (9.8)

i.e.,

$\|z^{(r)} - x_s\|_2 = O(a^r)$.   (9.9)

Similarly, by (8.25),

$|\mathfrak{W}_{r-1} - \nu_s| = |(\mathfrak{W}_{r-1} + \mu) - \lambda_s| = O(a^{2r})$.   (9.10)

In the new algorithm, it is clear that $\nu_s$ must be replaced by a changing parameter,

$\nu_s^{(r)} = \lambda_s - \mu^{(r)} = \lambda_s - \mu^{(r-1)} - \mathfrak{W}_{r-1} = \nu_s^{(r-1)} - \mathfrak{W}_{r-1}$.   (9.11)

Going over the previous line of argument, we observe that, if we now write

$P_r = \prod_{t=0}^{r-1} a^{(t)} = \prod_{t=0}^{r-1} \dfrac{|\nu_s^{(t)}|}{\pi + |\nu_s^{(t)}|} \le \prod_{t=0}^{r-1} \dfrac{|\nu_s^{(t)}|}{\pi}$,   (9.12)

then³¹

$z^{(r)} = [1 + O(P_r^2)]\, x_s + O(P_r)$,   (9.13)

whence³²

$z^{(r)} - x_s = O(P_r)$;   (9.14)

i.e.,³³

$\|z^{(r)} - x_s\|_2 = O(P_r)$.   (9.15)

³⁰ It is also possible to replace the scalar products in (9.3) and (9.4) by MC estimates.
³¹ Compare (9.7).
³² Compare (9.8).
³³ Compare (9.9).

The counterpart of (8.25) is now, by (9.4),

$\mathfrak{W}_{r-1} = \nu_s^{(r-1)} [1 + O(P_r^2)]$,   (9.16)

and so

$|\mathfrak{W}_{r-1} - \nu_s^{(r-1)}| = |(\mathfrak{W}_{r-1} + \mu^{(r-1)}) - \lambda_s| = O(P_r^2)$.   (9.17)

Note that, because each $\mu^{(r)}$ is an improvement on its predecessor, and therefore each $\nu_s^{(r)}$ is less than its predecessor, $P_r$ will necessarily be less than the corresponding $a^r$ in (9.7)-(9.10).

By (9.11) and (9.16), we see that

$\nu_s^{(r)} = \nu_s^{(r-1)} - \nu_s^{(r-1)} [1 + O(P_r^2)] = \nu_s^{(r-1)}\, O(P_r^2)$.   (9.18)

We can attempt to solve the relations (9.12) and (9.18) by putting, for some $Q \ge 1$, $C > 1$, and $0 < q < 1$,

$a^{(r)} \le Q\, q^{C^r}$.   (9.19)

Then, by (9.12),

$P_r \le \prod_{t=0}^{r-1} Q\, q^{C^t}$   (9.20)

$= Q^r\, q^{1 + C + C^2 + \cdots + C^{r-1}} = Q^r\, q^{(C^r - 1)/(C - 1)}$,   (9.21)

so that we need, by (9.18), that

$a^{(r)} / a^{(r-1)} = q^{C^r - C^{r-1}} = O(Q^{2r}\, q^{2(C^r - 1)/(C - 1)})$.   (9.22)

Since, by our assumption, $C > 1$ and $q < 1$, and (9.22) should hold for all sufficiently large $r$, we need

$C^r - C^{r-1} \ge \dfrac{2 C^r}{C - 1} \ge \dfrac{2 (C^r - 1)}{C - 1}$.   (9.23)

This holds for all $r$ if

$(C - 1)^2 \ge 2C$,

i.e., if

$C^2 - 4C + 1 \ge 0$,

i.e., if³⁴

$C \ge 2 + \sqrt{3} \approx 3.732$.   (9.24)

The tightest bound for (9.20) and (9.22) is clearly $C = 2 + \sqrt{3}$; i.e.,

$|\nu_s^{(r)}| \le A\, q^{(2+\sqrt{3})^r} \approx A\, q^{3.732^r}$,   (9.25)

where $A = \pi Q$. Hence, by (9.21),³⁵

$P_r \le B\, Q^r\, q^{[(\sqrt{3}-1)/2](2+\sqrt{3})^r} \approx B\, Q^r\, (q^{0.366})^{3.732^r}$,   (9.26)

where $B = q^{-0.366}$. Of course, by (9.19),

$0 < q < q^\dagger = q^{(\sqrt{3}-1)/2} \approx q^{0.366} < 1$,   (9.27)

and, as $r \to \infty$, however large $Q$ may be, $Q^r (q^\dagger)^{(2+\sqrt{3})^r}$ tends to zero faster than $(q^\dagger + \epsilon)^{(2+\sqrt{3})^r}$, for any $\epsilon > 0$, however small. For relative simplicity, let us write, for some small $\epsilon$,

$p = q^\dagger + \epsilon$;   (9.28)

then, by (9.26),

$P_r = O(p^{(2+\sqrt{3})^r}) \approx O(p^{3.732^r})$.   (9.29)

It now follows from (9.15) that

$\|z^{(r)} - x_s\|_2 = O(p^{(2+\sqrt{3})^r}) \approx O(p^{3.732^r})$,   (9.30)

and from (9.17) that

$|\mathfrak{W}_{r-1} - \nu_s^{(r-1)}| = O(p^{2 \times (2+\sqrt{3})^r}) = O((p^2)^{(2+\sqrt{3})^r}) \approx O((p^2)^{3.732^r})$.   (9.31)

This shows that the accelerative, sequential method increases the convergence of the process from linear (sometimes also called geometric or exponential) to more-than-cubic. By comparison, Newton's method, which has stood the test of time for over 300 years, converges only quadratically.

³⁴ The full solution is $C \in (-\infty,\, 2 - \sqrt{3}] \cup [2 + \sqrt{3},\, \infty)$; the former range is inadmissible, by our assumption that $C > 1$.
³⁵ $(C - 1)^{-1} = (\sqrt{3} + 1)^{-1} = (\sqrt{3} - 1)/(3 - 1) = (\sqrt{3} - 1)/2 \approx 0.366$.

To illustrate this concept in a different way, suppose that we gain $d$ decimal places of accuracy by a single iteration of a linearly-converging method. Then this is because the error is multiplied by $10^{-d}$. Thus, subsequent errors after this will be multiplied by $10^{-d}$ also, so that we shall gain $d$ decimal places of accuracy at each iteration. However, if we were to gain $d$ decimal places of accuracy by a single iteration of a quadratically-converging method, like Newton's method, then we would have, for some $T > 0$ and $0 < t < 1$,

$10^{-d} = \dfrac{T\, t^{2^{r+1}}}{T\, t^{2^r}} = t^{2^r}$,   (9.32)

and subsequent errors would be $T t^{2^{r+2}}$, $T t^{2^{r+3}}$, and so on, successively multiplying the current error, first, by $t^{2^{r+1}} = (t^{2^r})^2 = 10^{-2d}$, then by $t^{2^{r+2}} = (t^{2^r})^{2^2} = 10^{-4d}$, and so on, doubling the number of accurate decimal digits gained at each step. This is what leads to the spectacular convergence for which Newton's method is rightly admired. Finally, if we use the present method, it is easily seen that the number of accurate decimal digits gained at each step is successively multiplied by about 3.732!
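The digit-counting comparison is easy to tabulate; the following few lines (an illustration, with an arbitrary starting gain of $d = 1$ digit) print the accurate-digit counts for the three regimes:

```python
import math

d = 1.0                               # digits gained by the first iteration
lin = newton = present = d
for r in range(1, 6):
    lin += d                          # linear method: +d digits per step
    newton *= 2.0                     # Newton: digit count doubles, cf. (9.32)
    present *= 2.0 + math.sqrt(3.0)   # present method: digit count x ~3.732
    print(r, lin, newton, round(present, 1))
```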

Although no serious setbacks have been encountered in preliminary computational experiments, two notes of caution must be made. First, as $\mu^{(r)} \to \lambda_s$, $M^{(r)}$ becomes increasingly near-singular and correspondingly ill-conditioned, and its reciprocal $N^{(r)}$ grows accordingly; the equation (9.2) therefore becomes increasingly difficult to solve accurately. This difficulty is intrinsic and puts a limit on the accuracy obtainable by this method.

Secondly, it is sometimes difficult to get the eigenvalue and eigenvector we choose, unless we begin rather close to the desired values. This is a familiar problem with iterative methods [compare the classic inverse power method, or Newton's method].

10. ACKNOWLEDGEMENT

I am indebted to the Los Alamos National Laboratory for organizing, and inviting me to participate in, a Workshop on Adaptive Monte Carlo Methods at their Center for Non-Linear Systems, in August 1996. At this workshop, I had the opportunity to hear of recent work and to present some of my own. More particularly, I am happy to have had the opportunity to discuss some of the material presented in §4-§9 of this present paper, especially with Dr T. E. Booth, Dr A. R. Forster, and Dr T. T. Warnock of LANL, and Dr E. W. Larsen of the University of Michigan at Ann Arbor. They will doubtless recognize my efforts to find satisfactory answers for their stimulating questions; any remaining shortcomings are all mine.


11. REFERENCES

W. F. AMES, 1992. Numerical Methods for Partial Differential Equations, Third Edition, Academic Press, New York (1992) 451 pp.

J.-P. AUBIN, 1972. Approximation of Elliptic Boundary Value Problems. Wiley-Interscience, New York, NY (1972) 360 pp.

O. AXELSSON, V. A. BARKER, 1984. Finite Element Solution of Boundary Value Problems: Theory and Computation, Academic Press, New York (1984) 432 pp.

K. BAGGERLY, 1996. Theory for exponential convergence, Part III: interfacing theory and practice (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

R. BEAUWENS, P. DE GROEN, 1992. Editors, Iterative Methods in Linear Algebra (Proceedings of the IMACS International Symposium, Brussels, Belgium, 1991), North-Holland/Elsevier, Amsterdam, The Netherlands (1992) 636 pp.

T. E. BOOTH, 1996. Adaptive Monte Carlo attempts on a continuous transport problem (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

N. P. BUSLENKO, D. I. GOLENKO, YU. A. SHREIDER, I. M. SOBOL', V. G. SRAGOVICH, 1962. The Method of Statistical Trials-The Monte Carlo Method, edited by YU. A. SHREIDER; Fizmatgiz, Moscow, USSR (1962) [in Russian]; Elsevier, Amsterdam, Netherlands (1964) 312 pp.; Pergamon Press, Oxford, England (1966) 390 pp.

L. L. CARTER, E. D. CASHWELL, 1975. Particle Transport Simulation with the Monte Carlo Method, Tech. Inf. Ctr., ERDA, Oak Ridge, TN (1975) 121 pp.

R. COURANT, K. O. FRIEDRICHS, H. LEWY, 1928. On the partial difference equations of mathematical physics, Math. Ann. 100 (1928) pp. 32-74 [in German].

R. R. COVEYOU, 1960. Serial correlation in the generation of pseudo-random numbers, ACM Journal 7 (1960) pp. 72-74.


D. COX, 1996. Theory for exponential convergence, Part II: continuous state spaces (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

J. H. CURTISS, 1949. Sampling methods applied to differential and difference equations, Seminar on Sci. Comp., IBM Corp., New York, NY (1949) pp. 87-109.

J. H. CURTISS, 1954. Monte Carlo methods for the iteration of linear operators, J. Math. Phys. 32 (1954) pp. 209-232.

R. E. CUTKOSKY, 1951. A Monte Carlo method for solving a class of integral equations, J. Res. Nat. Bur. Stand. 47 (1951) pp. 113-115.

G. DAHLQUIST, Å. BJÖRCK, 1974. Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ (1974) 573 pp.

H. P. EDMUNDSON, 1953. Monte Carlo matrix inversion and recurrent events, Math. Tab. Aids Comp. 7 (1953) pp. 18-21.

S. M. ERMAKOV, 1975. The Monte Carlo Method and Contiguous Questions, Nauka, Moscow, USSR; First Edition (1971) 328 pp.; Second Edition (1975) 472 pp. [in Russian].

G. E. FORSYTHE, R. A. LEIBLER, 1950. Matrix inversion by a Monte Carlo method, Math. Tab. Aids Comp. 4 (1950) pp. 127-129.

G. E. FORSYTHE, C. B. MOLER, 1967. Computer Solution of Linear Algebraic Systems. Prentice-Hall, Inc., Englewood Cliffs, NJ (1967) 148 pp.

G. E. FORSYTHE, W. R. WASOW, 1960. Computer Solution of Linear Algebraic Systems, John Wiley & Sons, Inc., New York (1960) 444 pp.

P. W. G. GLYNN, 1996. On Markov processes and adaptive algorithms (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

J. H. HALTON, 1962. Sequential Monte Carlo, Proc. Camb. Phil. Soc. 58 (1962) pp. 57-78.

J. H. HALTON, 1965a. A general formulation of the Monte Carlo method and a 'strong law' for certain sequential schemes, Brookhaven National Laboratory, AMD 378/BNL 9220 (1965) 14 pp.

J. H. HALTON, 1965b. Least-squares Monte Carlo methods for solving linear systems of equations, Brookhaven National Laboratory, AMD 388/BNL 9678 (1965) 74 pp.


J. H. HALTON, 1966. On the strong convergence of linear averages, Univ. Wisconsin, Madison, MRC 719 (1966) 8 pp.

J. H. HALTON, 1967. Sequential Monte Carlo (Revised), Univ. Wis., Madison, MRC 816 (1967) 38 pp.

J. H. HALTON, 1970, A retrospective and prospective survey of the Monte Carlo method, SIAM Review 12 (1970) pp. 1-63.

J. H. HALTON, 1990. Monte Carlo methods for solving linear systems of equations, 18 pp. (invited presentation at NSF-CBMS Res. Conf. Random Number Generation / Quasi-Monte-Carlo Methods, Fairbanks, Alaska, August 1990).

J. H. HALTON, 1991a. The Monte Carlo solution of linear systems, Univ. North Carol., Chapel Hill, Working Paper (1991) 132 pp.; reprinted in J. H. Halton, Readings on the Monte Carlo Method (1992) 404 pp.

J. H. HALTON, 1991b. An introduction to the Monte Carlo solution of linear systems, 18 pp. (invited presentation at IMACS International Symp. Iterative Methods in Linear Algebra, Brussels, Belgium, April 1991).

J. H. HALTON, 1991c. Some new results on the Monte Carlo solution of linear systems, including sequential methods, 20 pp. (invited presentation at IMACS International Symp. Iterative Methods in Linear Algebra, Brussels, Belgium, April 1991).

J. H. HALTON, 1994. Sequential Monte Carlo techniques for the solution of linear systems. Journal of Scientific Computing 9 (1994) pp. 213-257.

J. H. HALTON, 1996. Sequential Monte Carlo techniques for solving nonlinear systems, 35 pp. (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

J. M. HAMMERSLEY, D. C. HANDSCOMB, 1964. Monte Carlo Methods, Methuen, London, England; John Wiley & Sons, New York (1964) 185 pp.

A. S. HOUSEHOLDER, 1964. The Theory of Matrices in Numerical Analysis, Blaisdell Publishing Co., New York, NY (1964) 257 pp.

E. ISAACSON, H. B. KELLER, 1966. Analysis of Numerical Methods, John Wiley & Sons, New York (1966) 541 pp.

M. H. KALOS, P. A. WHITLOCK, 1986. Monte Carlo Methods, Volume I: Basics, John Wiley & Sons, New York (1986) 186 pp.


J. P. C. KLEIJNEN, 1974. Statistical Techniques in Simulation, Marcel Dekker, New York, Part I (1974) 300 pp.

J. P. C. KLEIJNEN, 1975. Statistical Techniques in Simulation, Marcel Dekker, New York, Part II (1975) 503 pp.

E. W. LARSEN, 1996. An adaptive Monte Carlo method for global neutron transport calculations (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

A. W. MARSHALL, 1956. The use of multi-stage sampling schemes in Monte Carlo, Univ. Fla., Gainesville, Symp. Monte Carlo Methods, 1954, edited by H. A. MEYER; John Wiley & Sons, New York (1956) pp. 123-140.

G. W. MCKINNEY, R. KONG, 1996. Error reduction using adaptive Monte Carlo: the reduced source method (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

M. E. MULLER, 1956a. Some continuous Monte Carlo methods for the Dirichlet problem, Ann. Math. Stats. 27 (1956) pp. 569-589.

M. E. MULLER, 1956b. On discrete operators connected with the Dirichlet problem, J. Math. Phys. 35 (1956) pp. 89-113.

J. M. ORTEGA, 1988. Introduction to Parallel and Vector Solution of Linear Systems, Plenum Press, New York, NY (1988) 305 pp.

J. M. ORTEGA, W. G. POOLE, 1981. An Introduction to Numerical Methods for Differential Equations, Pitman Publishing, Inc., Marshfield, MA (1981) 329 pp.

J. M. ORTEGA, W. C. RHEINBOLDT, 1970. Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, Inc., New York, NY (1970) 572 pp.

A. M. OSTROWSKI, 1966. Solution of Equations and Systems of Equations, Academic Press, Inc., New York, NY (1966) 338 pp.

E. S. PAGE, 1954. The Monte Carlo solution of some integral equations, Proc. Camb. Phil. Soc. 50 (1954) pp. 414-425.

R. PICARD, 1996. Theory for exponential convergence, Part I: discrete state problems and interpolated smoothing (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

S. PISSANETZKY, 1984. Sparse Matrix Technology, Academic Press, Inc., London, England (1984) 321 pp.


J. K. REID, 1971. On the method of conjugate gradients for the solution of large sparse systems of linear equations, Large Sparse Sets of Linear Equations (Proceedings of the Oxford Conference of the Institute of Mathematics and Its Applications, April 1970), edited by J. K. REID; Academic Press, London, England (1971) pp. 231-254.

W. C. RHEINBOLDT, 1974. Methods for Solving Systems of Nonlinear Equations, SIAM, Philadelphia, PA (1974) 104 pp.

R. D. RICHTMYER, K. W. MORTON, 1967. Difference Methods for Initial-Value Problems, Interscience Publishers, Second Edition, New York (1967) 405 pp.

R. Y. RUBINSTEIN, 1981. Simulation and the Monte Carlo Method, John Wiley & Sons, New York (1981) 293 pp.

Y. SAAD, 1996. Iterative Methods for Sparse Linear Systems, PWS Publishing Co., Boston, MA (1996) 447 pp.

I. M. SOBOL', 1973. Monte Carlo Computational Methods, Nauka, Moscow, USSR, (1973) 312 pp. [in Russian].

J. SPANIER, 1996. General sequential sampling methods for Monte-Carlo and quasi-Monte-Carlo applications (invited presentation at LANL/CNLS Workshop on Adaptive Monte Carlo Methods, Los Alamos National Laboratory, Los Alamos, NM, August 1996).

J. SPANIER, E. M. GELBARD, 1969. Monte Carlo Principles and Neutron Transport Problems, Addison-Wesley, Reading, MA (1969) 248 pp.

G. W. STEWART, 1973. Introduction to Matrix Computations, Academic Press, New York (1973) 441 pp.

J. F. TRAUB, 1964. Iterative Methods for the Solution of Equations, Prentice­Hall, Inc., Englewood Cliffs, NJ (1964) 310 pp.

R. S. VARGA, 1962. Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ (1962) 322 pp.

A. WALD, J. WOLFOWITZ, 1950. Bayes solutions of sequential decision problems, Ann. Math. Stats. 21 (1950) pp. 82-89.

W. WASOW, 1951a. Random walks and the eigenvalues of elliptic difference equations, J. Res. Nat. Bur. Stand. 46 (1951) pp. 65-73.

W. WASOW, 1951b. On the mean duration of random walks, J. Res. Nat. Bur. Stand. 46 (1951) pp. 462-471.


W. WASOW, 1951c. On the duration of random walks, Ann. Math. Stats. 22 (1951) pp. 199-216.

W. WASOW, 1952. A note on the inversion of matrices by random walks, Math. Tab. Aids Comp. 6 (1952) pp. 78-81.

J. H. WILKINSON, 1965. The Algebraic Eigenvalue Problem, Clarendon Press (Oxford University Press), Oxford, England (1965) 662 pp.

J. WOLFOWITZ, 1946. On sequential binomial estimation, Ann. Math. Stats. 17 (1946) pp. 489-493.

J. WOLFOWITZ, 1947. The efficiency of sequential estimates and Wald's equation for sequential processes, Ann. Math. Stats. 18 (1947) pp. 215-230.

S. J. YAKOWITZ, 1977. Computational Probability and Simulation, Addison-Wesley, Reading, MA (1977) 262 pp.

D. M. YOUNG, 1971. Iterative Solution of Large Linear Systems, Academic Press, Inc., New York, NY (1971) 570 pp.

D. ZWILLINGER, 1992. Handbook of Differential Equations, Academic Press, Inc., San Diego, CA, Second Edition (1992) 787 pp.
