
Parallel Computing 19 (1993) 1105-1115, North-Holland

PARCO 792

Parallel solution of Fredholm integral equations of the second kind by accelerated projection methods

Rajesh Aggarwal a,*, David R. Dellwo b and Morton B. Friedman c

a Division of Applied Mechanics, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium b Department of Mathematics, U.S. Merchant Marine Academy, Kings Point, NY 11024, USA c Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, NY 10027, USA

Received 28 May 1991 Revised 17 November 1992, 1 February 1993

Abstract

Conventional projection methods for the numerical solution of integral equations are serial in structure, and require substantial synchronization which seriously degrades their performance in a parallel environment. The present work focuses on the exploitation and implementation of a new class of computational techniques, known as accelerated projection methods, which are distinctly different from conventional methods, capable of high accuracy and which are particularly appropriate for concurrent computing. The methods are implemented in detail on a multiprocessor for a particular linear integral equation of interest. It is shown that accelerated projection methods possess a high degree of parallelism that leads to extremely efficient parallel algorithms.

Keywords. Integral equation; accelerated projection methods; parallel implementation techniques; multiprocessor system; speedup results; timing results

1. Introduction

Projection methods provide a general formalism by which to construct approximate solutions of integral equations by projecting them onto finite dimensional subspaces. Well-known approximate methods such as Ritz-Galerkin, collocation, moments and least squares are projection methods [3,5,12,13,15,16,18]. As the applications of integral equations grow more complex, abetted by the demand for increased accuracy, conventional projection methods rely on finer and finer discretization, creating attendant problems of solving ever larger systems of algebraic equations. Such problems include inefficiencies and possible instabilities of algorithms for solving very large non-sparse systems, accumulation of round-off errors, and limitations in the storage and speed of available computers.

Accelerated projection and related methods [6,7,10,11] offer robust and rigorously founded means by which to circumvent the limitations of conventional numerical techniques. These new methods utilize only small systems of algebraic equations to provide highly accurate approximations to the solution of linear and nonlinear integral equations. An equally important feature of these methods is that they lend themselves to algorithmic procedures

* Corresponding author. Tel: 32-10-472362, fax: 32-10-472180, email: [email protected]

0167-8191/93/$06.00 © 1993 - Elsevier Science Publishers B.V. All rights reserved


which can be mapped readily into parallel execution. This is not true for conventional projection methods in general [4,17]. Consequently, with the use of accelerated methods the promise of parallel computing can be realized - to gain speed almost proportional to the number of processors used.

The objective of this work is to establish experimentally that accelerated projection methods possess a high degree of inherent parallelism which can be exploited to provide extremely efficient parallel algorithms. Previous applications of accelerated projection methods by the authors have focused on numerical issues and not on algorithmic matters. The algorithms in this work are implemented on a shared-memory multiprocessor computer; however, they can be tailored to any particular type of machine architecture.

Section 2 provides a brief description of the accelerated projection methods in the case of linear operator equations of the Fredholm second kind. Section 3 deals with the formulation of a particular weakly singular linear integral equation as a test problem. Section 4 deals with parallel implementation and the related issues of parallel programming. The numerical results including timing details are provided in Section 5 and confirm the high degree of parallel efficiency achievable through accelerated projection methods. Parallel implementation in the case of nonlinear integral equations is discussed in [1,2].

2. Accelerated projection methods

Accelerated projection methods [7,10] provide an unconventional approach in which an approximate solution to an integral equation can be obtained to any desired accuracy with a fixed choice of the projection. Since the projection operator and consequently the subspace remain unchanged, the computational work associated with the solution of the relevant algebraic system is significantly reduced when compared to classical techniques which require projection onto a space of greater dimension for increased accuracy.

A concise development of these methods for linear operators corresponding to integral equations of the second kind is presented below. More details about the general theory and error analysis can be found in [10].

Consider the operator equation

y = Ky + g (2.1)

where y and g are elements of a Banach space Y and K is a compact linear operator on Y such that (I - K) is invertible.

Accelerated projection schemes exploit the decomposition

y = Py + Qy (2.2)

for a projection operator P, where Q = I - P, I is the identity operator and y is an arbitrary element in Y.

Thus the operator Eq. 2.1 can be written as

y = KPy + KQy + g. (2.3)

The operators P and Q applied successively to Eq. 2.3 yield the pair of equations

Py = PKPy + PKQy + Pg (2.4)

Qy = QKPy + QKQy + Qg. (2.5)

Equation (2.5) can be solved for Qy in terms of Py:

Qy = (I - QK)^{-1}(Qg + QKPy) (2.6)


provided the operator (I - QK) is invertible. Thus if the projection operator P is chosen so that

||QK|| = ||(I - P)K|| < 1,

then Qy will be given by Eq. 2.6. When Eq. 2.6 is substituted into 2.4 the result is an equation for the exact projection Py:

Py = PKPy + PK(I - QK)^{-1}(Qg + QKPy) + Pg

= PK(I + (I - QK)^{-1}QK)Py + PK(I - QK)^{-1}Qg + Pg

or

Py - PK(I - QK)^{-1}Py = Pg + PK(I - QK)^{-1}Qg. (2.7)

Equation (2.7) has Py as its only solution because the operator

(I - PK(I - QK)^{-1}) = (I - K)(I - QK)^{-1}

(the identity follows since K = PK + QK) is invertible when (I - QK) is invertible. If ||QK|| < 1, then (I - QK)^{-1} can be replaced by the series expansion

(I - QK)^{-1} = \sum_{j=0}^{\infty} (QK)^j. (2.8)

Truncating the infinite series after the term j = n - 1, i.e. retaining the first n terms of the expansion, results in what is called the nth order accelerated projection scheme obtained from Eq. 2.7:

u_n - PK \sum_{j=0}^{n-1} (QK)^j u_n = Pg + PK \sum_{j=0}^{n-1} (QK)^j Qg, (2.9)

where u_n is an nth order approximation to Py. The error associated with the nth order accelerated projection scheme is shown in [10] to be

||Py - u_n|| = O(||K - PK||^n) (2.10)

where O(·) denotes the usual order symbol. Therefore, the first order accelerated scheme, n = 1, has an error estimate O(||K - PK||), which is essentially the same as that obtained from the conventional methods. Typically, ||K - PK|| = O(1/N), where N is the dimension of the subspace associated with the projection P. Thus the first order error is O(1/N) while the higher order accelerated schemes are in error O(1/N^n); achieving the equivalent accuracy with conventional methods would therefore require a significant increase in computational effort.

Generally, the main objective is to obtain an accurate approximation at the node points. But this does not ensure an accurate interpolation between the nodes. Therefore a different operator equation is developed for the solution y in terms of the projection approximates u_n. Applying the operator Q to Eq. (2.1) and rearranging the terms gives

(I - QK)y = Py + Qg. (2.11)

Then the approximate solution is given by

y_n = \sum_{j=0}^{n-1} (QK)^j (u_n + Qg). (2.12)

From Eqs. 2.10 and 2.11, the error estimate is

||y - y_n|| = O(||K - PK||^n). (2.13)


So each approximation to Py provides a corresponding approximation to y with an error of the same order as that associated with the nodes.

The amount of computation required to determine the elements in the higher order accelerated schemes increases with the order. However, it is shown in Section 4 that the computations in accelerated schemes can be split into independent processes, which makes these methods very attractive for parallel machines.
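To make the construction concrete, the following sketch (not from the paper) implements (2.9) and (2.12) in Python with NumPy for operators already represented as matrices: K is an M × M matrix approximating the integral operator, P is an M × M projection matrix of rank N, and g is a vector of length M; the function name accelerated_projection is illustrative. For clarity the sketch solves (2.9) as a full M × M system, whereas the method proper reduces it to an N × N system in the coefficients of the projection basis.

    import numpy as np

    def accelerated_projection(K, P, g, n):
        # K: (M, M) matrix approximating the compact operator, P: (M, M) projection
        # matrix of rank N, g: (M,) right-hand side, n: order of the scheme.
        M = K.shape[0]
        I = np.eye(M)
        Q = I - P
        QK = Q @ K

        # Truncated Neumann series (2.8): S = sum_{j=0}^{n-1} (QK)^j approximates (I - QK)^{-1}.
        S = np.zeros((M, M))
        term = np.eye(M)
        for _ in range(n):
            S += term
            term = QK @ term

        PK = P @ K
        Pg, Qg = P @ g, Q @ g

        # Solve (2.9): u_n - PK S u_n = Pg + PK S Qg.  The right-hand side lies in
        # the range of P, so the solution u_n does as well.
        u_n = np.linalg.solve(I - PK @ S, Pg + PK @ (S @ Qg))

        # Interpolation between nodes, Eq. (2.12): y_n = sum_{j=0}^{n-1} (QK)^j (u_n + Qg).
        y_n = S @ (u_n + Qg)
        return u_n, y_n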

3. Application

The weakly singular linear integral equation of the second kind,

y(x) = g(x) + \lambda \int_{-1}^{1} |x - t|^{-\alpha} y(t) dt, 0 < \alpha < 1, (3.1)

with a weak singularity at x = t is considered as an example. This equation appears in the theory of intrinsic viscosity [14]; the special case g(x) = x^2 and \alpha = 1/2 has been the subject of many investigations [19,20]. The numerical techniques used in these earlier investigations were developed in the milieu of serial computers and are generally inefficient for parallel machines. This particular linear integral equation is a non-trivial problem and is used here to explore the power of accelerated projection methods on parallel machines.

Here the piecewise constant projection is used. This is defined as

Py = \sum_{i=1}^{N} y(x_i^*) e_i(x), N > 1, (3.2)

where x_i^* is the midpoint of the interval (x_{i-1}, x_i] and e_i(x) is the characteristic function

e_i(x) = 1 for x_{i-1} < x <= x_i, and e_i(x) = 0 otherwise. (3.3)
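As an illustration (not part of the paper), the projection (3.2)-(3.3) applied to a function y can be evaluated pointwise as in the following Python sketch; the function name and the treatment of interval boundaries are assumptions made here.

    import numpy as np

    def project_piecewise_constant(y, x, N):
        # Evaluate (Py)(x) of (3.2): y is callable on [-1, 1], x an array of points,
        # N the number of equal subintervals of [-1, 1].
        x = np.asarray(x, dtype=float)
        # 0-based index of the subinterval containing each point (right endpoints
        # are assigned to the neighbouring cell except at x = 1)
        i = np.minimum(((x + 1.0) * N / 2.0).astype(int), N - 1)
        x_star = -1.0 + (2.0 * i + 1.0) / N      # midpoints x_i^*
        return y(x_star)

For example, project_piecewise_constant(np.cos, np.linspace(-1.0, 1.0, 201), 50) returns the piecewise constant interpolant of the cosine evaluated at 201 points.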

The piecewise constant projection is simple and therefore widely used in practical computations. The disadvantage of this projection is the slow rate of convergence, typically O(N^{-1/2}), that is associated with the accompanying Galerkin method or the first order accelerated projection scheme. Therefore, this projection also serves to illustrate the use and power of higher order acceleration schemes to accelerate the nodal convergence of the first order scheme.

It should be noted that each choice of N, the number of nodal points, defines a different projection operator which might more appropriately be labelled P_N. Moreover, the size of the relevant algebraic system to be solved ultimately will correspond to N. However, it is convenient to drop the subscript N when manipulating P_N symbolically in the work that follows.

It is essential to recognize that according to the theory of accelerated projection methods any choice of N, no matter how small, can be utilized. Of course, the smaller N is, the higher the order n of the projection scheme required to achieve a given accuracy. Thus there is a trade-off to be considered in choosing N. A 'large' N may require, for example, a 2nd or 3rd order projection scheme to produce a desired accuracy, while a 'small' N may require a 4th or even higher order scheme to produce the same accuracy. Hence a 'small' N will result in an easily solvable algebraic system, but one whose coefficients are given by a higher order projection scheme that involves a substantial amount of computation. However, these set-up computations are routine, parallelizable and do not induce the difficulties associated with solving a large dense algebraic system. This trade-off is explored computationally and discussed in Section 5, and is an essential feature of accelerated projection methods.


For the particular integral equation under consideration, the relevant single, double and triple integrals involved in first, second and third order methods can be evaluated analytically; this feature was taken advantage of in [8] to study accelerated projection methods in a parallel environment. But as the order of the accelerated scheme employed increases, it becomes increasingly difficult to perform the required integrations analytically. To overcome this problem, the integral equation is discretized and then the computations for any higher order accelerated scheme can be performed. This more powerful approach is essential for complex problems requiring highly accurate solutions and is presented here. The initial discretization of the integral equation is characterized by a positive integer M which generally has to be large in order to achieve a satisfactory accuracy. In the conventional approach, this discretized version of the integral equation is transformed to an M × M system of algebraic equations. In the accelerated projection methods, the discretized version of the integral equation is projected onto an N-dimensional subspace with N being chosen much smaller than M. The acceleration methods then lead to N × N algebraic systems of equations thus reducing by orders of magnitude the computational effort required. For a given N, to achieve a desired accuracy of solution to the discretized version of the integral equation, a sufficiently high order of acceleration scheme needs to be utilized.

The discrete model for Eq. 3.1 obtained using piecewise constant product integration is

y(x) = g(x) + \lambda \sum_{j=1}^{M} \left\{ \int_{t_{j-1}}^{t_j} |x - t|^{-\alpha} dt \right\} y(t_j^*), M > 1, (3.4)

where t_j = -1 + 2j/M and t_j^* = -1 + (2j - 1)/M denote the endpoint and midpoint, respectively, of the interval (t_{j-1}, t_j], and e_j(t) denotes the characteristic function of that interval. With the notations

\alpha_j(x) = \int_{t_{j-1}}^{t_j} |x - t|^{-\alpha} dt (3.5)

and

K_M y(x) = \lambda \sum_{j=1}^{M} \alpha_j(x) y(t_j^*), (3.6)

Eq. 3.4 can be written as

y(x) = g(x) + K_M y(x). (3.7)

The projection operator P as applied to Eq. 3.6 yields

PK_M y(x) = \sum_{i=1}^{N} \left[ \lambda \sum_{j=1}^{M} \alpha_j(x_i^*) y(t_j^*) \right] e_i(x). (3.8)

Since

QK_M y(x) = (K_M - PK_M) y(x), (3.9)

from Eqs. 3.6 and 3.8 it follows that

QK_M y(x) = \lambda \sum_{j=1}^{M} \left[ \alpha_j(x) - \sum_{i=1}^{N} \alpha_j(x_i^*) e_i(x) \right] y(t_j^*),

therefore

QK_M y(x) = \lambda \sum_{j=1}^{M} \tilde{\alpha}_j(x) y(t_j^*), (3.10)


where

\tilde{\alpha}_j(x) = \alpha_j(x) - \sum_{i=1}^{N} \alpha_j(x_i^*) e_i(x). (3.11)

Similarly, the function g(x) is discretized. Since

Qg = g - Pg (3.12)

and

Pg(x) = \sum_{i=1}^{N} g(x_i^*) e_i(x), (3.13)

therefore

Qg(x) = g(x) - \sum_{i=1}^{N} g(x_i^*) e_i(x). (3.14)
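For the kernel |x - t|^{-\alpha} the weights (3.5) can be evaluated in closed form, so the discrete operators (3.6)-(3.11) are straightforward to assemble. The sketch below (Python/NumPy, not the paper's code) represents K_M, PK_M and QK_M as M × M matrices acting on the vector of nodal values y(t_j^*); the function names are assumptions made for illustration.

    import numpy as np

    def alpha_weights(x, M, a):
        # alpha_j(x) = integral over (t_{j-1}, t_j] of |x - t|^{-a} dt, j = 1..M,
        # evaluated analytically for 0 < a < 1.
        t = -1.0 + 2.0 * np.arange(M + 1) / M        # endpoints t_0, ..., t_M
        lo, hi = t[:-1], t[1:]
        p = 1.0 - a
        w = np.empty(M)
        for j in range(M):
            if x <= lo[j]:
                w[j] = ((hi[j] - x) ** p - (lo[j] - x) ** p) / p
            elif x >= hi[j]:
                w[j] = ((x - lo[j]) ** p - (x - hi[j]) ** p) / p
            else:                                     # weak singularity inside the cell
                w[j] = ((x - lo[j]) ** p + (hi[j] - x) ** p) / p
        return w

    def assemble_operators(M, N, a, lam):
        t_star = -1.0 + (2.0 * np.arange(1, M + 1) - 1.0) / M    # fine midpoints t_j^*
        x_star = -1.0 + (2.0 * np.arange(1, N + 1) - 1.0) / N    # coarse midpoints x_i^*

        KM = lam * np.array([alpha_weights(x, M, a) for x in t_star])        # Eq. (3.6)
        K_nodes = lam * np.array([alpha_weights(x, M, a) for x in x_star])   # rows at x_i^*

        # e_i(t_j^*): index of the coarse cell containing each fine midpoint
        cell = np.minimum(((t_star + 1.0) * N / 2.0).astype(int), N - 1)
        PKM = K_nodes[cell, :]                                               # Eq. (3.8)
        QKM = KM - PKM                        # lam * tilde-alpha_j, Eqs. (3.10)-(3.11)
        return KM, PKM, QKM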

4. Parallel implementation

Most of the computational work in the numerical implementation of accelerated projection methods is involved in computing the coefficients of a small system of algebraic equations. The solution of the system itself takes a small fraction of the total computation time; the setup costs take most of the computer time. Moreover, the fraction of the time spent in solving the system shrinks further as the order of the accelerated scheme increases. Therefore, it is appropriate here to parallelize the computations required to set up the algebraic system, whereas the solution of the system itself can be performed in serial. However, for massively parallel machines even a small serial part of the computations can become a bottleneck, and then the solution of the algebraic system should also be parallelized. The numerical results are obtained on a Balance 21000 shared-memory multiprocessor system using the SCHEDULE parallel programming package [9]. This is a Multiple Instruction Multiple Data (MIMD) machine where each processor is capable of working independently on its own data.

The first step in parallel programming is to identify the data dependencies, i.e. the situations where one process reads shared data that another process writes (in the global memory). Figure 1 shows a typical data dependency graph where the process at node 8 depends on data written by the processes at nodes 1-5. Therefore, the execution of the process at node 8 cannot start until the execution of the processes at nodes 1-5 has been completed and their shared variables have been written to the global memory. Based on these data dependency requirements the program is divided into several units of computation, or levels. Figure 2 is a full data dependency graph for the parallel implementation of the accelerated schemes. All the computations have been arranged into three separate levels. The computations for the preceding level(s) must be completed before moving to a higher level. Each of these levels consists of one or more loops:

do i = 1, N
   do j = 1, M
      (work)
   enddo
enddo


Fig. 1. A typical data dependence graph where the nodes represent processes and the arcs show the dependencies among them.

The work in these loops is then split into a specified number of processes. Each such process is assigned an identification tag and is associated with a list of dependency requirements. When all the dependency requirements for a process have been satisfied, it is allocated to and executed on a physical processor. The work is split simply as:

do i = index1, index2
   do j = 1, M
      (work)
   enddo
enddo

where the variables 'index1' and 'index2' take values depending upon the number of processes to be created. This loop splitting technique divides the work into a number of divisions, and each process contains one or more such divisions. If N is not an integral multiple of the number of processors, there will be at least one process with an extra division. In addition, all divisions may not have exactly the same number of floating point operations. This leads to what is known as load imbalance; a sketch of the index computation is given after Fig. 2. Wherever possible, the loops have been reordered so as to create more divisions and thereby achieve a more uniform distribution of the work among the processes. A better load balance can also be achieved by splitting both the inner and outer loops; however, that would require additional communication and synchronization.

Fig. 2. A full data dependency graph with three computational levels for parallel implementation of the accelerated projection schemes.
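The bounds 'index1' and 'index2' for each process follow from a standard block partition of the outer loop range 1..N. A minimal sketch of that arithmetic in Python (illustrative only; this is not the SCHEDULE interface):

    def block_range(N, num_procs, rank):
        # 1-based inclusive bounds (index1, index2) of the block of iterations
        # i = 1..N assigned to process `rank` (0-based).  The first N mod num_procs
        # processes receive one extra iteration, which is the source of the mild
        # load imbalance discussed above.
        base, extra = divmod(N, num_procs)
        size = base + (1 if rank < extra else 0)
        index1 = rank * base + min(rank, extra) + 1
        return index1, index1 + size - 1

    # Example: N = 50 split among 4 processes gives (1, 13), (14, 26), (27, 38), (39, 50).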

There are mainly two types of operations involved in setting up the algebraic system for accelerated schemes: (1) functional evaluations at integration and nodal points, and (2) matrix-vector and matrix-matrix products.

These computations are efficiently parallelized by a proper description of their dependency requirements and then splitting the work into independent processes as described above.
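One way to mimic this level-by-level organization on a present-day shared-memory machine is to run all processes of a level concurrently and synchronize at the level boundaries. The sketch below uses Python's standard concurrent.futures in place of the SCHEDULE package [9]; run_levels and the representation of a level as a list of zero-argument tasks writing into shared arrays are assumptions made for illustration.

    from concurrent.futures import ThreadPoolExecutor, wait

    def run_levels(levels, num_procs):
        # levels: list of lists of callables; tasks within a level are independent,
        # but a level may start only after all preceding levels have completed.
        with ThreadPoolExecutor(max_workers=num_procs) as pool:
            for tasks in levels:
                futures = [pool.submit(task) for task in tasks]
                wait(futures)            # synchronization point between levels
                for f in futures:
                    f.result()           # re-raise any exception from a task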

5. Results

In this section computational experiments are described which demonstrate that memory contention, communication delays, and load balancing do not hinder an efficient parallel implementation of accelerated projection methods. The experiments were performed on a Balance 21000, which is a shared memory multiprocessor; up to 10 processors were utilized.

System (3.4) with M = 1000 was used as a test problem. The choice of M is such that the solution of (3.4) is an acceptable approximation to the solution of the original integral equation (3.1). For M = 1000, the exact solution of (3.4) is obtained by solving an algebraic system of size 1000 × 1000. Direct solution of such a large system would be very costly. Consequently, projection methods are used to construct approximate solutions to (3.4). Parallel programming techniques as described in Section 4 have been used to implement the methods efficiently.

Computational experiments demonstrate that accelerated projection methods possess a high degree of inherent parallelism which, like the accuracy of these methods, increases with increasing order. Speedup measurements confirm parallel efficiency. Measurements of serial run-time confirm the rapid rate of convergence.

5.1. Speedup results

The efficiency of a parallel algorithm depends on data dependencies, communications and synchronization costs. It also depends on how the computational work load is distributed among the processors. A well-balanced program distributes the work load uniformly so that all the processors are kept busy with minimum idle time.

Speedup, the ratio of the execution time using one processor to the execution time using p processors, is a widely used experimental measure of performance for parallel algorithms. The linear graph shown in Fig. 3 depicts an ideal speedup curve. The curve for a highly inefficient program, one that is unbalanced or slowed by memory or communication bottlenecks, lies well below the linear graph.
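In symbols, S(p) = T(1)/T(p), and S(p) = p for the ideal curve; a trivial helper for computing measured speedups (illustrative):

    def speedup(times):
        # times[p-1] is the measured execution time on p processors;
        # returns S(p) = T(1)/T(p) for p = 1, ..., len(times).
        return [times[0] / tp for tp in times]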

Experimentally determined speedup curves for parallel first, second and third order acceleration schemes are shown in Fig. 3. The projection methods (2.9), with n = 1, 2, 3, employ the piecewise constant interpolator (3.2) with N = 50. Each method required the setup and solution of a 50 x 50 system. Work load in the solution phase was the same for each method. However, the number of computations required to set up the systems increased with increasing order. Since each system was small, the solution phase was performed sequentially on one processor while the setup was performed in parallel on several processors.

It is apparent from the graphs in Fig. 3 that speedup improves with increasing order. The parallel first order scheme exhibits a weak speedup which degrades rapidly with an increase in the number of processors. The parallel second and third order schemes exhibit dramatic improvement in their performance, with speedups that are nearly linear. Although no attempt has been made to assess complexity, it is believed that the rate of growth of total computation with increasing order greatly exceeds the rate of growth of communication requirements. Thus as the order of the accelerated method increases, the communication time reduces to a small fraction of the total execution time.

Fig. 3. Speedup curves for first, second and third order acceleration schemes used to construct the solution of (3.4) with M = 1000. Each scheme employed (3.2) with N = 50, and required the parallel setup and serial solution of a 50 × 50 system.

Inspection of Fig. 3 indicates that the higher order schemes show nearly the same speedup when utilizing up to 10 processors. However, performance of any of the methods will deteriorate if a sufficiently large number of processors is used. The higher order schemes show nearly the same speedup because the performance of the second order method does not begin to deteriorate until more than 10 processors are used. Finally, the small variation in speedup exhibited by the higher order schemes in Fig. 3 is due to a nonuniform distribution of the computational work load. The work load is most evenly balanced when M = 1000 and N = 50 are integer multiples of the number of processors.

5.2. Timing results

Practitioners wish to use available resources efficiently, but they also wish to complete their task quickly. Consequently, the computation time required to achieve a given accuracy is an important indicator of utility.

Several experiments were run to determine the accuracy and serial run-time for the first, second and third order schemes employing the projection (3.2) with various values of N. The results are tabulated in Table 1. The error at the grid point x = 0.999 is used to estimate the accuracy of each method. For M = 1000, this grid location is closest to the end point x = 1 where the solution of (3.1) is singular, and the local error is greatest. Error estimates based at other grid points away from the ends x = ±1 are less conservative. A tenth order acceleration scheme with N = 100 was used to estimate the exact solution of (3.4) with M = 1000 at x = 0.999. The calculations were performed on a DEC3100 serial machine.

Table 1
Timing and error estimates for the first, second and third order acceleration schemes used to construct the solution of (3.4) with M = 1000. The predicted value of the solution at x = 0.999 was used to estimate the error. The approximate solution obtained from a tenth order scheme with N = 100 was taken as the exact solution y(0.999) = 0.6674540. The calculations were performed on a DEC3100 serial machine.

Order   N     Time (sec.)   y_n(x = 0.999)   Error
1       100        29.6      0.6372015        3.02525E-02
1       300       287.7      0.6542087        1.32453E-02
1       500       820.2      0.6604951        6.9589E-03
1       700      1768.7      0.6642384        3.2156E-03
1       900      5370.7      0.6665849        8.691E-04
2       200      1365.5      0.6684402       -9.862E-04
2       250      1759.3      0.6682076       -7.536E-04
3        16       177.6      0.6668777        5.763E-04

The results shown in Table 1 illustrate that execution time can be significantly reduced by increasing the order of the projection scheme. For instance, to achieve an error of 8.69 × 10^-4 the first order method required the setup and solution of a 900 × 900 system and over 5370 seconds of CPU time. The second order scheme achieved a comparable accuracy of -7.54 × 10^-4 in 1759 seconds by solving a system of size 250 × 250. The third order method was most impressive: it achieved an accuracy of 5.76 × 10^-4 in only 178 seconds, about one-thirtieth of the time required by the first order scheme, and required the setup and solution of only a 16 × 16 system. It is therefore much more advantageous to use a higher order scheme leading to a small algebraic system than to solve a huge first order system.

It is concluded that the accelerated projection methods are both computationally advantageous and well-suited for parallel computing.

Acknowledgement

The authors acknowledge the Advanced Computing Research Facility, Mathematics and Computer Science Division, Argonne National Laboratory, on whose machines the computations for this research were performed.

References

[1] R. Aggarwal, Parallel solution of integral equations, Eng. Sc. D. Thesis, Columbia University, New York, 1991.
[2] R. Aggarwal, D.R. Dellwo and M.B. Friedman, Parallel solution of nonlinear integral equations using accelerated projection methods, to appear.
[3] K. Atkinson, The numerical solution of Fredholm integral equations of the second kind, SIAM J. Numerical Anal. 4 (1967) 337-348.
[4] E. Babolian and L.M. Delves, Parallel solution of Fredholm integral equations, Parallel Comput. 12 (1989) 95-106.
[5] C.T.H. Baker, The Numerical Treatment of Integral Equations (Clarendon Press, Oxford, 1977).
[6] D.R. Dellwo, Accelerated refinement with applications to integral equations, SIAM J. Numerical Anal. 25 (1988) 1327-1339.
[7] D.R. Dellwo and M.B. Friedman, Accelerated projection and iterated projection methods with applications to nonlinear integral equations, SIAM J. Numerical Anal. 28 (1991) 236-250.
[8] D.R. Dellwo, M.B. Friedman and R. Aggarwal, Computational aspects of accelerated projection methods for nonlinear integral equations, Proc. Second Internat. Conf. on Integral Methods in Science and Engineering, Arlington, TX (1990).
[9] J.J. Dongarra and D.C. Sorensen, SCHEDULE users guide, Argonne National Laboratory, Argonne, 1987.
[10] M.B. Friedman and D.R. Dellwo, Accelerated projection methods, J. Computat. Phys. 45 (1982) 108-126.
[11] M.B. Friedman and D.R. Dellwo, Accelerated quadrature methods for linear integral equations, J. Integral Equations 8 (1985) 113-136.
[12] M.A. Golberg, Numerical Solution of Integral Equations (Plenum Press, New York, 1990).
[13] Y. Ikebe, The Galerkin method for the numerical solution of Fredholm integral equations of the second kind, SIAM Rev. 13 (1972) 465-491.
[14] J.G. Kirkwood and J. Riseman, J. Chemical Physics 16 (1948) 565-573.
[15] M.A. Krasnosel'skii et al., Approximate Solution of Operator Equations (Wolters-Noordhoff, Groningen, 1972).
[16] R. Kress, Linear Integral Equations (Springer, New York, 1989).
[17] G. Miel, Parallel solution of Fredholm integral equations of the second kind by orthogonal polynomial expansions, Applied Numerical Math. 5 (1989) 345-361.
[18] J.L. Phillips, The use of collocation as a projection method for solving linear operator equations, SIAM J. Numerical Anal. 9 (1972) 14-28.
[19] R. Piessens and M. Branders, Numerical solution of integral equations of mathematical physics using Chebyshev polynomials, J. Computat. Phys. 21 (1976) 178-196.
[20] I.H. Sloan and B.J. Burn, Collocation with polynomials for integral equations of the second kind: A new approach to the theory, J. Integral Equations 1 (1979) 77-94.