



An Automatic Program for Linear Fredholm Integral Equations of the Second Kind

KENDALL ATKINSON

University of Iowa

Two automatic programs for solving linear Fredholm integral equations of the second kind are described and illustrated. It is assumed that the kernel function and solution are smooth and that they are given analytically, not as discrete data. The numerical method is based on the Nyström method, with an iterative technique to solve the resulting linear systems. The main discussion centers on Simpson's method as the numerical integration rule. A powerful variant based on Gaussian quadrature is also discussed, and tests for ill-conditioned problems have been incorporated into the program. Modifiability of the Simpson program is also discussed.

The Algorithm: Algorithm 503, An Automatic Program for Fredholm Integral Equations of the Second Kind. ACM Trans. Math. Software 2, 2 (June 1976), 196-199.

Key Words and Phrases: numerical analysis, linear integral equations, automatic algorithm, Nyström method
CR Categories: 5.18

1. INTRODUCTION

In this paper two automatic programs for solving Fredholm integral equations of the second kind,

$$x(s) - \int_a^b K(s,t)\,x(t)\,dt = y(s), \qquad a \le s \le b, \tag{1.1}$$

are described and illustrated. It is assumed that the kernel function K(s, t) and the forcing function y(s) are continuous; and for a practical rate of convergence they should be several times continuously differentiable. In addition, the equation is assumed to have a unique solution. See [13] for a complete theory of eq. (1.1).

There is a large literature on numerical methods for solving eq. (1.1). A survey of general methods is given in [3], and a large bibliography is given in [11]. For applications of Fredholm equations to physical problems, also see the many entries in [11]; many boundary value problems for partial differential equations can be reformulated in the form of eq. (1.1). But in spite of the large literature on the numerical solution of eq. (1.1), almost nothing has been written on the automatic solution of eq. (1.1); see [9, p. 248].





The only examples seem to be the work of Elliot and Warne [10] and the work of Delves [8].

In the following sections a method is described which is based on the Nyström method, with an iterative technique to solve the resulting linear systems. The theoretical foundation is summarized in Section 2. In Section 3 an algorithm, IESIMP, is described which is based on using Simpson's rule as the numerical integration rule in the Nyström method; numerical examples illustrating the method are given in Section 4. The program IEGAUS, which is based on Gaussian quadrature, is described in Section 5, and numerical examples are given in Section 6. The concluding remarks in Section 7 include a discussion on the modifiability of IESIMP. A Fortran listing of the introductory comments from both IESIMP and IEGAUS is contained in [4].

2. THEORETICAL FOUNDATIONS

Assume we have a numerical integration rule

$$\int_a^b f(t)\,dt \approx \sum_{j=1}^{n} w_{j,n}\, f(t_{j,n}),$$

which converges as n → ∞ for all f ∈ C[a, b]. Approximate eq. (1.1) by

$$x_n(s) - \sum_{j=1}^{n} w_{j,n}\, K(s, t_{j,n})\, x_n(t_{j,n}) = y(s), \qquad a \le s \le b. \tag{2.1}$$

In solvability this is equivalent to the linear system

$$x_n(t_{i,n}) - \sum_{j=1}^{n} w_{j,n}\, K(t_{i,n}, t_{j,n})\, x_n(t_{j,n}) = y(t_{i,n}), \qquad i = 1, \dots, n. \tag{2.2}$$

To each solution of eq. (2.2) there is a unique corresponding solution of eq. (2.1) with which it agrees at the node points. This is attained by using eq. (2.1) as an interpolation formula: just solve for x_n(s) in terms of y(s) and the values x_n(t_{i,n}), 1 ≤ i ≤ n. This interpolation formula is quite good, and in practice

$$\max_{1 \le i \le n} |x(t_{i,n}) - x_n(t_{i,n})| \approx \max_{a \le s \le b} |x(s) - x_n(s)|,$$

e.g. see eq. (3.1) for Simpson's rule. In the standard manner, write eqs. (1.1) and (2.1) in operator notation as

$$(I - \mathcal{K})x = y, \qquad (I - \mathcal{K}_n)x_n = y,$$

respectively. The error analysis for this numerical method is quite complete; e.g. see [1] and [3]. If one is not an eigenvalue of 𝒦, then (I − 𝒦_n)^{-1} exists and is uniformly bounded for all sufficiently large n, say n ≥ N. Moreover,

$$\begin{aligned} x - x_n &= (I - \mathcal{K}_n)^{-1}(\mathcal{K} - \mathcal{K}_n)x \\ &= (I - \mathcal{K})^{-1}(\mathcal{K} - \mathcal{K}_n)x - (I - \mathcal{K}_n)^{-1}(I - \mathcal{K})\bigl[(I - \mathcal{K})^{-1}(\mathcal{K} - \mathcal{K}_n)\bigr]^2 x, \qquad n \ge N. \end{aligned} \tag{2.3}$$

This can be used to construct theoretical bounds on the rate of convergence and asymptotic error estimates.
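To make the Nyström discretization (2.1)-(2.2) concrete, the following is a minimal sketch in Python/NumPy rather than in the Fortran of the actual programs; the function names, the Simpson-rule demo problem, and the manufactured right-hand side are illustrative assumptions, not part of Algorithm 503.

```python
import numpy as np

def nystrom_solve(K, y, nodes, weights):
    """Solve x(s) - int_a^b K(s,t) x(t) dt = y(s) by the Nystrom method.

    Builds the linear system (2.2) at the quadrature nodes and returns the
    nodal values x_n(t_i) plus a Nystrom interpolant based on eq. (2.1).
    K and y are assumed to be NumPy-vectorized callables.
    """
    t, w = nodes, weights
    n = len(t)
    # Matrix of system (2.2): A[i, j] = delta_ij - w_j * K(t_i, t_j).
    A = np.eye(n) - w[None, :] * K(t[:, None], t[None, :])
    x_nodes = np.linalg.solve(A, y(t))

    def x_interp(s):
        # Nystrom interpolation: x_n(s) = y(s) + sum_j w_j K(s, t_j) x_n(t_j).
        s = np.atleast_1d(s)
        return y(s) + (w * K(s[:, None], t[None, :])) @ x_nodes

    return x_nodes, x_interp

if __name__ == "__main__":
    # Illustrative demo: Simpson weights on [0, 1] with n odd, and y built
    # with the SAME rule, so the solver should reproduce x exactly up to
    # roundoff (a consistency check of the linear algebra, not of the
    # discretization error).
    a, b, n = 0.0, 1.0, 33
    t = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    w = np.full(n, 2 * h / 3); w[1::2] = 4 * h / 3; w[0] = w[-1] = h / 3
    K = lambda s, tt: np.cos(np.pi * s * tt)
    x_true = lambda s: np.exp(s)
    y = lambda s: x_true(s) - (w * K(np.atleast_1d(s)[:, None], t[None, :])) @ x_true(t)
    x_nodes, x_fun = nystrom_solve(K, y, t, w)
    print("max node discrepancy:", np.max(np.abs(x_nodes - x_true(t))))
```

The printed discrepancy should be near machine precision; a genuine convergence study against an exactly known integral is sketched later, in Section 3.1.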




The linear system (2.2) may be solved directly, or it may be solved by an iterative variant of the Nyström method [5]. We use an iterative method first presented by Brakhage [7]. It is assumed that eq. (2.2) can be solved directly for some "small" value of n; and we will use this to solve eq. (2.2) iteratively for the case of m node points, m > n. Given x_m^{(0)}, define for ν ≥ 0,

$$r_m^{(\nu)} = y - (I - \mathcal{K}_m)\,x_m^{(\nu)}, \qquad x_m^{(\nu+1)} = x_m^{(\nu)} + \bigl[I + (I - \mathcal{K}_n)^{-1}\mathcal{K}_m\bigr]\, r_m^{(\nu)}. \tag{2.4}$$

Since we want to solve for just x_m^{(ν+1)}(t_{i,m}), 1 ≤ i ≤ m, we specialize the method.

$$r_m^{(\nu)}(t_{i,m}) = y(t_{i,m}) - x_m^{(\nu)}(t_{i,m}) + \sum_{j=1}^{m} w_{j,m}\, K(t_{i,m}, t_{j,m})\, x_m^{(\nu)}(t_{j,m}), \qquad i = 1, 2, \dots, m.$$

Then

$$x_m^{(\nu+1)}(t_{i,m}) = x_m^{(\nu)}(t_{i,m}) + r_m^{(\nu)}(t_{i,m}) + \delta(t_{i,m}), \qquad i = 1, 2, \dots, m,$$

with δ(t_{i,n}) satisfying

$$\delta(t_{i,n}) - \sum_{j=1}^{n} w_{j,n}\, K(t_{i,n}, t_{j,n})\, \delta(t_{j,n}) = \sum_{j=1}^{m} w_{j,m}\, K(t_{i,n}, t_{j,m})\, r_m^{(\nu)}(t_{j,m}), \qquad i = 1, \dots, n.$$

Finally, using eq. (2.1) as an interpolating formula,

$$\delta(t_{i,m}) = \sum_{j=1}^{n} w_{j,n}\, K(t_{i,m}, t_{j,n})\, \delta(t_{j,n}) + \sum_{j=1}^{m} w_{j,m}\, K(t_{i,m}, t_{j,m})\, r_m^{(\nu)}(t_{j,m}), \qquad i = 1, 2, \dots, m.$$

Based on the work in [5] and [7], this method can be expected to have a regular geometric rate of convergence, i.e.

$$\frac{\| x_m - x_m^{(\nu+1)} \|}{\| x_m - x_m^{(\nu)} \|} \approx \text{constant} \qquad \text{as } \nu \to \infty,$$

provided n is sufficiently large. As n → ∞, the constant goes to zero. For an initial guess, take the last computed answer, say x_q(t_{i,q}), i = 1, …, q,

for some n ≤ q < m, and use the interpolating formula of eq. (2.1) to obtain x_m^{(0)}(t_{i,m}), i = 1, …, m:

$$x_m^{(0)}(t_{i,m}) = y(t_{i,m}) + \sum_{j=1}^{q} w_{j,q}\, K(t_{i,m}, t_{j,q})\, x_q(t_{j,q}), \qquad i = 1, \dots, m. \tag{2.5}$$

In spite of its greater complexity, this formula is far superior to other forms of interpolation for the development of an automatic program. With other interpolation methods, the errors x_m^{(0)}(t_{i,m}) − x_m(t_{i,m}), i = 1, …, m, were far worse than with eq. (2.5). As a consequence, more iterates had to be computed, ultimately requiring more time than if eq. (2.5) had been used originally.
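As an illustration of the iteration (2.4) with the interpolated starting guess (2.5), the following sketch works entirely with nodal values on a coarse n-point rule and a fine m-point rule. It is a schematic two-grid iteration in Python/NumPy with invented names, not the ITERT/INTERP code of the actual program; K and y are assumed to be NumPy-vectorized callables.

```python
import numpy as np

def two_grid_iteration(K, y, tn, wn, tm, wm, x_coarse, tol=1e-10, maxit=20):
    """Iterative Nystrom solution of (I - K_m)x_m = y, following eq. (2.4).

    tn, wn   : coarse rule (n points); the n x n system is solved directly.
    tm, wm   : fine rule (m points).
    x_coarse : nodal values of the previous (coarse) solution, used for
               the initial guess through eq. (2.5).
    """
    Kn  = wn[None, :] * K(tn[:, None], tn[None, :])     # n x n
    Knm = wm[None, :] * K(tn[:, None], tm[None, :])     # n x m
    Kmn = wn[None, :] * K(tm[:, None], tn[None, :])     # m x n
    Kmm = wm[None, :] * K(tm[:, None], tm[None, :])     # m x m
    A_n = np.eye(len(tn)) - Kn      # in the program its LU factors are saved

    ym = y(tm)
    x = ym + Kmn @ x_coarse                      # initial guess, eq. (2.5)
    for nu in range(maxit):
        r = ym - (x - Kmm @ x)                   # residual r_m^(nu)
        rhs = Knm @ r                            # (K_m r) at the coarse nodes
        delta_n = np.linalg.solve(A_n, rhs)      # coarse correction
        delta_m = Kmn @ delta_n + Kmm @ r        # Nystrom interpolation of delta
        x_new = x + r + delta_m
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, nu + 1
        x = x_new
    return x, maxit
```

For brevity the coarse matrix is refactored by np.linalg.solve on every sweep; the real programs factor it once and reuse the LU decomposition, which is where the method's speed comes from.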

3. A FIXED ORDER PROGRAM

Simpson's rule with n node points will be the numerical integration rule used in eq. (2.1), n ≥ 3 odd. Modifications for other numerical integration rules will be discussed in Section 7.




3.1 Theoretical Rate of Convergence

If K(s, t)x(t) is five times continuously differentiable and has an integrable sixth derivative, then combining eq. (2.3) with standard error results for Simpson's rule gives

$$x(s) - x_n(s) = \gamma(s)\,h^4 + O(h^6),$$
$$(I - \mathcal{K})\,\gamma(s) = -\frac{1}{180}\left[\frac{\partial^3\{K(s,t)\,x(t)\}}{\partial t^3}\right]_{t=a}^{t=b}, \tag{3.1}$$

with h = (b − a)/(n − 1). If K(s, t)x(t) has fewer than six derivatives, then other expressions for the error in Simpson's rule can be combined with eq. (2.3) to yield results like eq. (3.1), but with a lower rate of convergence when the number of derivatives is less than four.
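A quick empirical check of the O(h⁴) behavior in eq. (3.1) can be made by solving a smooth test equation on successively halved meshes and looking at the ratio of node errors. The sketch below is self-contained NumPy code with an invented test problem (kernel cos(πst) as in case (i) of Section 4, but with true solution eᵗ and an exactly evaluated right-hand side); it is not one of the paper's tabulated runs.

```python
import numpy as np

def simpson_rule(a, b, n):
    """Composite Simpson nodes/weights on [a, b]; n must be odd, n >= 3."""
    h = (b - a) / (n - 1)
    t = np.linspace(a, b, n)
    w = np.full(n, 2 * h / 3); w[1::2] = 4 * h / 3; w[0] = w[-1] = h / 3
    return t, w

K = lambda s, t: np.cos(np.pi * s * t)
x_true = lambda t: np.exp(t)
# Exact integral of K(s,t)*x(t) over [0,1], so the discrete error is the
# genuine discretization error.
Kx = lambda s: (np.e * (np.cos(np.pi * s) + np.pi * s * np.sin(np.pi * s)) - 1.0) \
               / (1.0 + (np.pi * s) ** 2)
y = lambda s: x_true(s) - Kx(s)

errs = []
for n in (9, 17, 33, 65):
    t, w = simpson_rule(0.0, 1.0, n)
    A = np.eye(n) - w[None, :] * K(t[:, None], t[None, :])
    x = np.linalg.solve(A, y(t))
    errs.append(np.max(np.abs(x - x_true(t))))

print([errs[i] / errs[i + 1] for i in range(len(errs) - 1)])
# each halving of h should reduce the error by roughly 2**4 = 16
```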

3.2 Outline of Program

The program is divided into two stages according to whether or not the iterative method is being used. In stage A, (I − 𝒦_n)x_n = y is solved directly; and an attempt is made to solve (I − 𝒦_m)x_m = y iteratively with m = 2n − 1. If the rate of convergence is sufficiently rapid, or if n cannot be increased any further, then stage B is entered. Otherwise, n is replaced by m and stage A is repeated. In stage B, n is fixed, m is increasing, and the equation (I − 𝒦_m)x_m = y is solved iteratively with eq. (2.4). There is a continual monitoring of the rate of convergence of the iterative method so as to avoid a variety of possible failures. Also, every attempt is made to keep running time to a minimum. This involves (a) carefully constructed asymptotic error estimates, and (b) storage of all information (where practicable) which may have to be reused later. A flowchart of the main program IESIMP is given in Figure 1. For the introductory comments, see [4].
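In addition to the flowchart, the stage A/stage B organization can be summarized schematically. The Python sketch below captures only the control flow described in this paragraph; every name is invented, the stopping and failure tests are simplified, and it should not be read as the logic of IESIMP itself.

```python
def automatic_solve(solve_direct, solve_iterative, error_estimate,
                    n0, n_upper, m_upper, eps, root_rt=0.25):
    """Schematic control flow of the two-stage strategy (not IESIMP itself).

    solve_direct(n)               -> nodal solution x_n by elimination
    solve_iterative(n, m, x_prev) -> (x_m, R1) via the iteration (2.4),
                                     started from x_prev through eq. (2.5)
    error_estimate(x_m, x_prev)   -> estimate of ||x - x_m||, as in Sec. 3.3(b)
    """
    n = n0
    x_prev = solve_direct(n)                      # stage A: direct solution at n
    while True:
        m = 2 * n - 1
        x_m, r1 = solve_iterative(n, m, x_prev)
        if r1 <= root_rt or m >= n_upper:         # rate acceptable: go to stage B
            break
        n, x_prev = m, solve_direct(m)            # otherwise n := m, repeat stage A
    while error_estimate(x_m, x_prev) > eps and m < m_upper:
        x_prev, m = x_m, 2 * m - 1                # stage B: n fixed, m "doubled"
        x_m, r1 = solve_iterative(n, m, x_prev)
    return x_m
```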

3.3 Error Prediction

There are two error situations to be concerned with: (a) knowing when an iterate x_m^{(ν)} is sufficiently accurate with respect to x_m(s), and (b) knowing when x_m(s) is sufficiently accurate with respect to x(s). In the following discussion, the norm is the maximum vector norm. Also, the somewhat incorrect notation x_{m/2} refers to the solution with double the mesh size of the solution x_m.

(a) Accuracy of x_m^{(ν)}. For the iterative solution of (I − 𝒦_m)x_m = y, the initial guess x_m^{(0)} is obtained from the previous solution x_{m/2} (or in some cases, x_n), as discussed in the paragraph containing eq. (2.5). Then x_m^{(1)} is calculated with eq. (2.4), and

$$\mathrm{DENR1} := \| x_m^{(1)} - x_m^{(0)} \|.$$

Then a second iterate x_m^{(2)} is computed, and

$$\mathrm{NUMR1} := \| x_m^{(2)} - x_m^{(1)} \|, \qquad R1 := \mathrm{NUMR1}/\mathrm{DENR1}.$$

The ratio R1 measures the geometric rate of convergence of the iteration method, the existence of which is justified in [5]. Using R1, special tests are performed in stages A and B.




Fig. 1. Flowchart of the program IESIMP. (The flowchart itself is not reproduced here.)

In stage A, in which m = 2n − 1, a test is made to check whether the speed of convergence is sufficient for entrance to stage B. If

$$R1 \le \sqrt{RT}, \qquad RT := \min\{\mathrm{RATIO},\ R2\}, \tag{3.2}$$

then the speed of convergence is sufficient for entrance to stage B. This requirement will usually insure that only two iterates need to be calculated in stage B, for each value of m. The number RATIO is the theoretical rate at which the error in x_m should decrease when m is doubled to 2m − 1; for IESIMP, RATIO = 1/16. R2 is the computed rate of decrease in the error for x_m; it is discussed below in Section 3.3(b).

If eq. (3.2) is not satisfied, then the results of the iteration are discarded, n := m, m := 2m − 1, and stage A is repeated. If in calculating R1 the routine was already in stage B, then R1 is checked to make sure it remains adequate. If R1 > CUTOFF, then an abortive return is made to stage A because the rate R1 is considered inadequate. At present, CUTOFF = 1/2, a reasonable but arbitrary choice.

To check for the accuracy in the most recently computed iterate x_m^{(2)}, we ask whether R1 satisfies

$$\| x_m - x_m^{(2)} \| \le \| x - x_m \|. \tag{3.3}$$



Fig. 1 (continued).

This is to insure that only the needed accuracy in x_m is actually computed. The test is calculated using the approximations

$$\| x_m - x_m^{(2)} \| \approx \frac{R1}{1 - R1}\, \| x_m^{(2)} - x_m^{(1)} \|,$$
$$\| x - x_m \| \approx \frac{RT}{1 - RT}\, \| x_m^{(2)} - x_{m/2} \|.$$

The first estimate uses the geometric rate of convergence of x_m^{(ν)} to x_m. The second estimate is based on a similar geometric convergence of x_m to x; see Section 3.3(b) below for more detail.




The test eq. (3.3) then becomes

$$\mathrm{NUMR1} \le \mathrm{TEST} := \frac{1 - R1}{R1}\,\frac{RT}{1 - RT}\, \| x_m^{(2)} - x_{m/2} \|. \tag{3.4}$$

If this is satisfied, then we take x_m := x_m^{(2)}. Otherwise, x_m^{(0)} := x_m^{(1)}, x_m^{(1)} := x_m^{(2)}, a new iterate x_m^{(2)} is computed, and the above testing is repeated. If eq. (3.2) is satisfied, then usually only two iterates need be computed, the minimum number.
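The bookkeeping of part (a) can be condensed into a few lines. The sketch below assumes the nodal iterates are NumPy arrays, that `step` performs one sweep of the iteration (2.4), and that `x_half` holds the previous solution x_{m/2} interpolated to the current nodes; all names are invented and the real program's stage handling is omitted.

```python
import numpy as np

def iterate_until_accurate(step, x0, x_half, RT, cutoff=0.5, maxit=30):
    """Iterate x -> step(x) until tests (3.3)/(3.4) accept the iterate,
    giving up if the observed rate R1 exceeds CUTOFF."""
    x1 = step(x0)
    denr1 = np.max(np.abs(x1 - x0))
    for _ in range(maxit):
        x2 = step(x1)
        numr1 = np.max(np.abs(x2 - x1))
        if numr1 == 0.0:                          # already converged exactly
            return x2, 0.0
        r1 = numr1 / denr1
        if r1 > cutoff:                           # rate inadequate: abort
            return None, r1
        test = ((1 - r1) / r1) * (RT / (1 - RT)) * np.max(np.abs(x2 - x_half))
        if numr1 <= test:                         # eq. (3.4) satisfied
            return x2, r1
        x1, denr1 = x2, numr1                     # shift iterates and repeat
    return x2, r1
```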

(b) Accuracy of x_m. The variable R2 is a computed rate at which the error in x_m decreases when m is "doubled." Initially R2 := 0.5, and it is never allowed to be greater than 0.5. For each computed value of x_m (or x_n in stage A), define

$$\mathrm{NUMR2} := \| x_m - x_{m/2} \|,$$

and let DENR2 be the previous value of NUMR2, if any. Then for IFLGR2 = 0,

$$R2 := \max\{\mathrm{RATIO},\ \mathrm{NUMR2}/\mathrm{DENR2}\},$$

subject to the earlier limitation R2 ≤ 0.5. Then the error in x_m is approximately

$$\mathrm{ERROR} := \frac{R2}{1 - R2}\,\mathrm{NUMR2}; \tag{3.5}$$

for relative error, divide by ‖x_m‖. For the special case of y(s) and K(s, t) periodic on [a, b] with respect to s and t, the rate of convergence will generally be much faster than indicated in eq. (3.1).

For such cases, set IFLGR2 = 1, and the routine will set

$$R2 := \mathrm{NUMR2}/\mathrm{DENR2},$$

which can be arbitrarily small. For the error in x_m, use eq. (3.5) when R2 > RATIO; and for R2 < RATIO, multiply the expression in eq. (3.5) by 2, an empirically determined "fudge factor." The use of IFLGR2 = 1 for such periodic cases will result in more accurate error estimates, as the later numerical examples will demonstrate.
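Part (b) also reduces to a few lines; the fragment below (invented names, not the program's code) shows how R2 and the estimate (3.5) would be updated each time m is "doubled". For a relative error estimate the returned value would be divided by ‖x_m‖.

```python
import numpy as np

RATIO = 1.0 / 16.0      # theoretical error-reduction factor for Simpson's rule

def update_error_estimate(x_m, x_half, denr2, r2, iflgr2=0):
    """Update R2 and the error estimate (3.5) after computing x_m.

    x_m, x_half : nodal values of x_m and of x_{m/2} at common points
    denr2       : previous NUMR2 (None on the first call)
    r2          : previous R2 (0.5 initially)
    """
    numr2 = np.max(np.abs(x_m - x_half))
    if denr2 is not None:
        r2 = numr2 / denr2 if iflgr2 == 1 else max(RATIO, numr2 / denr2)
        r2 = min(r2, 0.5)                       # never allowed above 0.5
    error = (r2 / (1.0 - r2)) * numr2           # eq. (3.5)
    if iflgr2 == 1 and r2 < RATIO:
        error *= 2.0                            # empirical "fudge factor"
    return error, r2, numr2
```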

3.4 Structure of IESIMP

Refer to [4] for the parameters used in IESIMP. Most are self-explanatory, based on the comments given there. The routine IESIMP needs temporary storage space for matrices and vectors. The amount of space needed depends on how large n and m can become. The user must specify upper limits, called NUPPER and MUPPER; and an appropriate amount of temporary storage space is delivered to IESIMP in the one-dimensional array W.

To make the program less complicated and more modular, use was made of a number of subprograms.

(a) TWICE is a one-line function in IESIMP for "doubling" N or M.
(b) WANDT sets the node points and weights.
(c) INTERP carries out the Nyström interpolation formula defined in eq. (2.5).
(d) ITERT computes one iterate when an initial guess is given to it.
(e) RNORM calculates norms of vectors and of their differences.
(f) LINSYS solves a system of linear equations. It uses scaled partial pivoting;

and it has a residual correction option, which is used in IEGAUS. The routines (c), (e), (f) are also used in the routine IEGAUS, described in

Section 5, without any changes; the routine ITERT is almost the same. They can also be used with a wide choice of numerical integration schemes. Some modifications of IESIMP are described in Section 7.




Table I. Characteristic Values for Case (i)

    b = 1                b = 2                b = 3
  -2.8654    1.4278    -1.4142    1.4142    -1.4142 (multiplicity = 3)    1.4142 (multiplicity = 4)
  -2040.1    43.313    -1.4612    1.4146    -1.4159    1.4822
                       -15.619    2.5847    -2.4924


4. NUMERICAL EXAMPLES OF SIMPSON PROGRAM

The integral equation

$$x(s) - \lambda \int_a^b K(s,t)\,x(t)\,dt = y(s), \qquad a \le s \le b, \tag{4.1}$$

was solved for a variety of kernel functions K(s, t), right-hand functions y(s), and parameters λ and desired error EPS. These same equations will be used as some of the examples for the Gaussian program IEGAUS.

Case (i):

$$K(s,t) = \cos(\pi s t), \qquad 0 \le s,\, t \le b.$$

The dominant characteristic values (reciprocals of eigenvalues) of the integral operator are given in Table I. Considered as a function of t, the frequency of oscillation in K(s, t) increases as s is increased; for b = 3, there are from 0 to 9 changes of sign as s varies from 0 to 3. To increase the oscillatory behavior, y(s) is so chosen that x(s) = e^s cos(7s), 0 ≤ s ≤ b. The numerical results are given in Table II; NZ is the initial number N of node points.
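The characteristic values quoted here can be approximated numerically by computing the eigenvalues of the Nyström matrix and taking reciprocals; the fragment below is an independent NumPy illustration of that idea (kernel, mesh size, and function names are assumptions), not the computation used to build Table I. For a nonsymmetric kernel the returned values may be complex.

```python
import numpy as np

def characteristic_values(K, a, b, n=201, how_many=4):
    """Approximate the dominant characteristic values (reciprocals of the
    eigenvalues of largest modulus) of the integral operator with kernel K."""
    t = np.linspace(a, b, n)                      # n odd -> Simpson weights below
    h = (b - a) / (n - 1)
    w = np.full(n, 2 * h / 3); w[1::2] = 4 * h / 3; w[0] = w[-1] = h / 3
    KM = w[None, :] * K(t[:, None], t[None, :])   # Nystrom matrix of the operator
    mu = np.linalg.eigvals(KM)
    mu = np.real_if_close(mu[np.argsort(-np.abs(mu))][:how_many])
    return 1.0 / mu

print(characteristic_values(lambda s, t: np.cos(np.pi * s * t), 0.0, 1.0))
```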

Case (ii):

$$K(s,t) = c/[\,c^2 + (s-t)^2\,], \qquad 0 \le s,\, t \le 1, \quad c > 0.$$

This kernel is increasingly peaked as c → 0; and ‖𝒦‖ → π as c → 0. For the peaking of the kernel, K_max/K_min = 1 + 1/c².

Table II. IESIMP Results for Case (i)

      λ          b     Desired    Predicted   Actual     NZ   Final M
      1.0        1.0   1.0E-6     9.90E-7     8.50E-7     5     65
      1.0        2.0   1.0E-6     2.92E-7     2.91E-7     9    257
      1.43       1.0   1.0E-3     2.11E-4     2.10E-4     5     65
      1.43       1.0   1.0E-5     8.18E-7     8.17E-7     9    257
  -2000.0        1.0   1.0E-5     8.69E-6     8.86E-6     9    257
     -1.42       2.0   1.0E-5     3.58E-6     3.57E-6     9    257




For c = .1, the dominant characteristic values are

$$\lambda = .39080,\ .52477,\ .70210,\ .94428.$$

For the numerical example, pick y(s) so that

$$x(s) = s^2 - .8s + .06, \qquad 0 \le s \le 1.$$

We use c = .1, and the above ratio of peakedness becomes 101. The numerical results are given in Table III.

Case (iii):

$$K(s,t) = (1 - \gamma^2)/[\,1 + \gamma^2 - 2\gamma\cos 2\pi(s+t)\,], \qquad 0 \le s,\, t \le 1, \quad 0 \le \gamma < 1.$$

This is the well-known kernel function arising from solving the Dirichlet problem for Laplace's equation on an ellipse, using an integral equation reformulation. For γ > 0, the characteristic values and vectors are

$$\lambda = \gamma^{-j}, \quad x(s) = \cos(2j\pi s), \quad j = 0, 1, 2, \dots$$
$$\lambda = -\gamma^{-j}, \quad x(s) = \sin(2j\pi s), \quad j = 1, 2, 3, \dots.$$

As γ → 1, the kernel is increasingly peaked, and K_max/K_min = (1 + γ)²/(1 − γ)². For the numerical examples, we use more than one unknown function. The results are given in Table IV.

Notice that the use of IFLGR2 = 1 resulted in more accurate error estimates, but not in a lower amount of computation. The reason for this is that the jump in the error between the final M and the preceding one is quite large, often by factors of 10⁴. Also note that in the case x(s) = e^{-s}, the use of IFLGR2 = 1 leads to an inaccurate estimate, by about a factor of 3. Although the amount of computation time was not reduced, if smaller error tolerances had been requested, there would have been reductions in some cases.

Case (iv):

$$K(s,t) = \begin{cases} -s(1-t), & 0 \le s \le t \le 1, \\ -t(1-s), & 0 \le t \le s \le 1. \end{cases}$$

This is Green's function for x''(s) = g(s), 0 < s < 1, x(0) = x(1) = 0. Since K(s, t) is not continuously differentiable, we cannot expect an O(h⁴) order of convergence.

Table III. IESIMP Results for Case (ii), c = 0.1

    λ      Desired    Predicted   Actual     NZ   Final M
   .30     1.0E-2     7.31E-5     2.90E-5     9     33
   .30     1.0E-5     1.86E-6     1.15E-6     9     65
   .30     1.0E-8     4.49E-9     4.39E-9     9    257
   .52     1.0E-5     5.63E-7     5.48E-7     9    129




Table IV. IESIMP Results for Case (iii)

    x(s)         λ       γ     Desired    Predicted    Actual      IFLGR2   NZ   Final M
  cos(2πs)      1.1     .5     1.0E-5     2.93E-6      3.04E-9       0       9     65
  cos(2πs)      1.1     .5     1.0E-5     4.48E-9      3.04E-9       1       9     65
  cos(2πs)      1.1     .8     1.0E-5     7.68E-7      8.53E-8       0       9    257
  cos(2πs)      1.1     .8     1.0E-5     1.08E-7      8.53E-8       1       9    257
  cos(2πs)      1.999   .5     1.0E-5     3.60E-7      1.60E-9       0       9    129
  cos(2πs)      1.999   .5     1.0E-5     3.38E-9      1.60E-9       1       9    129
  cos(6πs)     -1.0     .8     1.0E-5     3.61E-8      1.09E-10      0       9    257
  cos(6πs)     -1.0     .8     1.0E-5     8.13E-10     1.09E-10      1       9    257
  e^{-s}        1.9     .5     1.0E-5     5.41E-6      5.79E-6       0       9     65
  e^{-s}        1.9     .5     1.0E-5     2.07E-6      5.79E-6       1       9     65

Empirically, the rate is O(h²), and this can be proven. The characteristic values are λ = −(nπ)², n = 1, 2, …. The integral equation is equivalent to the self-adjoint problem

$$x''(s) - \lambda x(s) = y''(s), \qquad 0 \le s \le 1, \quad x(0) = x(1) = 0.$$

For the numerical example, use x(s) = r²s^r(1 − s), 0 ≤ s ≤ 1, r ≥ 1. The numerical results are given in Table V. In the table, IER is an error flag set in IESIMP. IER = 0 means a successful return; and IER = 1 means a failure, but the predicted error is given for the final values. For greater detail, see the listing of IESIMP contained in [4].

In all the above cases, many of the choices of λ were quite close to a characteristic value of the integral operator. The effect does not seem as significant as might have been expected. Peaking in the kernel has a more marked effect, but still the routine handles relative peaking factors of up to 50 to 100 without difficulty.

With all examples the routine has been very reliable. When it has predicted too small an error, it has never done so by a factor of more than 2 (except for incorrectly setting IFLGR2 = 1 as in case (iii) above with x(s) = e^{-s}). Nonetheless, the routine should probably not be asked for errors of greater than 10⁻³, a region in which the asymptotic error formulas are not as likely to be effective.

Table V. IESIMP Results for Case (iv)

     λ       r    Desired    Predicted   Actual     NZ   Final M   IER
  -10.0      5    1.0E-5     1.02E-3     9.95E-4     9    257       1
   30.0      3    1.0E-4     7.71E-5     4.49E-5     5    257       0
  -30.0      3    1.0E-4     1.35E-4     1.11E-4     5    257       1
  -30.0      5    1.0E-4     2.67E-4     2.26E-4     5    257       1




5. A PROGRAM BASED ON GAUSSIAN QUADRATURE

This program, named IEGAUS, is organized in much the same way as IESIMP, the principal difference being the use of Gauss-Legendre quadrature in place of Simpson's rule in the approximation (2.1). Since the programs are quite similar, this discussion will emphasize the differences. The program IEGAUS has been streamlined for speed and efficiency, and cannot be easily modified, except for the use of Gauss-Laguerre quadrature on [a, ∞) and Gauss-Hermite quadrature on (−∞, ∞).

If the integrand K(s, t)x(t) is analytic in t, a ≤ s ≤ b, then the quadrature error decreases more rapidly than O(1/n^p) for any p > 0. Consequently a theoretical rate RATIO is not used, and the value IFLGR2 = 1 is built into the program. R2 is calculated as before, but it is limited to the interval .0001 ≤ R2 ≤ .5. The values of n and m are increased by doubling, a convenient but otherwise arbitrary choice. But this means that NUMR2/DENR2 can be extremely small, and it has been found necessary, empirically, to impose the above lower limit on R2 in order to avoid misleadingly small error predictions. It has also meant that the error prediction mechanism must be more sophisticated and complicated than in IESIMP.

The routine always begins with N = 2, although this can be easily reset to some higher power of 2. The values of N and M are always powers of 2; and the Gauss-Legendre nodes and weights for N = 2, 4, 8, 16, 32, 64, 128, 256, taken from [14], are stored in WANDT. For N a power of 2 greater than 256, a composite rule is used on [a, b] with the 256 node case used on each subinterval of [a, b].
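The node/weight generation described here can be mimicked with NumPy's Gauss-Legendre routine; the sketch below maps the rule from [−1, 1] to [a, b] and applies it piecewise when more than a chosen maximum number of points is requested. It is written in the spirit of WANDT in IEGAUS (n is assumed to be a power of 2, so the composite split is even), but the function name and details are assumptions, and numpy.polynomial.legendre.leggauss stands in for the tables of [14].

```python
import numpy as np

def gauss_legendre(a, b, n, max_single=256):
    """Gauss-Legendre nodes and weights on [a, b].

    For n <= max_single a single rule is used; for larger n (a power of 2)
    a composite rule with max_single points per subinterval is used.
    """
    if n <= max_single:
        x, w = np.polynomial.legendre.leggauss(n)       # rule on [-1, 1]
        t = 0.5 * (b - a) * x + 0.5 * (b + a)
        return t, 0.5 * (b - a) * w
    pieces = int(np.ceil(n / max_single))
    edges = np.linspace(a, b, pieces + 1)
    nodes, weights = [], []
    for left, right in zip(edges[:-1], edges[1:]):
        t, w = gauss_legendre(left, right, max_single)
        nodes.append(t); weights.append(w)
    return np.concatenate(nodes), np.concatenate(weights)

t, w = gauss_legendre(0.0, 1.0, 16)
assert abs(w @ t**8 - 1.0 / 9.0) < 1e-14    # the 16-point rule is exact for t**8
```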

The error formula is given by eq. (3.5); but there is a lower limit, given by approximating the following:

$$\| x - x_m \| / \| x \| \ge \mathrm{RELMIN}. \tag{5.1}$$

The number RELMIN is constructed to recognize: (i) the limit of machine precision, (ii) the increasing effect of rounding error as M increases, and (iii) possible ill-conditioning in the integral equation which would limit the attainable accuracy. The ill-conditioning can result from any of several sources. The main ones appear to be: (i) oscillating kernel functions which lead to large loss of significance errors in the evaluation of the integral numerically; (ii) letting λ approach a characteristic value of eq. (4.1); and (iii) letting λ tend to ∞ in eq. (4.1) so as to obtain essentially an integral equation of the first kind.

To explain the construction of RELMIN, we must examine the solution of the linear systems in the program, say Az = b, with A of order N. In every case, we obtain a solution z^{(0)} to the system by Gaussian elimination; usually, an LU decomposition has been calculated previously and saved. Define the residual r = b − Az^{(0)}. Solve the residual correction equation Aw = r by elimination, and define z^{(1)} = z^{(0)} + w^{(0)}, with w^{(0)} the computed value of w. Generally, w^{(0)} will have no significant digits with respect to w; but it will have the correct magnitude, and this is all that is needed.

It is a standard result that

$$\frac{1}{\|A\|\,\|A^{-1}\|}\,\frac{\|r\|}{\|b\|} \;\le\; \frac{\|z - z^{(0)}\|}{\|z\|} \;\le\; \|A\|\,\|A^{-1}\|\,\frac{\|r\|}{\|b\|},$$




and ‖A‖ ‖A⁻¹‖ is a commonly used condition number for A. This motivates the definition

$$\mathrm{COND} = \max\left\{1,\ \frac{\|z^{(1)} - z^{(0)}\|}{\|z^{(1)}\|}\cdot\frac{\|b\|}{\|r\|}\right\}. \tag{5.2}$$

For stability in the calculations, this value is averaged geometrically with the last such value. Thus the value of COND is continually changing as N and M change. Empirically, it is quite stable for well-conditioned problems; and it generally grows with N for badly ill-conditioned problems. This ad hoc scheme has proved effective in detecting those ill-conditioned problems for which ‖𝒦‖ ≫ 1 in eq. (1.1).

Define
$$\mathrm{RELMIN} = \max\{\mathrm{REL1},\ \mathrm{REL2}\}, \tag{5.3}$$

with

$$\mathrm{REL1} = \mathrm{COND}\cdot\mathrm{UNITRD}\cdot M\sqrt{M}, \qquad \mathrm{REL2} = (M/N)^{3/2}\,\frac{\|z^{(1)} - z^{(0)}\|}{\|z^{(1)}\|}. \tag{5.4}$$

In this definition, M is the order of the linear system currently being examined, and N is the order of the LU decomposition currently being used in solving the order M system. Always, M ≥ N; and M = N means that iteration is not being used. The numbers z^{(0)} and z^{(1)} were described earlier. The number UNITRD gives the machine precision; it is the smallest number u for which 1 + u > 1 in the computer. In double precision arithmetic on an IBM 360 machine, UNITRD = 2.22 × 10⁻¹⁶. The number REL2 has been useful in detecting limitations on the accuracy as λ approaches a characteristic value in eq. (4.1).

The construction of RELMIN is a mixture of mathematical intuition plus empirical testing. It has worked reliably, as some of the following examples will show. Nonetheless, much more work needs to be done on this problem of detecting and quantifying ill-conditioning.
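To make the residual-correction bookkeeping concrete, here is a small sketch using SciPy's LU factorization; the quantities follow eqs. (5.2)-(5.4) as reconstructed above, but the code and names are illustrative, not the routine CONEW, and the maximum norm is assumed throughout.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

UNITRD = np.finfo(float).eps          # unit roundoff of the arithmetic used

def solve_with_cond(A, b, cond_prev=1.0):
    """Solve Az = b by LU elimination with one residual-correction step,
    returning z^(1), z^(0), and the condition estimate of eq. (5.2),
    geometrically averaged with the previous value for stability."""
    lu_piv = lu_factor(A)
    z0 = lu_solve(lu_piv, b)
    r = b - A @ z0                                # residual
    w = lu_solve(lu_piv, r)                       # correction (magnitude only)
    z1 = z0 + w
    norm = lambda v: np.max(np.abs(v))
    if norm(r) == 0.0:
        cond = 1.0
    else:
        cond = max(1.0, (norm(z1 - z0) / norm(z1)) * (norm(b) / norm(r)))
    cond = np.sqrt(cond * cond_prev)              # geometric averaging
    return z1, z0, cond

def relmin(cond, M, N, z1, z0):
    """Lower limit (5.3)-(5.4) on the attainable relative error."""
    rel1 = cond * UNITRD * M * np.sqrt(M)
    rel2 = (M / N) ** 1.5 * np.max(np.abs(z1 - z0)) / np.max(np.abs(z1))
    return max(rel1, rel2)
```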

Because of the greater complexity of the program, two new subprograms are used besides those used in IESIMP. In the earlier list, delete TWICE and add the following:

(g) CONEW is a function used in computing the condition number COND for computing RELMIN.

(h) LEAVE is a subroutine for setting parameters when leaving IEGAUS. Since the Legendre nodes are not a convenient set of node points, the user can specify his own node points at which he would like solution values. These are computed in LEAVE by Nyström interpolation.

6. NUMERICAL EXAMPLES

For IEGAUS the first four equations are the same as for the program IESIMP, and four additional equations are included so as to explore more completely the behavior of the program. In all cases, NUPPER = 32 and MUPPER = 256. In the program, if the desired error EP is too small compared to RELMIN, then EP is reset accordingly; an error indicator is set with IER. See the listing of IEGAUS in [4] for the meanings of each value of IER.




Table VI. IEGAUS Results for Case (i)

      λ          b     Desired    Predicted   Actual      N    Final M
      1.4        1.0   1.0E-3     5.3E-11     1.6E-14      8     16
      1.4        2.0   1.0E-3     6.3E-14     3.3E-14     16     32
      1.4        3.0   1.0E-3     2.4E-4      9.4E-14     32     32
      1.4        4.0   1.0E-3     3.0E-13     5.7E-14     32     64
  -2000.0        1.0   1.0E-5     4.8E-8      9.6E-13      8     16
     -1.42       2.0   1.0E-5     6.2E-14     2.7E-14     16     32
      1.48       3.0   1.0E-8     3.4E-13     5.5E-14     32     64

Case (i):

$$K(s,t) = \cos(\pi s t), \qquad x(t) = e^t \cos(7t), \qquad 0 \le s,\, t \le b.$$

The numerical results are given in Table VI. IFLAG := 1 for all cases, so all errors are relative errors.

Case (ii):

$$K(s,t) = c/[\,c^2 + (s-t)^2\,], \qquad x(t) = t^2 - 0.8t + .06, \qquad 0 \le s,\, t \le 1.$$

In all cases, IFLAG = 0; all errors are absolute. The results are given in Table VII. To examine ill-conditioning arising when |λ| → ∞ in eq. (4.1), we have the following calculations, given in Table VIII.

In all cases, IFLAG = 0, and thus the errors are absolute. The value of c is 1, so the kernel function is very smooth, with no significant peaking. The letter e denotes the possibly readjusted value of the desired error, based on a value of RELMIN which is larger than the requested relative error.

Case (iii):

$$K(s,t) = (1 - \gamma^2)/[\,1 + \gamma^2 - 2\gamma\cos 2\pi(s+t)\,], \qquad x(t) = \cos(2\pi t), \qquad 0 \le s,\, t \le 1.$$

The numerical results are given in Table IX.

Case (iv):

$$K(s,t) = \begin{cases} -s(1-t), & s \le t, \\ -t(1-s), & t \le s, \end{cases} \qquad x(t) = 25\,t^5(1-t), \qquad 0 \le s,\, t \le 1.$$

Table VII. IEGAUS Results for Case (ii), c = 0.1

     λ       Desired    Predicted   Actual      N    Final M
    .30      1.0E-3     1.7E-5      7.9E-7      16     32
    .30      1.0E-8     1.3E-9      2.2E-10     16     64
    .52      1.0E-6     7.0E-9      1.4E-10     32     64
  -10.0      1.0E-8     2.5E-13     2.8E-14     32    128




Table VIII. IEGAUS Results for Case (ii), c = 1.0

      λ           Desired   Predicted   Actual     N    M    COND      e         RELMIN
     -1.0         1.0E-8    1.3E-14     5.4E-14     4   16   1.2       1.0E-8    1.7E-14
     -1.0E+2      1.0E-8    1.3E-12     2.4E-14     8   16   1.5E+1    1.0E-8    2.6E-13
     -1.0E+4      1.0E-8    9.4E-11     2.4E-12     8   16   1.6E+3    1.0E-8    2.3E-11
     -1.0E+6      1.0E-8    4.2E-9      2.8E-10    16   16   6.2E+4    1.0E-8    8.8E-10
     -1.0E+8      1.0E-8    1.6E-7      2.6E-8     32   32   1.5E+7    1.6E-7    6.2E-7
     -1.0E+10     1.0E-8    4.9E-6      2.1E-6     32   32   4.7E+8    4.9E-6    1.9E-5

Table IX. IEGAUS Results for Case (iii)

    γ      λ        Desired    Predicted   Actual     N    Final M
   .5     1.999     1.0E-5     3.8E-6      1.5E-8     32     64
   .8     1.1       1.0E-5     9.8E-8      1.1E-7     32    256
   .8     1.1       1.0E-3     3.3E-5      1.4E-5     32    128
   .8    -1.0       1.0E-3     1.8E-6      3.5E-8     32    128
   .8    -1.0       1.0E-5     1.8E-6      3.5E-8     32    128

Table X. IEGAUS Results for Case (iv)

     λ       Desired    Predicted   Actual     N    Final M
  -10.0      1.0E-4     1.4E-3      1.4E-3     32    256
  -30.0      1.0E-4     2.2E-4      2.1E-4     16    256

The numerical results are given in Table X. For this example the Simpson program was superior in its reaction to the "singularity" along the line t = s. But both examples show clearly the need for some form of product integration, probably of the type given in [6].

Case (v):

$$K(s,t) = e^{\beta s t}, \qquad x(t) = e^{\alpha t}, \qquad 0 \le s,\, t \le 1.$$

The dominant characteristic values for two cases are given in Table XI. For increasing β, the integral operator (𝒦x)(s) = ∫₀¹ e^{βst} x(t) dt, x ∈ C[0, 1], has increasing norm ‖𝒦‖ = (e^β − 1)/β; and for a fixed λ, say λ = 1, the integral equation x(s) − λ∫₀¹ e^{βst} x(t) dt = y(s) will behave increasingly like an equation of the first kind. To accentuate the difficulties for large β, choose α much less than β. This will cause large loss of significance errors to occur in the calculations. This example is originally due to Delves [8].






Table XI. Characteristic Values for Case (v)

    β = 5       β = 20
     .06357     8.121E-8
     .8284      5.866E-6
    4.798       1.884E-4
   40.69        3.618E-3
  449.1         4.657E-2

The numerical results are given in Table XII; IFLAG = 1 in all cases, and thus all errors are relative.

Case (vi):

$$K(s,t) = 1/(2s + t + 0.1), \qquad 0 \le s,\, t \le 1.$$

This is a nonsymmetric kernel, and the dominant characteristic values are λ = 1.0340, 6.4359, 49.71, 379.7. The kernel is peaked, with K_max/K_min = 31. The true solution function is x(s) = √(|s − c|). The numerical results are given in Table XIII.

Case (vii):

$$K(s,t) = (t - s)^r, \qquad 0 \le s,\, t \le 1.$$

The solution is x(s) = s ln(s). The characteristic values are purely imaginary if r is odd, and they are bounded from below by r + 1. The numerical results are given in Table XIV.

Cases (vi) and (vii) show the good performance of the program when the unknown function has a singularity in the first derivative, located at an endpoint of the interval of integration. But case (vi) with x(s) = √(|s − .2|) shows the bad performance which occurs when the singularity is not at an endpoint. This is because Gaussian quadrature performs badly in such situations. This is further emphasized with case (iv) in which the singularity occurs in the kernel rather than the unknown function.

Case (viii):

$$K(s,t) = [\,10/(e^{10} - 1)\,]\,e^{5(s+t)}, \qquad 0 \le s,\, t \le 1.$$

The right-hand function is fixed independently of λ: y(s) = s, 0 ≤ s ≤ 1; and the resulting solution is

$$x(s) = s + \frac{\lambda}{1 - \lambda}\cdot\frac{4e^{5} + 1}{25}\cdot\frac{10}{e^{10} - 1}\,e^{5s}, \qquad 0 \le s \le 1.$$

The only finite characteristic value is λ = 1, and the corresponding eigenfunction is e^{5s}.

The numerical results are given in Table XV; IFLAG = 1, and thus all errors are relative. The values of RELMIN come mainly from REL2 in eq. (5.4) rather than from REL1; the values of COND remain approximately 1.0 in all cases, including the worst behaved cases. This was typical behavior for similar examples which have been run for λ very near a characteristic value for a fixed right-hand function y(s).




Table XII. IEGAUS Results for Case (v)

     β       α       λ         Desired   Predicted   Actual      N    M    e         RELMIN
     5.0     1.0     1.0       1.0E-8    3.0E-13     3.1E-15      8   16   1.0E-8    3.0E-13
     5.0     5.0     1.0       1.0E-8    1.6E-11     4.2E-15      8   16   1.0E-8    8.0E-14
     5.0    20.0     1.0       1.0E-8    4.4E-13     1.3E-14     16   32   1.0E-8    4.4E-13
    20.0     1.0     1.0       1.0E-8    2.2E-7      1.3E-8      32   32   2.2E-7    2.2E-7
    20.0     1.0     5.0E-7    1.0E-8    2.7E-13     3.7E-15     32   32   1.0E-8    2.7E-13
    20.0    20.0     1.0       1.0E-8    8.3E-8      7.0E-9      32   32   8.3E-8    8.3E-8
    20.0    20.0     5.0E-7    1.0E-8    6.8E-14     5.3E-15     16   32   1.0E-8    5.0E-14

Table XIII. IEGAUS Results for Case (vi)

    c       λ       Desired    Predicted   Actual     N    Final M
   0.0      2.0     1.0E-5     2.6E-6      2.5E-6     16     64
   0.0     50.0     1.0E-4     1.8E-5      1.8E-5     16    256
   0.2      2.0     1.0E-5     2.6E-4      6.8E-4     16    256

Table XIV. IEGAUS Results for Case (vii)

    r       λ        Desired    Predicted   Actual      N    Final M
    1       1.0      1.0E-5     2.8E-6      2.7E-6       4     16
    1     100.0      1.0E-8     3.7E-9      3.7E-9      16    128
   10      10.0      1.0E-8     7.6E-10     7.6E-10      4    256

Table XV. IEGAUS Results for Case (viii)

     λ              Desired    Predicted   Actual      N    M    RELMIN
     1.1            1.0E-6     1.9E-11     5.4E-13      8   16   1.4E-14
     1.01           1.0E-6     1.6E-10     4.8E-13      8   16   1.4E-14
     1.001          1.0E-6     1.6E-9      1.4E-13      8   16   2.2E-13
     1.0001         1.0E-6     2.8E-8      3.0E-12      8   16   1.4E-12
     1.00001        1.0E-6     8.8E-11     1.6E-10     16   32   8.8E-11
     1.000001       1.0E-6     8.8E-10     1.5E-9      16   32   8.8E-10
     1.0000001      1.0E-6     9.5E-9      1.4E-8      16   32   9.5E-9
     1.00000001     1.0E-6     1.1E-7      1.5E-7      16   32   1.1E-7




The value of RELMIN should be somewhat larger in some cases, but it is off by no more than a factor of 2.

From a large number of examples of eq. (4.1), the ill-conditioning in which λ approaches very closely a nonzero characteristic value is quite different from the ill-conditioning arising when |λ| → ∞. In the latter case, COND → ∞ as |λ| → ∞; but in the former case, COND remains about one, while ‖z^{(1)} − z^{(0)}‖/‖z^{(1)}‖ becomes increasingly larger; refer to the earlier discussion of RELMIN for definitions.

7. CONCLUDING REMARKS

Because of some remarks in some of the papers in [12], an attempt has been made to make the Simpson program easily modifiable. Modifications for other numerical integration methods are quite straightforward. The user must change the parameters RATIO and ROOTRT, the subroutine WANDT, and possibly the parameter CUTOFF and the one-line function TWICE. Two possibilities are discussed.

(1) Higher order Newton-Cotes composite formulas. As an example, consider implementing Boole's rule with its O(h⁶) order of convergence. Let NZ − 1 be divisible by 4; and let RATIO = 1/64, ROOTRT = 1/8. In WANDT, change the weights to the composite form of

$$\int_{x_0}^{x_4} f(t)\,dt \approx \frac{2h}{45}\,[\,7f(x_0) + 32f(x_1) + 12f(x_2) + 32f(x_3) + 7f(x_4)\,].$$

This has been done, and the results were mixed. For badly behaved cases with desired error tolerance not too small, say EPS ≥ 1.0E-5, the Simpson method is likely to be more efficient. But Boole's rule is clearly superior for smoother kernel functions and/or smaller error tolerances.
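A WANDT-style weight generator for the composite Boole rule just described might look as follows; this is a NumPy sketch with assumed names, not the modification actually made to the program.

```python
import numpy as np

def boole_nodes_weights(a, b, n):
    """Composite Boole rule on [a, b]; n - 1 must be divisible by 4."""
    if (n - 1) % 4 != 0:
        raise ValueError("Boole's rule needs n - 1 divisible by 4")
    h = (b - a) / (n - 1)
    t = np.linspace(a, b, n)
    w = np.zeros(n)
    panel = np.array([7.0, 32.0, 12.0, 32.0, 7.0]) * 2.0 * h / 45.0
    for k in range(0, n - 1, 4):        # one panel of four subintervals at a time
        w[k:k + 5] += panel             # shared endpoints accumulate to 14*(2h/45)
    return t, w

t, w = boole_nodes_weights(0.0, 1.0, 9)
assert abs(w @ t**5 - 1.0 / 6.0) < 1e-14   # exact for polynomials of degree <= 5
```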

(2) Singularities in the unknown solution x(s). For cases in which x(s) is known to have a singularity in a low-order derivative at a known point α in [A, B], the choice of node points should be skewed so as to put relatively more node points near α than in other parts of the interval. For a properly skewed distribution (varying for each x(s)), the rate of convergence should be comparable to that of Simpson's rule for a smooth function.
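One simple way to skew nodes toward a known singular point α, in the spirit of this remark, is a graded map applied to an equally spaced grid; the grading exponent q and the whole construction below are illustrative assumptions, not the distribution the author has in mind.

```python
import numpy as np

def graded_nodes(a, b, alpha, n, q=3.0):
    """n node points in [a, b] clustered near an interior point alpha.

    An equally spaced grid on [0, 1] is graded by u**q on each side of
    alpha, so the local spacing shrinks like u**(q-1) as alpha is approached.
    """
    n_left = max(1, int(round(n * (alpha - a) / (b - a))))
    n_right = n - n_left
    left = alpha - (alpha - a) * np.linspace(0.0, 1.0, n_left + 1)[1:] ** q
    right = alpha + (b - alpha) * np.linspace(0.0, 1.0, n_right) ** q
    return np.sort(np.concatenate([left, right]))

# Example: cluster 17 nodes of [0, 1] around a singular point at 0.2.
print(np.round(graded_nodes(0.0, 1.0, 0.2, 17), 4))
```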

Other modifications can be made for systems of integral equations and multidimensional integral equations. But this will require more extensive changes in ITERT and INTERP. See [5] for an example of the iterative method with a two-dimensional equation.

Research is continuing on an "adaptive" program for handling equations in which x(s) has a singularity in a low-order derivative, and the location of the singularity is unknown. A mechanism is needed to discover the approximate location of a singularity, and for the skewing of the future node points which are introduced. These problems appear solvable, but much experimentation is required to test the adaptive mechanism. Once a good program has been written, we will investigate the extension of it to kernels with weak singularities, using the ideas of product integration given in [2].

All the numerical examples were computed on an IBM 360-65 in double precision arithmetic. The modification of the programs for use in other machines should be straightforward.




REFERENCES

1. ANSELONE, P.M. Collectively Compact Operator Approximation Theory. Prentice-Hall, Englewood Cliffs, N.J., 1971.

2. ATKINSON, K.E. The numerical solution of Fredholm integral equations of the second kind. SIAM J. Numer. Anal. 4 (1967), 337-348.

3. ATKINSON, K.E. A Survey of Numerical Methods for the Solution of Fredholm Integral Equations of the Second Kind. SIAM Publ., Philadelphia, Pa. (SIAM Monograph), 1976.

4. ATKINSON, K.E. Algorithm 503. An automatic program for Fredholm integral equations of the second kind. ACM Trans. Math. Software 2, 2 (June 1976), 196-199 (introductory comment listing only); complete listing in "Collected Algorithms from ACM," and also available from ACM Algorithms Distribution Service, Houston, Tex. 77036.

5. ATKINSON, K.E. Iterative variants of the Nyström method for the numerical solution of integral equations. Numer. Math. 22 (1973), 17-31.

6. ATKINSON, K.E. The numerical solution of Fredholm integral equations of the second kind with singular kernels. Numer. Math. 19 (1972), 248-259.

7. BRAKHAGE, H. Über die numerische Behandlung von Integralgleichungen nach der Quadraturformelmethode. Numer. Math. 2 (1960), 183-196.

8. DELVES, L.M. An automatic Ritz-Galerkin procedure for the numerical solution of linear Fredholm integral equations of the second kind. Submitted for publication.

9. DELVES, L.M., AND WALSH, J., EDS. Numerical Solution of Integral Equations. Clarendon Press, New York, 1974.

10. ELLIOT, D., AND WARNE, W. An algorithm for the numerical solution of linear integral equations. Int. Comput. Cent. Bull. 6 (1967), 207-224.

11. NOBLE, B. A bibliography on methods for solving integral equations, author listing and subject listing. Tech. Reps. 1176 and 1177, Math. Res. Cent., U. of Wisconsin, Madison, Wis., 1971.

12. RICE, J., ED. Mathematical Software. Academic Press (ACM Monograph Series), New York, 1971.

13. RIESZ, F., AND SZ.-NAGY, B. Functional Analysis, 2nd ed., translated by L.F. Boron. Ungar, New York, 1955.

14. STROUD, A., AND SECREST, D. Gaussian Quadrature Formulas. Prentice-Hall, Englewood Cliffs, N.J., 1966.

Received June 1974; revised July 1975
