
On Convergence and Performance of Iterative Methods with Fourth-Order Compact Schemes

Jun Zhang∗

Department of Mathematics, The George Washington University, Washington, D.C. 20052

Received December 31, 1996

We study the convergence and performance of iterative methods with the fourth-order compact discretization schemes for the one- and two-dimensional convection–diffusion equations. For the one-dimensional problem, we investigate the symmetrizability of the coefficient matrix and derive an analytical formula for the spectral radius of the point Jacobi iteration matrix. For the two-dimensional problem, we conduct Fourier analysis to determine the error reduction factors of several basic iterative methods and comment on their potential use as the smoothers for the multilevel methods. Finally, we perform numerical experiments to verify our Fourier analysis results. © 1998 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 14:263–280, 1998

Keywords: Convection–diffusion equation; iterative methods; fourth-order compact discretization schemes

I. INTRODUCTION

We first consider the one-dimensional (1D) convection–diffusion equation:

−uxx(x) + p(x)ux(x) = f(x), x ∈ (a, b), u(a) = g1, u(b) = g2. (1)

This equation often appears in the description of transport phenomena. The magnitude of p(x) determines the ratio of the convection to diffusion.

The 1D problems are not difficult from a computational point of view; we consider them because the algebra is more transparent than for higher dimensions and because, in most cases, analytical results are very difficult to obtain for higher dimensions.

There are various ways to discretize Eq. (1). In the context of finite differences, the most familiar schemes are the central differences and the so-called upwind differences [1].

∗Current address: Department of Computer Science, University of Minnesota, Minneapolis, MN 55455. e-mail: [email protected]


The first-order and second-order central difference operators are

du/dx ≈ (u_{j+1} − u_{j−1})/(2h),    d²u/dx² ≈ (u_{j+1} − 2u_j + u_{j−1})/h²,   (2)

where h is the uniform mesh size. These central difference operators have a truncation error of O(h²). In addition, the forward and backward upwind difference operators are

du/dx ≈ (u_{j+1} − u_j)/h,    du/dx ≈ (u_j − u_{j−1})/h,   (3)

which have a truncation error of O(h). If the central difference operators (2) are used to discretize Eq. (1), we have

−(u_{j+1} − 2u_j + u_{j−1})/h² + p_j (u_{j+1} − u_{j−1})/(2h) + ξ_j = f_j,   (4)

where ξ_j is the local truncation error at the grid point j, derived from a Taylor series analysis:

ξ_j = (h²/12) [ d⁴u/dx⁴ − 2p(x) d³u/dx³ ]_j + O(h⁴).   (5)

The traditional central difference scheme (CDS) for Eq. (1) is obtained by dropping ξ_j in (4). The truncation error of the scheme at the grid point j is obviously O(h²), corresponding to the leading term in (5).

CDS yields a 3-point formula, and basic iterative methods, e.g., the point Jacobi and Gauss–Seidel, for solving the resulting system of linear equations do not converge when the convective term dominates and the cell Reynolds number (defined below) is greater than a certain constant. Therefore, the first-order upwind scheme is usually used, which is to apply the central difference operator (2) to the second-order term and the forward (or backward, depending on the sign of p(x) at the grid point in question) upwind operator (3) to the first-order term of Eq. (1). The resulting scheme is also a 3-point formula and has a truncation error of O(h). The merit of the upwind scheme is that it can suppress oscillation in the approximation and basic iterative methods with it converge for all cell Reynolds numbers; the drawback is its low order of accuracy.
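To make the cell Reynolds number threshold concrete, the following minimal numpy sketch (an illustration, not taken from the paper; it assumes constant p, the unit interval with Dirichlet boundaries, and h = 1/100) builds the CDS matrix obtained by multiplying (4), with ξ_j dropped, by h² and computes the spectral radius of its point Jacobi iteration matrix for several values of δ = ph/2:

```python
# Sketch (not from the paper): spectral radius of the point Jacobi iteration
# matrix for the 1D central difference scheme, constant p, unit interval,
# Dirichlet boundaries.  The CDS matrix, scaled by h^2, is tri[-(1+delta), 2, -(1-delta)].
import numpy as np

def jacobi_radius(sub, diag, sup, m):
    """Spectral radius of the point Jacobi iteration matrix of tri[sub, diag, sup]."""
    A = (np.diag(np.full(m, diag))
         + np.diag(np.full(m - 1, sub), -1)
         + np.diag(np.full(m - 1, sup), 1))
    D = np.diag(np.diag(A))
    J = np.eye(m) - np.linalg.solve(D, A)      # Jacobi iteration matrix I - D^{-1} A
    return float(np.max(np.abs(np.linalg.eigvals(J))))

h = 1.0 / 100
m = 99                                          # interior grid points
for delta in [0.5, 0.9, 1.5, 5.0]:              # cell Reynolds number delta = p*h/2
    rho = jacobi_radius(-(1 + delta), 2.0, -(1 - delta), m)
    print(f"delta = {delta:3.1f}   rho(Jacobi, CDS) = {rho:.4f}")
# The radius exceeds 1 once delta grows somewhat beyond 1, so point Jacobi diverges.
```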

Recent studies by Brandt and Yavneh [2] and Zhang [3] indicate that the first-order upwind and the second-order central difference schemes may yield unreliable computational results for some convection-dominated flow problems. On the other hand, there has been growing interest in developing fourth-order finite difference schemes for the convection–diffusion equation (and the Navier–Stokes equations), which have good numerical stability and yield high-accuracy approximations; see [4–8] and the references therein. In particular, Gupta et al. [6] proposed a fourth-order compact finite difference scheme for approximating the two-dimensional (2D) convection–diffusion equation and showed numerically that the scheme is both highly accurate and computationally efficient. Basic iterative methods with this scheme have been shown (numerically) to converge for large values of the convection coefficients. In Zhang [9], we analyzed the convergence of some basic iterative methods with this scheme for small Reynolds numbers. A systematic development of the high-order compact schemes was carried out by Spotz [10].

In this article, we study the convergence and performance of some iterative methods with the fourth-order compact schemes (FOCS) for the one- and two-dimensional convection–diffusion equations. Specifically, we consider the symmetrizability of the coefficient matrix and the spectral radius of the point Jacobi iteration matrix for the 1D problem. For the 2D problem, we conduct Fourier analysis to determine the error reduction factors of several basic iterative methods with FOCS and comment on their potential use as the smoothers for the multilevel methods.


This article is organized as follows. In Section II we study the 1D problem. The 2D problem is studied in Section III. Numerical examples are given in Section IV to verify our Fourier analysis results. Concluding remarks are included in Section V.

II. ONE-DIMENSIONAL PROBLEM

The basic idea behind the high-order compact scheme approach is to find compact approximations to the derivatives in (5) by differentiating the governing Eq. (1). This gives the new truncation error expression (see [10] for details):

ξ_j = (h²/12) [ ((p_{j+1} − p_{j−1})/h − p_j²) (u_{j+1} − 2u_j + u_{j−1})/h²
        + ((p_{j+1} − 2p_j + p_{j−1})/h² − p_j(p_{j+1} − p_{j−1})/(2h)) (u_{j+1} − u_{j−1})/(2h)
        − (f_{j+1} − 2f_j + f_{j−1})/h² + p_j(f_{j+1} − f_{j−1})/(2h) ] + O(h⁴).   (6)

Equation (6) can be used to increase the accuracy of our approximation (4) to a truncation error of O(h⁴) and still retain its compactness.

For simplicity, we consider the case where the convection coefficient p(x) ≡ p is a constant and the domain is the unit interval (0, 1).

A. Symmetrizability

Although the symmetrization of the coefficient matrix is not necessary for implementing basic iterative methods, the symmetrizability of a matrix is a nice algebraic and numerical property and is closely associated with many acceleration techniques that may be used to accelerate the convergence of the basic iterative methods; see Hageman and Young [11] for details. A further useful fact, exploited below, is that a similarity transformation does not change the eigenvalues of a matrix.

The following lemma concerns symmetrizing a tridiagonal matrix B by a diagonal similarity transformation; its proof can be found in Elman and Golub [12].

Lemma 2.1. Let b_j, a_j, and c_j be real and B = tri[b_j, a_j, c_j]. There exists a real (nonsingular) diagonal matrix Q with Q⁻¹BQ a real symmetric matrix if and only if for each j (1 ≤ j < m), either b_{j+1}c_j > 0 or b_{j+1} = c_j = 0 holds. The symmetrized matrix is tri[(b_j c_{j−1})^{1/2}, a_j, (b_{j+1} c_j)^{1/2}].

FOCS for Eq. (1) is obtained by substituting (6) into (4) and truncating the O(h⁴) terms. In the case where p is a constant, this results in a linear system with the coefficient matrix

H = tri[ −(1 + δ²/3 + δ),   2(1 + δ²/3),   −(1 + δ²/3 − δ) ],   (7)

where δ = ph/2 is referred to as the cell Reynolds number. By Lemma 2.1, H can be symmetrized for all δ as


H̃ = tri[ √(1 − δ²/3 + δ⁴/9),   2(1 + δ²/3),   √(1 − δ²/3 + δ⁴/9) ].   (8)
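A small numerical check of this symmetrization (a sketch under the same constant-coefficient assumptions; the values h = 1/32 and p = 40 are arbitrary illustrative choices, and the diagonal Q constructed below is one concrete choice, not the only one) builds H from (7), forms Q with consecutive ratios √(b/c), and verifies that Q⁻¹HQ is symmetric with off-diagonal entries of magnitude √(1 − δ²/3 + δ⁴/9):

```python
# Sketch (assumptions: constant p, unit interval, h = 1/32): verify that the
# 1D FOCS matrix (7) is symmetrized by a diagonal similarity transformation,
# as asserted by Lemma 2.1.
import numpy as np

h = 1.0 / 32
p = 40.0
delta = p * h / 2.0                       # cell Reynolds number
m = int(round(1.0 / h)) - 1               # number of interior grid points

b = -(1 + delta**2 / 3 + delta)           # sub-diagonal of (7)
a = 2 * (1 + delta**2 / 3)                # main diagonal of (7)
c = -(1 + delta**2 / 3 - delta)           # super-diagonal of (7)

H = (np.diag(np.full(m, a))
     + np.diag(np.full(m - 1, b), -1)
     + np.diag(np.full(m - 1, c), 1))

# One diagonal similarity transformation: q_{j+1} = q_j * sqrt(b/c)  (here b*c > 0).
q = np.cumprod(np.concatenate(([1.0], np.full(m - 1, np.sqrt(b / c)))))
Q = np.diag(q)
S = np.linalg.solve(Q, H) @ Q             # Q^{-1} H Q

off = np.sqrt(1 - delta**2 / 3 + delta**4 / 9)
print("symmetric:", np.allclose(S, S.T))
print("|off-diagonal| matches (8):", np.allclose(np.abs(np.diag(S, 1)), off))
```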

B. Spectral Radius

We analyze the point Jacobi method for solving the linear system with the coefficient matrix H. The proof of the following lemma, which holds even for bc ≤ 0, can be found in [12].

Lemma 2.2. The eigenvalues of the tridiagonal matrix tri[b, a, c] of order m are

λ_j = a + sign(c) · 2√(bc) cos(jπ/(m + 1)),   j = 1, . . . , m.

Theorem 2.1. The spectral radius ρ(J) of the point Jacobi iteration matrix J for the FOCS matrices (7) and (8) is

ρ(J) = [ √(δ⁴ − 3δ² + 9) / (δ² + 3) ] cos(πh).

Proof. The point Jacobi iteration matrix for the symmetrized FOCS matrix (8) is

tri[ −√(1 − δ²/3 + δ⁴/9) / (2(1 + δ²/3)),   0,   −√(1 − δ²/3 + δ⁴/9) / (2(1 + δ²/3)) ],

and its eigenvalues are determined as in Lemma 2.2. The spectral radius for the matrix (7) is the same, by the fact that two similar matrices have the same eigenvalues.

It follows from Theorem 2.1 that the point Jacobi method with the 1D FOCS converges for all cell Reynolds numbers, because the spectral radius of the point Jacobi iteration matrix is strictly less than 1: indeed, δ⁴ − 3δ² + 9 ≤ (δ² + 3)² for all real δ, and cos(πh) < 1.

Figure 1 depicts the spectral radius of the point Jacobi iteration matrix with the 1D FOCS for h = 1/100. We note that although the method converges for all δ, the convergence deteriorates as δ → ∞.
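The formula of Theorem 2.1 is easy to check against a direct eigenvalue computation; the sketch below (assuming h = 1/100 as in Fig. 1 and a few arbitrary values of δ) forms the Jacobi iteration matrix of (7) explicitly and compares its numerically computed spectral radius with the closed form:

```python
# Sketch: compare the Theorem 2.1 formula for the spectral radius of the
# point Jacobi iteration matrix of the 1D FOCS matrix (7) with a direct
# eigenvalue computation (h = 1/100, as in Fig. 1).
import numpy as np

h = 1.0 / 100
m = int(round(1.0 / h)) - 1

def focs_matrix(delta, m):
    b = -(1 + delta**2 / 3 + delta)
    a = 2 * (1 + delta**2 / 3)
    c = -(1 + delta**2 / 3 - delta)
    return (np.diag(np.full(m, a))
            + np.diag(np.full(m - 1, b), -1)
            + np.diag(np.full(m - 1, c), 1))

for delta in [0.1, 1.0, 10.0, 100.0]:
    H = focs_matrix(delta, m)
    D = np.diag(np.diag(H))
    J = np.eye(m) - np.linalg.solve(D, H)          # point Jacobi iteration matrix
    rho_num = float(np.max(np.abs(np.linalg.eigvals(J))))
    rho_formula = (np.sqrt(delta**4 - 3 * delta**2 + 9)
                   / (delta**2 + 3)) * np.cos(np.pi * h)
    print(f"delta = {delta:6.1f}   numerical = {rho_num:.6f}   formula = {rho_formula:.6f}")
# Both values agree; they approach cos(pi*h) < 1 as delta grows, which is the
# slow-but-convergent behavior seen in Fig. 1.
```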

It can be verified that FOCS is a second-order approximation to the differential operator

−(1 + p²h²/12) u_xx + p u_x.

A heuristic explanation for the good numerical properties of FOCS is that it corresponds to a perturbation of Eq. (1) in which "artificial viscosity" proportional to the square of the cell Reynolds number is added, so that the main diagonal elements of the coefficient matrix are enhanced (see Segal [13]); indeed, applying the central difference operators (2) to this perturbed operator and multiplying through by h² reproduces the matrix (7) exactly, since p²h²/12 = δ²/3. However, unlike the upwind operators, this is done without sacrificing the accuracy of the approximating scheme.

III. TWO-DIMENSIONAL PROBLEM

We now consider the 2D convection–diffusion equation satisfying the Dirichlet boundary conditions

uxx(x, y) + uyy(x, y) + p(x, y)ux(x, y) + q(x, y)uy(x, y) = −f(x, y), (x, y) ∈ Ω,

u(x, y) = g(x, y), (x, y) ∈ ∂Ω, (9)


FIG. 1. Spectral radius of the point Jacobi iteration matrix with the 1D FOCS for h = 1/100.

where Ω is a 2D smooth convex domain, and p(x, y) and q(x, y) are the convection coefficients. When the magnitudes of the convection coefficients are small, Eq. (9) is said to be diffusion-dominated; otherwise it is convection-dominated.

Several fourth-order compact schemes for Eq. (9) (and the Navier–Stokes equations) have been designed by Dennis and Hudson [4], Gupta et al. [5, 6], Li et al. [7], and Spotz and Carey [8]. All these schemes may look slightly different, but all reported numerical results are similar. The convergence of these schemes with the basic iterative methods has been verified numerically, but rigorous justification of convergence and systematic study of the performance of these high-accuracy schemes with fast iterative methods are still in their very early stages [3, 9, 14].

Again, we consider the case where the convection coefficients p(x, y) and q(x, y) are constants. The specific scheme that we study in this article was developed by Gupta et al. [5, 6]. We omit the details of deriving the scheme, but give the linear equation at an internal grid point as

Σ_{k=0}^{8} αk uk = (h²/2) [ (8f0 + f1 + f2 + f3 + f4) + δ(f1 − f3) + γ(f2 − f4) ],   (10)

where the coefficients αk, k = 0, . . . , 8, are described by the 9-point compact stencil

[ α6  α2  α5 ]   [ −(1 − δ)(1 + γ)    −2(1 + γ)² − 2    −(1 + δ)(1 + γ) ]
[ α3  α0  α1 ] = [ −2(1 − δ)² − 2     20 + 4δ² + 4γ²    −2(1 + δ)² − 2  ]
[ α7  α4  α8 ]   [ −(1 − δ)(1 − γ)    −2(1 − γ)² − 2    −(1 + δ)(1 − γ) ].   (11)

Here δ = ph/2 and γ = qh/2 are the cell Reynolds numbers.
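For reference, a small helper (a hypothetical utility sketched here, not taken from the paper) assembles the nine coefficients of (11) and the right-hand side weights of (10) for given δ and γ; the index convention follows the stencil layout above (0 = center, 1–4 = east, north, west, south neighbors, 5–8 = NE, NW, SW, SE corners):

```python
# Sketch: stencil coefficients (11) and right-hand side weights from (10)
# for constant convection coefficients.
import numpy as np

def focs_stencil(delta, gamma):
    """Return (alpha, rhs_weights) for cell Reynolds numbers delta = p*h/2, gamma = q*h/2."""
    alpha = np.array([
        20 + 4 * delta**2 + 4 * gamma**2,        # alpha_0, center
        -2 * (1 + delta)**2 - 2,                 # alpha_1, east
        -2 * (1 + gamma)**2 - 2,                 # alpha_2, north
        -2 * (1 - delta)**2 - 2,                 # alpha_3, west
        -2 * (1 - gamma)**2 - 2,                 # alpha_4, south
        -(1 + delta) * (1 + gamma),              # alpha_5, north-east
        -(1 - delta) * (1 + gamma),              # alpha_6, north-west
        -(1 - delta) * (1 - gamma),              # alpha_7, south-west
        -(1 + delta) * (1 - gamma),              # alpha_8, south-east
    ])
    # Right-hand side of (10) is (h^2/2) * [8 f0 + f1 + f2 + f3 + f4
    #   + delta (f1 - f3) + gamma (f2 - f4)]; these are the weights on
    #   (f0, f1, f2, f3, f4), without the h^2/2 factor.
    rhs = np.array([8.0, 1 + delta, 1 + gamma, 1 - delta, 1 - gamma])
    return alpha, rhs

alpha, rhs = focs_stencil(0.0, 0.0)
print(alpha)          # center 20, edges -4, corners -1: the Mehrstellen operator
print(alpha.sum())    # 0.0, as required for consistency
```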


FIG. 2. Contours of the error reduction factor of the point Gauss–Seidel iteration matrix ρ_PGS(θ1, θ2) with δ = γ = 10⁻².

It was proved in [9] that FOCS with the point Jacobi and Gauss–Seidel methods converges for |δ| ≤ 1 and |γ| ≤ 1, and that the spectral radius of the line Jacobi iteration matrix with FOCS is bounded by (17 + 20√2)/73 when |δ| = |γ| = √2.

In the limit of pure diffusion (δ, γ → 0), (11) (and all other FOCS for the 2D convection–diffusion equation) reduces to the Mehrstellen operator. Multilevel methods with the Mehrstellen operator have been studied by Schaffer [15] and by Gupta, Kouatchou, and Zhang [16].

A. Fourier Analysis

In practice, we are interested in the performance of the iterative methods when the cell Reynolds numbers are large. To this end, we perform Fourier analysis to gain an idea of the behavior of several basic iterative methods with FOCS. Our methodology and notations are similar to those used by Kettler [17] and Wesseling [18]. A detailed treatment of Fourier analysis of iterative methods for elliptic problems can be found in Chan and Elman [19]. Fourier smoothing analysis related to the multilevel methods was studied by Stuben and Trottenberg [20].

We assume a square domain Ω = (0, 1) × (0, 1), periodic boundary conditions, and a uniform finite difference stencil. Let there be given a uniform grid G:

G = {(x, y) ∈ Ω : (x, y) = (j1h, j2h), j1, j2 = 0, 1, 2, . . . , n, h = 1/n}.

The grid points are numbered lexicographically (following the x-axis and then the y-axis).


FIG. 3. Contours of the error reduction factor of the point Gauss–Seidel iteration matrix ρ_PGS(θ1, θ2) with δ = γ = 10.

Let the algebraic system Au = f, to be solved on grid G, be denoted in stencil notation by (see [18])

Σ_j a_{l,j} u_{l,j} = f_l.   (12)

Let A be split as A = M − N, where M and N are nonsingular. Basic iterative methods are usually relaxation-type methods and can be represented as

M u^{k+1} = N u^k + f.

The error iteration matrix is obviously M⁻¹N. The error after k + 1 iterations is e^{k+1} = u − u^{k+1} (u is the exact solution) and can be calculated from

e^{k+1} = M⁻¹N e^k.

If the coefficients of the partial differential equation are constant and if the boundary conditions are periodic, the stencils [A], [M], and [N] do not depend on the first argument l, i.e., the stencil is the same for each grid point away from the boundary. Hence, we will drop the first argument l in our notations.

We expand the error in a Fourier series (of eigenfunctions of M⁻¹N) as follows:

e^k(x, y) = Σ_{θ=(θ1,θ2)} ε^k_θ e^{i(θ1x+θ2y)},   (13)


FIG. 4. Contours of the error reduction factor of the point Gauss–Seidel iteration matrix ρ_PGS(θ1, θ2) with δ = γ = 10⁴.

where −π ≤ θ1, θ2 ≤ π and i = √−1. ε^k_θ is the amplitude of the Fourier mode θ. The corresponding eigenvalues of the iteration matrix M⁻¹N are

λ(θ) = [ Σ_{θ=(θ1,θ2)} ε^{k+1}_θ e^{i(θ1x+θ2y)} ] / [ Σ_{θ=(θ1,θ2)} ε^k_θ e^{i(θ1x+θ2y)} ] = [ Σ_j N(j) e^{i(θ1x+θ2y)} ] / [ Σ_j M(j) e^{i(θ1x+θ2y)} ].   (14)

For the problem considered in this article, M and N are periodic Toeplitz matrices. The error (θ Fourier component) reduction factor ρ(θ) (which is only a function of θ and independent of k) is the absolute value of λ(θ) and can then be calculated from

ρ(θ) = | Σ_{j∈[N]} a_j e^{i(θ1x+θ2y)} | / | Σ_{j∈[M]} a_j e^{i(θ1x+θ2y)} |.   (15)

Typically, when the equations are stable with a modest dissipation, the periodic boundary analysis gives accurate convergence predictions for problems with Dirichlet boundary conditions. For problems where the equations are slightly unstable, e.g., CDS with cell Reynolds numbers larger than 1, this analysis may be inaccurate. On the other hand, Chan and Elman [19] suggested that the influence of the Dirichlet boundary conditions be treated heuristically as follows. Since the error at the boundary is always zero, we, therefore, exclude from our consideration the Fourier modes with θ1 = 0 and/or θ2 = 0. Hence, we denote the domain of the relevant Fourier modes


FIG. 5. Contours of the error reduction factor of the line Gauss–Seidel iteration matrix ρ_LGSx(θ1, θ2) with δ = γ = 10⁻².

as

Θ = {(θ1, θ2) | −π ≤ θ1, θ2 ≤ π, θ1 ≠ 0 and θ2 ≠ 0}.

B. Fourier Analysis of 2D Problem

The matrix A has a block tridiagonal form A = tri[A_{j,j−1}, A_{j,j}, A_{j,j+1}], where the diagonal matrices are the same, A_{j,j} = tri[α3, α0, α1], the subdiagonal matrices are A_{j,j−1} = tri[α7, α4, α8], and the superdiagonal matrices are A_{j,j+1} = tri[α6, α2, α5].

We now consider Fourier analysis for the iteration matrices of the point Gauss–Seidel, line Gauss–Seidel, and alternating line Gauss–Seidel methods. Similar results may be obtained for other iterative methods; see [19, 20].

For the point Gauss–Seidel (PGS) method, the coefficient matrix is split as A = D − L − U, where D is the diagonal matrix D = diag[α0, . . . , α0], −L is the strictly lower triangular part, and −U is the strictly upper triangular part of A. We have M = D − L and N = U, so that the reduction factor of the Fourier mode θ ∈ Θ with PGS [also see the stencil in (11)] is

ρ_PGS(θ1, θ2) = |α1 e^{iθ1} + α2 e^{iθ2} + α5 e^{i(θ1+θ2)} + α6 e^{i(−θ1+θ2)}| / |α0 + α3 e^{−iθ1} + α4 e^{−iθ2} + α7 e^{−i(θ1+θ2)} + α8 e^{i(θ1−θ2)}|.

It is possible to represent ρ_PGS(θ1, θ2) analytically as a function of δ, γ, and some trigonometric functions of θ1 and θ2, but the closed form is very complicated and gives no obvious indication as to how the value of ρ_PGS(θ1, θ2) changes as a function of the variables δ, γ, θ1, and θ2. Hence,


FIG. 6. Contours of the error reduction factor of the line Gauss–Seidel iteration matrix ρ_LGSx(θ1, θ2) with δ = γ = 10.

following the examples of Kettler [17], we plot the contours of ρ_PGS(θ1, θ2) as a function of θ1 and θ2 for selected values of δ and γ.

For illustration, we choose h = 1/100 and three values for δ = γ (the convection angle is around 45°). Our first choice is δ = γ = 10⁻², which represents the diffusion-dominated case. The contours are plotted in Fig. 2. The second choice is δ = γ = 10, which represents the moderate convection case, and the contours for this case are plotted in Fig. 3. The third choice is δ = γ = 10⁴, which represents the convection-dominated case, and the contours are plotted in Fig. 4.

It is clear from Figs. 2, 3, and 4 that PGS with FOCS converges for all values of δ and γ tested, but it is not a good iterative method, because ρ_PGS(θ1, θ2) → 1 as (θ1, θ2) → (0, 0).
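This behavior is easy to reproduce numerically. The sketch below (an illustration; the sampling of 100 modes per direction and the helper names are assumptions) evaluates ρ_PGS(θ1, θ2) from the stencil coefficients (11) on a uniform grid of nonzero Fourier modes and reports the maximum, which is close to 1 because of the modes near the origin:

```python
# Sketch: evaluate the PGS reduction factor rho_PGS(theta1, theta2) of the
# FOCS stencil (11) over a uniform sample of nonzero Fourier modes.
import numpy as np

def alphas(delta, gamma):
    """Stencil coefficients (11): 0=C, 1=E, 2=N, 3=W, 4=S, 5=NE, 6=NW, 7=SW, 8=SE."""
    return {0: 20 + 4 * delta**2 + 4 * gamma**2,
            1: -2 * (1 + delta)**2 - 2, 2: -2 * (1 + gamma)**2 - 2,
            3: -2 * (1 - delta)**2 - 2, 4: -2 * (1 - gamma)**2 - 2,
            5: -(1 + delta) * (1 + gamma), 6: -(1 - delta) * (1 + gamma),
            7: -(1 - delta) * (1 - gamma), 8: -(1 + delta) * (1 - gamma)}

def rho_pgs(t1, t2, a):
    e = lambda x: np.exp(1j * x)
    num = a[1] * e(t1) + a[2] * e(t2) + a[5] * e(t1 + t2) + a[6] * e(-t1 + t2)
    den = a[0] + a[3] * e(-t1) + a[4] * e(-t2) + a[7] * e(-t1 - t2) + a[8] * e(t1 - t2)
    return abs(num) / abs(den)

ks = np.arange(-50, 51)
thetas = np.pi * ks[ks != 0] / 50.0              # sample of Theta: theta1, theta2 != 0
for d in [1e-2, 10.0, 1e4]:
    a = alphas(d, d)
    rho_max = max(rho_pgs(t1, t2, a) for t1 in thetas for t2 in thetas)
    print(f"delta = gamma = {d:g}   max rho_PGS over sampled modes = {rho_max:.4f}")
```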

For the line Gauss–Seidel (LGS) method along the x-axis (the grid points with the same y-index are solved simultaneously), the coefficient matrix is also split as A = D − L − U, where D is the block diagonal matrix D = diag[A_{1,1}, A_{2,2}, . . . , A_{n,n}]. Now −L is the strictly lower block triangular matrix of the form −L = tri[A_{j,j−1}, 0, 0], and −U is the strictly upper block triangular matrix −U = tri[0, 0, A_{j,j+1}]. The splitting matrices M = D − L and N = U have the same forms, but obviously different meaning from those of PGS, so the reduction factor of the Fourier mode (θ1, θ2) ∈ Θ with LGSx is

ρ_LGSx(θ1, θ2) = |α2 e^{iθ2} + α5 e^{i(θ1+θ2)} + α6 e^{i(−θ1+θ2)}| / |α0 + α1 e^{iθ1} + α3 e^{−iθ1} + α4 e^{−iθ2} + α7 e^{−i(θ1+θ2)} + α8 e^{i(θ1−θ2)}|.

The contours of ρ_LGSx(θ1, θ2) with δ = γ = 10⁻² are depicted in Fig. 5; those with δ = γ = 10 are depicted in Fig. 6; and those with δ = γ = 10⁴ are depicted in Fig. 7.


FIG. 7. Contours of the error reduction factor of the line Gauss–Seidel iteration matrix ρ_LGSx(θ1, θ2) with δ = γ = 10⁴.

Again, LGS with FOCS converges for all values of δ and γ, but it is not a good iterative method, because ρ_LGSx(θ1, θ2) → 1 as (θ1, θ2) → (0, 0). As the line relaxation solves grid points with the same y-index simultaneously, it is clear that the Fourier mode component θ1 is reduced more than the Fourier mode component θ2. This observation suggests that one additional line Gauss–Seidel relaxation along the y-axis may reduce the Fourier mode component θ2 significantly.

The alternating line Gauss–Seidel (ALGS) method consists of one line Gauss–Seidel relaxation along the x-axis, followed by one line Gauss–Seidel relaxation along the y-axis. The iteration matrix can be defined similarly as before. The reduction factor of the Fourier mode (θ1, θ2) ∈ Θ with ALGS is the product of those of the Fourier mode with LGSx and LGSy. Hence,

ρ_ALGS(θ1, θ2) = ρ_LGSx(θ1, θ2) · ρ_LGSy(θ1, θ2)
    = [ |α2 e^{iθ2} + α5 e^{i(θ1+θ2)} + α6 e^{i(−θ1+θ2)}| / |α0 + α1 e^{iθ1} + α3 e^{−iθ1} + α4 e^{−iθ2} + α7 e^{−i(θ1+θ2)} + α8 e^{i(θ1−θ2)}| ]
    × [ |α1 e^{iθ1} + α5 e^{i(θ1+θ2)} + α8 e^{i(θ1−θ2)}| / |α0 + α2 e^{iθ2} + α3 e^{−iθ1} + α4 e^{−iθ2} + α6 e^{i(−θ1+θ2)} + α7 e^{−i(θ1+θ2)}| ].

The contours of ρ_ALGS(θ1, θ2) with δ = γ = 10⁻² are plotted in Fig. 8; those with δ = γ = 10 are plotted in Fig. 9; and those with δ = γ = 10⁴ are plotted in Fig. 10. Again, ALGS with FOCS converges for all values of δ and γ, but it is not a good iterative method, since ρ_ALGS(θ1, θ2) → 1 as (θ1, θ2) → (0, 0). We note that one additional line Gauss–Seidel relaxation after LGSx does reduce most Fourier modes significantly.


FIG. 8. Contours of the error reduction factor of the alternating line Gauss–Seidel iteration matrix ρ_ALGS(θ1, θ2) with δ = γ = 10⁻².

Our observation is that, for large values of δ and γ, the Fourier modes along the 45° angle between the x-axis and the y-axis are least reduced, compared with the Fourier modes along other directions. This phenomenon is clear in Figs. 3, 4, 6, 7, 9, and 10.

C. Smoothers for Multilevel Methods

We have shown that none of PGS, LGS, and ALGS with FOCS is a good iterative method, because all error reduction factors approach 1 as (θ1, θ2) → (0, 0). However, we find that the large error reduction factors occur in the region of the Fourier components that have a low frequency relative to the mesh size h, i.e., the region where both |θ1| and |θ2| are small, but the error reduction factors corresponding to large values of |θ1| and |θ2| are not always large. This observation gives us a hint that more advanced iterative methods utilizing the high-frequency error reduction effect of the basic relaxation methods, such as the multilevel methods [21], may be efficient.

It has long been observed that basic iterative methods are efficient only for the first few iterations, when the high-frequency errors (relative to the mesh size) are removed. The iteration process is slowed down when the errors are dominated by low-frequency components. One strategy to remove the low-frequency errors is to project them on a coarser grid with larger mesh size, on which the smooth errors (dominated by the low frequencies) become more oscillatory and, thus, are more subject to basic iterative methods. This is the basic idea of the two-level methods. Obviously, the above process of using a coarser grid to remove the low-frequency errors can be implemented recursively, as in the multilevel methods [21].


FIG. 9. Contours of the error reduction factor of the alternating line Gauss–Seidel iteration matrix ρ_ALGS(θ1, θ2) with δ = γ = 10.

Therefore, the error (Fourier mode) reduction factor of interest in the multilevel methods is the smoothing factor

ρ̄ = sup_{θ∈Θ_H} ρ(θ),

where

Θ_H = {(θ1, θ2) | −π ≤ θ1, θ2 ≤ π, |θ1| ≥ π/2 or |θ2| ≥ π/2},

i.e., Θ_H is the region of Fourier components that have a high frequency relative to the mesh size h. The smoothing factor ρ̄ tells how well a basic relaxation method (smoother) damps the high-frequency components, because the low-frequency components are left to be removed on the coarser grids. A good smoother with the standard (lower-order) discretization schemes for the diffusion-dominated problems usually has a small smoothing factor, say, ρ̄ ≤ 0.5. For the high-accuracy discretization schemes such as FOCS, we may be content with ρ̄ ≤ 0.7 for the convection-dominated problems.
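A sketch of this computation (hypothetical helper names; Θ_H is sampled on a uniform grid of modes, which only estimates the supremum) evaluates the smoothing factors of PGS, LGSx, and ALGS from the reduction factor formulas above, for the same three values of δ = γ used in the figures:

```python
# Sketch: estimated smoothing factors (maximum reduction factor over the
# high-frequency region Theta_H) of PGS, LGS along x, and ALGS with the FOCS
# stencil (11), sampled on a uniform grid of Fourier modes.
import numpy as np

def alphas(delta, gamma):
    return {0: 20 + 4 * delta**2 + 4 * gamma**2,
            1: -2 * (1 + delta)**2 - 2, 2: -2 * (1 + gamma)**2 - 2,
            3: -2 * (1 - delta)**2 - 2, 4: -2 * (1 - gamma)**2 - 2,
            5: -(1 + delta) * (1 + gamma), 6: -(1 - delta) * (1 + gamma),
            7: -(1 - delta) * (1 - gamma), 8: -(1 + delta) * (1 - gamma)}

def factors(t1, t2, a):
    """Reduction factors (rho_PGS, rho_LGSx, rho_ALGS) of the Fourier mode (t1, t2)."""
    e = lambda x: np.exp(1j * x)
    pgs = (abs(a[1]*e(t1) + a[2]*e(t2) + a[5]*e(t1+t2) + a[6]*e(-t1+t2))
           / abs(a[0] + a[3]*e(-t1) + a[4]*e(-t2) + a[7]*e(-t1-t2) + a[8]*e(t1-t2)))
    lgsx = (abs(a[2]*e(t2) + a[5]*e(t1+t2) + a[6]*e(-t1+t2))
            / abs(a[0] + a[1]*e(t1) + a[3]*e(-t1) + a[4]*e(-t2) + a[7]*e(-t1-t2) + a[8]*e(t1-t2)))
    lgsy = (abs(a[1]*e(t1) + a[5]*e(t1+t2) + a[8]*e(t1-t2))
            / abs(a[0] + a[2]*e(t2) + a[3]*e(-t1) + a[4]*e(-t2) + a[6]*e(-t1+t2) + a[7]*e(-t1-t2)))
    return np.array([pgs, lgsx, lgsx * lgsy])

ks = np.arange(-32, 33)
thetas = np.pi * ks[ks != 0] / 32.0
for d in [1e-2, 10.0, 1e4]:
    a = alphas(d, d)
    smooth = np.zeros(3)
    for t1 in thetas:
        for t2 in thetas:
            if max(abs(t1), abs(t2)) >= np.pi / 2:      # high-frequency modes only
                smooth = np.maximum(smooth, factors(t1, t2, a))
    print(f"delta = gamma = {d:g}:  PGS {smooth[0]:.3f}   LGSx {smooth[1]:.3f}   ALGS {smooth[2]:.3f}")
# The values can be compared with the contour plots in Figs. 2-10.
```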

We now reanalyze the contours of each basic iterative method. By observing Figs. 2, 5, and 8, we note that all methods are good smoothers when the problems are diffusion-dominated (small δ and γ). Since PGS is cheaper than both LGS and ALGS, it may be the most cost-effective smoother for small δ and γ.

For medium δ and γ, Figs. 3 and 6 show that PGS and LGS are marginally acceptable smoothers, because their smoothing factors are smaller than 0.7. Figures 4 and 7 show that PGS and LGS are not good smoothers for large δ and γ. However, Figs. 9 and 10 show that ALGS is a good smoother for all values of δ and γ.


FIG. 10. Contours of the error reduction factor of the alternating line Gauss–Seidel iteration matrix ρ_ALGS(θ1, θ2) with δ = γ = 10⁴.

The shape of the contours also implies that there may be a deterioration of convergence along the 45° angle between the x-axis and the y-axis.

IV. NUMERICAL EXPERIMENTS

We choose a well-known test problem of Wesseling [18] defined on the unit square (0, 1) × (0, 1) with p(x, y) = −Re cos α and q(x, y) = −Re sin α in Eq. (9). α is the so-called flow angle, which measures the relative angle formed between the grid line (x-axis) and the flow characteristics. Re is a positive integer reflecting the ratio of the convection to diffusion (the Reynolds number). Boundary conditions and the right-hand side function f(x, y) are chosen to satisfy the exact solution u(x, y) = xy(1 − x)(1 − y) exp(x + y). We use h = 1/64 and test PGS, LGS, and ALGS as basic (single-level) iterative methods and as smoothers in the multilevel methods. For the multilevel methods, we have 6 levels of grids; one pre-smoothing and one post-smoothing sweep are carried out on each level. Inter-grid transfer operators are the standard full weighting and bilinear interpolation [21]. The initial guess is u(x, y) = 0, and the computations are terminated when the residual in the discrete L2-norm is reduced by a factor of 10¹⁰. We remark that LGS and ALGS are more expensive than PGS, and ALGS is roughly twice as expensive as LGS. Each multilevel iteration is roughly equal to 5 basic iterations.
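For completeness, a short sketch of the test problem setup (assuming sympy for the symbolic differentiation; the particular values Re = 10³ and α = 45° are only examples) derives the right-hand side f(x, y) consistent with Eq. (9) and the stated exact solution:

```python
# Sketch of the test problem setup described above (assumes sympy; Re = 10^3
# and alpha = 45 degrees are example values only): derive the right-hand side
# f(x, y) of Eq. (9) from the exact solution u(x, y) = x y (1-x)(1-y) exp(x+y).
import sympy as sp

x, y = sp.symbols('x y')
Re, alpha = 1000, sp.pi / 4
p = -Re * sp.cos(alpha)                          # p(x, y) = -Re cos(alpha)
q = -Re * sp.sin(alpha)                          # q(x, y) = -Re sin(alpha)

u = x * y * (1 - x) * (1 - y) * sp.exp(x + y)    # exact solution (zero on the boundary)
# Eq. (9): u_xx + u_yy + p u_x + q u_y = -f  =>  f = -(u_xx + u_yy + p u_x + q u_y)
f = sp.simplify(-(sp.diff(u, x, 2) + sp.diff(u, y, 2) + p * sp.diff(u, x) + q * sp.diff(u, y)))

f_num = sp.lambdify((x, y), f, 'numpy')          # callable right-hand side for a solver
print(f_num(0.5, 0.5))
```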

Table I lists the number of iterations of the single-level iterative methods as we vary the values of Re and α. Table II lists similar results for the multilevel iterative methods.


TABLE I. Number of iterations of the single-level iterative methods as a function of the Reynolds number (Re) and the flow angle (α).

Single-level point Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1        7768    7770    7772    7772    7772    7770    7770
10³       240     228     226     240     258     264     274
10⁶      9390    9990   12092   14822   12048    9982    9392

Single-level line Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1        4660    4664    4664    4666    4666    4666    4666
10³        20      48     102     186     240     278     292
10⁶         6     680    3042    7472    9066    9308    9376

Single-level alternating line Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1        2338    2340    2340    2340    2340    2340    2340
10³        18      38      70      80      54      22      20
10⁶         6     638    2292    3760    2296     640       6

It is clear from Table I that PGS, LGS, and ALGS are not efficient as basic iterative methods for small and large Re, but they perform quite satisfactorily for medium Re, and this fact was not clear from the Fourier analysis performed above. From Table II, however, we see that PGS, LGS, and ALGS are efficient for small to medium Re when they are used as the smoothers in the multilevel methods. It seems that only ALGS is a suitable smoother for large Re. Furthermore, if the cost of implementing each method is taken into consideration, the performance of the standard multilevel methods is not much better than that of the basic iterative methods for large Re cases.

TABLE II. Number of iterations of the multilevel iterative methods as a function of the Reynolds number (Re) and the flow angle (α).

Multilevel point Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1          12      12      12      12      12      12      12
10³        62      57      58      60      61      62      66
10⁶      3256    1345    1561    1960    1592    1392    3256

Multilevel line Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1          12      12      12      12      12      12      12
10³        16      38      77     110     138     157     164
10⁶        10     146     492    1141    1237    1294    3249

Multilevel alternating line Gauss–Seidel

Re\α        0      15      30      45      60      75      90
1           8       8       8       8       8       8       8
10³        10      27      36      31      18       8      10
10⁶         3     138     386     627     388     139       3


TABLE III. Number of iterations and the estimated residual reduction rate of the multilevel alternating line Gauss–Seidel method with residual scaling.

(Re = 10⁶)\α          0        15        30        45        60        75        90
Iteration count        2        32        52        62        51        32         2
Reduction rate   0.00001   0.48698   0.64223   0.68978   0.63668   0.48698   0.00001

The standard multilevel methods are not efficient solvers for large Re cases, and the test results in Table II look pessimistic. For the multilevel ALGS, the numerical results seem to contradict our Fourier analysis results, which predicted that ALGS would be a good smoother for large Re. Based on our numerical experience, we remark that the problem is not the efficiency of the smoother, but the inter-grid transfer operators employed in the standard multilevel methods. We found that if we scale the residual by a scalar factor before it is transferred to the coarse level, the convergence of the multilevel methods can be dramatically accelerated. Table III tabulates the number of iterations and the estimated residual reduction rate for the multilevel ALGS with this simple acceleration technique, with a residual scaling factor of about 4.9, for Re = 10⁶. By comparing the results of Table III with those of the last row of Table II, we note that the convergence of the multilevel ALGS is significantly improved and the worst convergence rate is smaller than 0.69. Since high-accuracy solution of the convection-dominated problems is still an open area and since the topic of this article is not to actually design the multilevel methods, we refer readers to [3, 14] for some acceleration techniques. Detailed treatments of residual scaling techniques are discussed in [22, 23].

V. CONCLUDING REMARKS

We have analyzed the fourth-order compact schemes for the one- and two-dimensional convection–diffusion equations. For the 1D problem, we have shown that the coefficient matrix can be symmetrized for all cell Reynolds numbers with a diagonal similarity transformation. We gave the spectral radius of the point Jacobi iteration matrix, and we showed that the point Jacobi method with FOCS converges for all cell Reynolds numbers.

For the 2D problem, we conducted Fourier analysis to show that the point, line, and alternating line Gauss–Seidel methods converge for small to large Reynolds numbers, but the overall convergence is not fast, due to the poor reduction of the low-frequency error components. However, we showed that PGS and LGS may be good smoothers for small Reynolds number problems and ALGS may be a robust smoother for all Reynolds number problems in the multilevel methods. Numerical experiments were employed to verify the results of the Fourier analysis.

Our results showed that FOCS with basic iterative methods may provide good smoothers for the multilevel methods. However, our Fourier analysis results also revealed that a good smoother is not enough for efficient multilevel methods. For the convection-dominated problems, other component operators of the multilevel methods also need to be adjusted. We showed that a simple residual scaling acceleration technique may accelerate the convergence of the multilevel methods dramatically.

To the best of the author's knowledge, there has been no truly mesh-size independent multilevel method for the convection-dominated problems. The employment of high-accuracy discretization schemes such as FOCS can at least alleviate the requirement on fine discretization and, thus, may yield faster solution methods. The convergence and performance analysis of this article provides useful information for designing efficient iterative methods for high-accuracy numerical solution of the convection-dominated problems.

References

1. K. W. Morton, Numerical Solution of Convection–Diffusion Problems, Chapman & Hall, London, 1996.

2. A. Brandt and I. Yavneh, ‘‘Inadequacy of first-order upwind difference schemes for some recirculating flows,’’ J. Comput. Phys. 93, 128–143 (1991).

3. J. Zhang, ‘‘Accelerated high-accuracy multigrid solution of the convection–diffusion equation with high Reynolds number,’’ Numer. Methods Partial Different. Eqs. 13, 77–92 (1997).

4. S. C. R. Dennis and J. D. Hudson, ‘‘Compact h⁴ finite-difference approximations to operators of Navier–Stokes type,’’ J. Comput. Phys. 85, 390–416 (1989).

5. M. M. Gupta, R. P. Manohar, and J. W. Stephenson, ‘‘A fourth-order, cost effective and stable finite difference scheme for the convection–diffusion equation,’’ in Numerical Properties & Methodologies in Heat Transfer, Hemisphere Publishing, Washington, D. C., 1983, pp. 201–209.

6. M. M. Gupta, R. P. Manohar, and J. W. Stephenson, ‘‘A single cell high-order scheme for the convection–diffusion equation with variable coefficients,’’ Int. J. Numer. Methods Fluids 4, 641–651 (1984).

7. M. Li, T. Tang, and B. Fornberg, ‘‘A compact fourth-order finite difference scheme for the steady incompressible Navier–Stokes equations,’’ Int. J. Numer. Methods Fluids 20, 1137–1151 (1995).

8. W. F. Spotz and G. F. Carey, ‘‘High-order compact scheme for the steady stream-function vorticity equations,’’ Int. J. Numer. Methods Eng. 38, 3497–3512 (1995).

9. J. Zhang, ‘‘On convergence of iterative methods for a fourth-order discretization scheme,’’ Appl. Math. Lett. 10, 49–55 (1997).

10. W. F. Spotz, High-Order Compact Finite Difference Schemes for Computational Mechanics, Ph.D. thesis, The University of Texas at Austin, 1995.

11. L. A. Hageman and D. M. Young, Applied Iterative Methods, Academic Press, New York, 1981.

12. H. C. Elman and G. H. Golub, ‘‘Iterative methods for cyclically-reduced non-self-adjoint linear systems,’’ Math. Comp. 54, 671–700 (1990).

13. A. Segal, ‘‘Aspects of numerical methods for elliptic singular perturbation problems,’’ SIAM J. Sci. Stat. Comput. 3, 327–349 (1982).

14. J. Zhang, ‘‘Multigrid solution of the convection–diffusion equation with large Reynolds number,’’ in Preliminary Proceedings of 1996 Copper Mountain Conference on Iterative Methods, Copper Mountain, Colorado, 1996, 2, pp. 1–9.

15. S. Schaffer, ‘‘High-order multi-grid methods,’’ Math. Comp. 43, 89–115 (1984).

16. M. M. Gupta, J. Kouatchou, and J. Zhang, ‘‘Comparison of second and fourth order discretizations for multigrid Poisson solvers,’’ J. Comput. Phys. 132, 226–232 (1997).

17. R. Kettler, ‘‘Analysis and comparison of relaxation schemes in robust multigrid and preconditioned conjugate gradient methods,’’ in Multigrid Methods, W. Hackbusch and U. Trottenberg, Eds., Lecture Notes in Math., No. 960, Springer–Verlag, Berlin, 1982, pp. 502–534.

18. P. Wesseling, ‘‘A survey of Fourier smoothing analysis results,’’ in Multigrid Methods III, W. Hackbusch and U. Trottenberg, Eds., Birkhauser Verlag, Basel, 1991, pp. 105–127.

19. T. F. Chan and H. C. Elman, ‘‘Fourier analysis of iterative methods for elliptic problems,’’ SIAM Rev. 31, 20–49 (1989).


20. K. Stuben and U. Trottenberg, ‘‘Multigrid methods: fundamental algorithms, model problem analysis and applications,’’ in Multigrid Methods, W. Hackbusch and U. Trottenberg, Eds., Lecture Notes in Math., No. 960, Springer–Verlag, Berlin, 1982, pp. 1–176.

21. P. Wesseling, An Introduction to Multigrid Methods, John Wiley & Sons, Chichester, 1992.

22. J. Zhang, ‘‘Residual scaling techniques in multigrid, I: equivalence proof,’’ Appl. Math. Comput. 86, 283–303 (1997).

23. J. Zhang, ‘‘Residual scaling techniques in multigrid, II: practical applications,’’ Appl. Math. Comput., to appear.