See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/280319771

Improved PHSS iterative methods for solving saddle point problems. Article in Numerical Algorithms, June 2015. DOI: 10.1007/s11075-015-0022-6. Author: Don Liu, Louisiana Tech University. All content following this page was uploaded by Don Liu on 06 August 2015.



Numerical Algorithms · ISSN 1017-1398 · DOI 10.1007/s11075-015-0022-6

Your article is protected by copyright and all rights are held exclusively by Springer Science+Business Media New York. This e-offprint is for personal use only and shall not be self-archived in electronic repositories. If you wish to self-archive your article, please use the accepted manuscript version for posting on your own website. You may further deposit the accepted manuscript version in any repository, provided it is only made publicly available 12 months after official publication or later and provided acknowledgement is given to the original source of publication and a link is inserted to the published article on Springer's website. The link must be accompanied by the following text: "The final publication is available at link.springer.com".

Numer Algor
DOI 10.1007/s11075-015-0022-6

ORIGINAL PAPER

Improved PHSS iterative methods for solving saddle point problems

Ke Wang¹ · Jingjing Di¹ · Don Liu²

Received: 23 January 2015 / Accepted: 26 June 2015
© Springer Science+Business Media New York 2015

Abstract An improvement on a generalized preconditioned Hermitian and skew-Hermitian splitting method (GPHSS), originally presented by Pan and Wang (J. Numer. Methods Comput. Appl. 32, 174–182, 2011) for saddle point problems, is proposed in this paper and referred to as IGPHSS for simplicity. After adding a matrix to the coefficient matrix on two sides of the first equation of the GPHSS iterative scheme, both the number of iterations required for convergence and the computational time are significantly decreased. The convergence analysis is provided here. As saddle point problems are indefinite systems, the Conjugate Gradient method is unsuitable for them. The IGPHSS is compared with the Gauss-Seidel method, which requires partial pivoting due to some zero diagonal entries, and with the Uzawa and GPHSS methods. The numerical experiments show that the IGPHSS method is better than the original GPHSS and the other two relevant methods.

Keywords Saddle point problem · Gauss-Seidel method · Uzawa method · PHSS method · Preconditioning

✉ Don Liu
[email protected]

1 Department of Mathematics, College of Sciences, Shanghai University, Shanghai 200444, People's Republic of China

2 Mathematics & Statistics and Mechanical Engineering, Louisiana Tech University, Ruston, LA 71272, USA

    Author's personal copy



    1 Introduction

    The linear system of equations

$$\tilde{A}\tilde{x} = \tilde{b}, \tag{1}$$

where à is a nonsingular matrix, x̃ is the unknown state vector, and b̃ is a known load vector, appears in many different applications of scientific computing, such as computational fluid dynamics [24], constraints of optimization problems [30], linear elastic problems [8], electromagnetic problems [23], image recognition problems [15], and least squares problems [18]. In many cases, the system (1) is presented as the symmetric augmented system

$$\begin{pmatrix} A & B \\ B^{T} & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}, \tag{2}$$

where A ∈ R^{m×m} is a symmetric positive definite matrix, B ∈ R^{m×n} (m > n) is of full column rank, f ∈ R^m and g ∈ R^n. In an incompressible steady-state viscous flow, the governing equations of the fluid motion are the steady-state Stokes equation and the divergence-free condition, subject to the boundary conditions:

$$\begin{cases} \nabla p = \mu \nabla^{2} u + f, & \text{in } \Omega, \\ \nabla \cdot u = 0, & \text{in } \Omega, \\ u = u_{0}, & \text{on } \partial\Omega, \end{cases} \tag{3}$$

where μ is the dynamic viscosity of the fluid and Ω ⊂ R^d (d = 2, 3) is a bounded, connected domain with a piecewise smooth boundary ∂Ω. Appropriate discretization of the Stokes problem (3) leads to a symmetric saddle point problem of the form (2), where A is a block diagonal matrix and each of its d diagonal blocks is a discretization of the Laplace operator with the appropriate boundary conditions. Thus, A can be symmetric and positive definite. The linear systems for the Stokes problem can be interpreted as the first-order optimality conditions for the minimization problem [14]

$$\min J(u) = \frac{1}{2} \iint_{\Omega} \|\nabla u\|_{2}^{2}\, dS - \iint_{\Omega} f \cdot u\, dS, \qquad \nabla \cdot u = 0, \tag{4}$$

where ‖u‖₂ = √(u·u) is the Euclidean norm of the vector u and dS denotes the elemental area. For the linear system (2), any solution vector (xᵀ, yᵀ)ᵀ to (2) is a saddle point for the Lagrangian

$$L(x, y) = \frac{1}{2} x^{T} A x - f^{T} x + (Bx - g)^{T} y, \tag{5}$$

where y is the vector of Lagrangian multipliers. This is the reason that the linear system (2) is called a "saddle point problem". Details can be found in the review paper by Benzi, Golub and Liesen [7].

Direct methods can be effective [13, 25] for the augmented system (2) arising from the numerical solution of partial differential equations (PDEs) in two-dimensional problems. However, because of the large storage requirement and the computational intensity, direct solvers are not always used in large three-dimensional problems. Alternatively, iterative methods are more popular for large sparse systems.


Among iterative methods, stationary iterations have been popular for years as stand-alone solvers, but nowadays they are most often used as preconditioners for Krylov subspace methods (equivalently, the convergence of these stationary iterations can be accelerated by Krylov subspace methods). One example is the Uzawa method [9, 10, 32], which was proposed by Uzawa in 1958 and has been popular for solving Stokes flows [7] in fluid dynamics. The Uzawa method will be discussed later in the numerical experiment section.

Other iterative methods are SOR-like [12], GSOR [6], GSSOR [33], GAOR [27] and their extensions [19–21, 26, 28]. These methods utilize classical iterative ideas, such as the Jacobi, Gauss-Seidel, SOR and AOR methods. Because they require all the diagonal entries of the coefficient matrix to be nonzero, the classical iterative methods obviously cannot be applied directly to the augmented system (2). The Conjugate Gradient (CG) method is efficient for symmetric positive definite systems; however, it has been proved that the saddle point problem (2) is indefinite with m positive and n negative eigenvalues, see [7]; therefore, the CG method is not suitable for solving (2). Krylov subspace methods are most popular in recent years and can be used as the inner iteration processes at each step of the outer iteration of many preconditioned methods [7], such as HSS [3, 4], HSS-like [1], PHSS [5], AHSS [2] and GLHSS [11], as well as new preconditioners [16]. The PHSS method presented by Bai, Golub and Pan [5] is very popular because it introduces a preconditioner according to the special structure of the augmented system (2), which improves the convergence of the PHSS method. With this idea, many authors have suggested various iterative methods for (2).
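The indefiniteness claim above can be checked numerically on a small instance; the matrices below are random illustrative stand-ins, not taken from the paper.

```python
import numpy as np

# Inertia check: for symmetric positive definite A and full-column-rank B,
# the augmented matrix [[A, B], [B^T, 0]] of system (2) has m positive and
# n negative eigenvalues, so CG is not applicable.
rng = np.random.default_rng(0)
m, n = 8, 3
G = rng.standard_normal((m, m))
A = G @ G.T + m * np.eye(m)        # symmetric positive definite
B = rng.standard_normal((m, n))    # generically of full column rank

K = np.block([[A, B], [B.T, np.zeros((n, n))]])
eigs = np.linalg.eigvalsh(K)
n_pos, n_neg = int((eigs > 0).sum()), int((eigs < 0).sum())
print(n_pos, n_neg)                # m positive and n negative eigenvalues
```

This is an instance of Sylvester's law of inertia for such block matrices: the Schur complement −BᵀA⁻¹B is negative definite, contributing the n negative eigenvalues.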

Pan and Wang [22] considered the generalized preconditioned Hermitian and skew-Hermitian splitting (GPHSS) method, introducing two relaxation parameters ω and τ instead of the single parameter α in the PHSS method, which further improves the convergence; the GPHSS reduces to the PHSS when the two parameters are equal. In this paper, the GPHSS method is reviewed and an improvement is made, which significantly accelerates the solution process.

This paper is organized as follows. In Section 2, the GPHSS method is briefly reviewed. In Section 3, the improved algorithm (IGPHSS) is presented and the convergence analysis is provided. In Section 4, the choice of relaxation parameters is discussed. In Section 5, numerical examples are given to show the significance of the improvement in the IGPHSS. The conclusion is drawn in Section 6.

    2 GPHSS method for augmented system

First, the original GPHSS method, proposed by Pan and Wang [22] for augmented systems, is briefly reviewed here. The system (2) can be written [12] in the skew-symmetric form:

$$\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix}, \tag{6}$$

or in the matrix-vector form

$$\mathcal{A} z = b, \tag{7}$$


where

$$\mathcal{A} = \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}, \qquad z = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad b = \begin{pmatrix} f \\ -g \end{pmatrix}.$$

Define a preconditioning matrix P as

$$P = \begin{pmatrix} A & 0 \\ 0 & Q \end{pmatrix},$$

where Q ∈ R^{n×n} is nonsingular and symmetric. Denote H = ½(𝒜 + 𝒜ᵀ) and S = ½(𝒜 − 𝒜ᵀ); then the matrix form of the GPHSS algorithm is:

$$\begin{cases} (\Omega P + H) \begin{pmatrix} x^{(k+\frac{1}{2})} \\ y^{(k+\frac{1}{2})} \end{pmatrix} = (\Omega P - S) \begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + b, \\[6pt] (\Omega P + S) \begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = (\Omega P - H) \begin{pmatrix} x^{(k+\frac{1}{2})} \\ y^{(k+\frac{1}{2})} \end{pmatrix} + b, \end{cases} \tag{8}$$

where

$$\Omega = \begin{pmatrix} \omega I_{m} & 0 \\ 0 & \tau I_{n} \end{pmatrix},$$

I_m and I_n are the m×m and n×n identity matrices, and ω, τ > 0 are two relaxation parameters.

The iteration matrix is

$$M(\omega, \tau) = (\Omega P + S)^{-1} (\Omega P - H) (\Omega P + H)^{-1} (\Omega P - S),$$

and the iterative scheme is:

$$\begin{cases} x^{(k+\frac{1}{2})} = \frac{\omega}{1+\omega} x^{(k)} + \frac{1}{1+\omega} A^{-1} (f - B y^{(k)}), \\[4pt] y^{(k+\frac{1}{2})} = y^{(k)} + \frac{1}{\tau} Q^{-1} (B^{T} x^{(k)} - g), \\[4pt] y^{(k+1)} = \tau D^{-1} Q y^{(k+\frac{1}{2})} + D^{-1} \big( (1 - \tfrac{1}{\omega}) B^{T} x^{(k+\frac{1}{2})} + \tfrac{1}{\omega} B^{T} A^{-1} f - g \big), \\[4pt] x^{(k+1)} = \frac{\omega - 1}{\omega} x^{(k+\frac{1}{2})} + \frac{1}{\omega} A^{-1} (f - B y^{(k+1)}), \end{cases} \tag{9}$$

where D = ω⁻¹BᵀA⁻¹B + τQ ∈ R^{n×n}. The optimal ω and τ are

$$\omega^{*} = \frac{\sigma_{\min} + \sigma_{\max}}{2\sqrt{\sigma_{\min}\sigma_{\max}}}, \qquad \tau^{*} = \frac{2\sigma_{\min}\sigma_{\max}}{\sqrt{\sigma_{\min}\sigma_{\max}}\,(\sigma_{\min} + \sigma_{\max})},$$

where σ_min and σ_max are the smallest and largest positive singular values of A^{-1/2} B Q^{-1/2}. When ω = τ, the GPHSS method (8) becomes the PHSS method [5].
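As a sanity check, the matrix form (8) can be run directly on a small random saddle point system (an illustrative stand-in, not one of the paper's examples), with Q = BᵀB as in Remark 1 below. Here ω = τ, i.e., the PHSS special case, whose convergence for any positive parameter is known from [5].

```python
import numpy as np

# Sketch of the GPHSS matrix iteration (8) with Q = B^T B on random test data.
rng = np.random.default_rng(1)
m, n = 10, 4
G = rng.standard_normal((m, m))
A = G @ G.T + m * np.eye(m)        # symmetric positive definite
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(m), rng.standard_normal(n)

Acal = np.block([[A, B], [-B.T, np.zeros((n, n))]])   # skew form (6)
b = np.concatenate([f, -g])
H = (Acal + Acal.T) / 2                                # Hermitian part
S = (Acal - Acal.T) / 2                                # skew-Hermitian part
Q = B.T @ B
P = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), Q]])

omega = tau = 1.0                                      # omega = tau: PHSS case
OP = np.diag(np.r_[omega * np.ones(m), tau * np.ones(n)]) @ P

# Iteration matrix M(omega, tau) and its spectral radius
M = np.linalg.solve(OP + S, (OP - H) @ np.linalg.solve(OP + H, OP - S))
rho = np.abs(np.linalg.eigvals(M)).max()
print(rho)                                             # should be below 1

z = np.zeros(m + n)
for _ in range(500):
    z_half = np.linalg.solve(OP + H, (OP - S) @ z + b)   # first half-step of (8)
    z = np.linalg.solve(OP + S, (OP - H) @ z_half + b)   # second half-step of (8)
err = np.linalg.norm(z - np.linalg.solve(Acal, b))
```

Each half-step only requires solves with the block-diagonal ΩP + H and the shifted skew part ΩP + S, which is the point of the splitting.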

Remark 1 The nonsingular matrix block Q in the preconditioning matrix P can be chosen as in [12], i.e., in the following three cases:

(I) Q = BᵀB;  (II) Q = BᵀA⁻¹B;  (III) Q = αI.

In this paper, the first choice, Q = BᵀB, is used in the numerical experiments.


    3 Improved GPHSS method

In this paper, a significant improvement on the GPHSS method is made by adding a matrix B̃ to the coefficient matrices on both sides of the first equation of (8):

$$\begin{cases} (\Omega P + H + \tilde{B}) \begin{pmatrix} x^{(k+\frac{1}{2})} \\ y^{(k+\frac{1}{2})} \end{pmatrix} = (\Omega P - S + \tilde{B}) \begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + b, \\[6pt] (\Omega P + S) \begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = (\Omega P - H) \begin{pmatrix} x^{(k+\frac{1}{2})} \\ y^{(k+\frac{1}{2})} \end{pmatrix} + b, \end{cases} \tag{10}$$

where

$$\tilde{B} = \begin{pmatrix} 0 & 0 \\ -B^{T} & 0 \end{pmatrix}.$$

This improved algorithm (10) is denoted as IGPHSS from

now on. Suppose that Q ∈ R^{n×n} is symmetric and positive definite. Given the initial vectors x^{(0)} ∈ R^m and y^{(0)} ∈ R^n, as well as the relaxation factors ω > 0, τ > 0, for k = 0, 1, 2, ..., until the sequence of iterates (x^{(k)T}, y^{(k)T})^T converges, the IGPHSS algorithm can be described as below:

$$\begin{cases} x^{(k+\frac{1}{2})} = \frac{\omega}{1+\omega} x^{(k)} + \frac{1}{1+\omega} A^{-1} (f - B y^{(k)}), \\[4pt] y^{(k+\frac{1}{2})} = y^{(k)} + \frac{1}{\tau} Q^{-1} (B^{T} x^{(k+\frac{1}{2})} - g), \\[4pt] y^{(k+1)} = \tau D^{-1} Q y^{(k+\frac{1}{2})} + D^{-1} \big( (1 - \tfrac{1}{\omega}) B^{T} x^{(k+\frac{1}{2})} + \tfrac{1}{\omega} B^{T} A^{-1} f - g \big), \\[4pt] x^{(k+1)} = \frac{\omega - 1}{\omega} x^{(k+\frac{1}{2})} + \frac{1}{\omega} A^{-1} (f - B y^{(k+1)}), \end{cases} \tag{11}$$

where D = ω⁻¹BᵀA⁻¹B + τQ ∈ R^{n×n}.

It is noticed that (11) is similar to (9); the only difference is in the second equation, where x^{(k+1/2)} is used instead of x^{(k)}. However, it is this slight change that improves the convergence rate significantly, because adding the matrix B̃ allows the updated value of x to be used in computing y^{(k+1/2)}. This will be demonstrated in the following numerical experiments.
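The componentwise scheme (11) can be sketched as follows, with Q = BᵀB; the test system is a small random stand-in, not one of the paper's examples. The parameter pair ω ≤ 1, τ > 0 lies inside the convergence region established by Theorem 1.

```python
import numpy as np

# Sketch of the IGPHSS scheme (11); note y_half uses x_half, not x.
rng = np.random.default_rng(2)
m, n = 12, 5
G = rng.standard_normal((m, m))
A = G @ G.T + m * np.eye(m)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(m), rng.standard_normal(n)

Q = B.T @ B
omega, tau = 1.0, 0.05
Ainv = np.linalg.inv(A)                      # explicit inverse only for this tiny demo
D = (1 / omega) * B.T @ Ainv @ B + tau * Q

x, y = np.zeros(m), np.zeros(n)
r0 = np.linalg.norm(np.r_[f, -g])            # residual of (7) at the zero guess
for its in range(1, 5001):
    x_half = omega / (1 + omega) * x + Ainv @ (f - B @ y) / (1 + omega)
    y_half = y + np.linalg.solve(Q, B.T @ x_half - g) / tau   # the IGPHSS change
    y = np.linalg.solve(D, tau * Q @ y_half
                        + (1 - 1 / omega) * (B.T @ x_half)
                        + B.T @ (Ainv @ f) / omega - g)
    x = (omega - 1) / omega * x_half + Ainv @ (f - B @ y) / omega
    r = np.linalg.norm(np.r_[f - A @ x - B @ y, -g + B.T @ x])
    if r / r0 < 1e-6:
        break
print(its, r / r0)
```

At convergence (x, y) solves the skew-symmetric system (6), which can be confirmed against a direct solve.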

To analyze the convergence of the IGPHSS, let B̂ = A^{-1/2} B Q^{-1/2} and

$$\hat{\mathcal{A}} = P^{-\frac{1}{2}} \mathcal{A} P^{-\frac{1}{2}} = \begin{pmatrix} I & \hat{B} \\ -\hat{B}^{T} & 0 \end{pmatrix}.$$

After straightforward computations, the iteration matrix of the IGPHSS method is obtained as below:

$$M(\omega, \tau) = (\Omega P + S)^{-1} (\Omega P - H) (\Omega P + H + \tilde{B})^{-1} (\Omega P - S + \tilde{B}) = P^{-\frac{1}{2}} \hat{M}(\omega, \tau) P^{\frac{1}{2}}, \tag{12}$$

where

$$\hat{M}(\omega, \tau) = (\Omega + \hat{S})^{-1} (\Omega - \hat{H}) (\Omega + \hat{H} + \hat{\tilde{B}})^{-1} (\Omega - \hat{S} + \hat{\tilde{B}}),$$

and

$$\hat{H} = \frac{1}{2}(\hat{\mathcal{A}} + \hat{\mathcal{A}}^{T}) = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}, \quad \hat{S} = \frac{1}{2}(\hat{\mathcal{A}} - \hat{\mathcal{A}}^{T}) = \begin{pmatrix} 0 & \hat{B} \\ -\hat{B}^{T} & 0 \end{pmatrix}, \quad \hat{\tilde{B}} = \begin{pmatrix} 0 & 0 \\ -\hat{B}^{T} & 0 \end{pmatrix}.$$


Equation (12) indicates that the iteration matrix M(ω,τ) is similar to M̂(ω,τ). Therefore,

$$\rho(M(\omega, \tau)) = \rho(\hat{M}(\omega, \tau)), \tag{13}$$

where ρ(·) denotes the spectral radius of a matrix.

A lemma and the convergence theorem of the IGPHSS method for solving the augmented system (2) are given below. The lemma will be used in the subsequent proof of the main theorem.

Lemma 1 [31] Both roots of the real quadratic equation x² − bx + c = 0 are less than one in modulus if and only if |c| < 1 and |b| < 1 + c.
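The lemma is easy to spot-check numerically by comparing the criterion against direct root computation:

```python
import numpy as np

# Spot-check of Lemma 1: both roots of x^2 - b x + c = 0 lie strictly inside
# the unit circle iff |c| < 1 and |b| < 1 + c.
def roots_inside(b, c):
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1))

def criterion(b, c):
    return abs(c) < 1 and abs(b) < 1 + c

rng = np.random.default_rng(3)
for b, c in rng.uniform(-3, 3, size=(2000, 2)):
    assert roots_inside(b, c) == criterion(b, c)
print("Lemma 1 agrees with direct root computation on 2000 samples")
```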

Theorem 1 Suppose that Q ∈ R^{n×n} is symmetric and positive definite, and B ∈ R^{m×n} has full column rank. Let σ_k (k = 1, 2, ..., n) be the positive singular values of B̂ = A^{-1/2} B Q^{-1/2}. Then the IGPHSS method is convergent for all ω, τ such that

$$0 < \omega \le 1,\ \tau > 0 \qquad \text{or} \qquad \omega > 1,\ \tau > \frac{\sigma_{\max}^{2}(\omega - 1)}{2\omega^{2}},$$

where σ_max = max{σ₁, σ₂, ..., σ_n}.

Proof Since M(ω,τ) and M̂(ω,τ) have the same eigenvalues, the eigenvalues of M̂(ω,τ) can be determined instead. It is easy to see that

$$\Omega \pm \hat{H} = \begin{pmatrix} (\omega \pm 1) I_{m} & 0 \\ 0 & \tau I_{n} \end{pmatrix}, \qquad \Omega \pm \hat{S} = \begin{pmatrix} \omega I_{m} & \pm \hat{B} \\ \mp \hat{B}^{T} & \tau I_{n} \end{pmatrix},$$

and

$$\Omega + \hat{S} = \begin{pmatrix} I_{m} & 0 \\ -\omega^{-1} \hat{B}^{T} & I_{n} \end{pmatrix} \begin{pmatrix} \omega I_{m} & \hat{B} \\ 0 & S(\omega, \tau) \end{pmatrix},$$

where S(ω,τ) = τI_n + ω⁻¹B̂ᵀB̂. Straightforward calculations yield

$$\hat{M}(\omega, \tau) = \begin{pmatrix} M_{11}(\omega, \tau) & M_{12}(\omega, \tau) \\ M_{21}(\omega, \tau) & M_{22}(\omega, \tau) \end{pmatrix},$$

where

$$\begin{aligned} M_{11}(\omega, \tau) &= \tfrac{\omega - 1}{\omega + 1} I - \tfrac{2\omega - 1}{\omega(\omega + 1)} \hat{B} S(\omega, \tau)^{-1} \hat{B}^{T}, & M_{12}(\omega, \tau) &= \tfrac{1}{\omega + 1} \hat{B} - \tfrac{3\tau}{\omega + 1} \hat{B} S(\omega, \tau)^{-1}, \\ M_{21}(\omega, \tau) &= \tfrac{2\omega - 1}{\omega + 1} S(\omega, \tau)^{-1} \hat{B}^{T}, & M_{22}(\omega, \tau) &= -\tfrac{2\omega - 1}{\omega + 1} I + \tfrac{3\omega\tau}{\omega + 1} S(\omega, \tau)^{-1}. \end{aligned}$$

Suppose the singular value decomposition (SVD) of B̂ is U Σ₁ Vᵀ, where U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal matrices,

$$\Sigma_{1} = \begin{pmatrix} \Sigma \\ 0 \end{pmatrix}, \qquad \Sigma = \operatorname{diag}(\sigma_{1}, \sigma_{2}, \cdots, \sigma_{n}) \in \mathbb{R}^{n \times n}.$$


Then S(ω,τ) = V(τI + ω⁻¹Σ²)Vᵀ = V D̂ Vᵀ with D̂ = τI + ω⁻¹Σ², and

$$\begin{aligned} M_{11}(\omega, \tau) &= U \begin{pmatrix} \tfrac{\omega - 1}{\omega + 1} I - \tfrac{2\omega - 1}{\omega(\omega + 1)} \hat{D}^{-1} \Sigma^{2} & 0 \\ 0 & \tfrac{\omega - 1}{\omega + 1} I \end{pmatrix} U^{T}, \qquad M_{12}(\omega, \tau) = U \begin{pmatrix} \tfrac{1}{\omega + 1} \Sigma - \tfrac{3\tau}{\omega + 1} \hat{D}^{-1} \Sigma \\ 0 \end{pmatrix} V^{T}, \\ M_{21}(\omega, \tau) &= V \begin{pmatrix} \tfrac{2\omega - 1}{\omega + 1} \hat{D}^{-1} \Sigma & 0 \end{pmatrix} U^{T}, \qquad M_{22}(\omega, \tau) = V \left( -\tfrac{2\omega - 1}{\omega + 1} I + \tfrac{3\omega\tau}{\omega + 1} \hat{D}^{-1} \right) V^{T}. \end{aligned}$$

Let 𝒬 = diag(U, V); by the orthogonal similarity transform 𝒬ᵀ M̂(ω,τ) 𝒬, the matrix M̂(ω,τ) becomes

$$\begin{pmatrix} \tfrac{\omega - 1}{\omega + 1} I - \tfrac{2\omega - 1}{\omega(\omega + 1)} \hat{D}^{-1} \Sigma^{2} & 0 & \tfrac{1}{\omega + 1} \Sigma - \tfrac{3\tau}{\omega + 1} \hat{D}^{-1} \Sigma \\ 0 & \tfrac{\omega - 1}{\omega + 1} I & 0 \\ \tfrac{2\omega - 1}{\omega + 1} \hat{D}^{-1} \Sigma & 0 & -\tfrac{2\omega - 1}{\omega + 1} I + \tfrac{3\omega\tau}{\omega + 1} \hat{D}^{-1} \end{pmatrix}.$$

The matrix M̂(ω,τ) has m − n repeated eigenvalues (ω−1)/(ω+1); the non-repeated ones are the eigenvalues of the matrices

$$M_{k}(\omega, \tau) = \frac{1}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} \begin{pmatrix} (\omega - 1)\tau\omega - \omega\sigma_{k}^{2} & \sigma_{k}^{3} - 2\tau\omega\sigma_{k} \\ \omega(2\omega - 1)\sigma_{k} & (\omega + 1)\tau\omega - (2\omega - 1)\sigma_{k}^{2} \end{pmatrix},$$

for k = 1, 2, ..., n. The characteristic equation of M_k(ω,τ) is

$$(\omega + 1)(\tau\omega + \sigma_{k}^{2})\lambda^{2} - \left[ 2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2} \right] \lambda + \tau\omega(\omega - 1) = 0,$$

that is,

$$\lambda^{2} - \frac{2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2}}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} \lambda + \frac{(\omega - 1)\tau\omega}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} = 0,$$

for ω > 0, τ > 0.

By Lemma 1, |λ| < 1 if and only if both of the following inequalities hold:

$$\left| \frac{(\omega - 1)\tau\omega}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} \right| < 1, \tag{14}$$

and

$$\left| \frac{2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2}}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} \right| < 1 + \frac{(\omega - 1)\tau\omega}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})}. \tag{15}$$

The first inequality (14) gives

$$-(\omega + 1)(\tau\omega + \sigma_{k}^{2}) < (\omega - 1)\tau\omega < (\omega + 1)(\tau\omega + \sigma_{k}^{2}),$$

i.e., 2τω² + σ_k²ω + σ_k² > 0 and 2τω + σ_k²ω + σ_k² > 0. These two inequalities are obviously true for ω > 0, τ > 0. It follows from the second inequality (15) that

$$2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2} < (\omega + 1)(\tau\omega + \sigma_{k}^{2}) + (\omega - 1)\tau\omega,$$

that is, 4σ_k²ω > 0, which is obvious for ω > 0. Based on (15), the following must also be valid:

$$-(\omega + 1)(\tau\omega + \sigma_{k}^{2}) - (\omega - 1)\tau\omega < 2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2},$$


i.e.,

$$2\tau\omega^{2} + (1 - \omega)\sigma_{k}^{2} > 0, \tag{16}$$

which holds for 0 < ω ≤ 1, τ > 0. When ω > 1, (16) requires τ > σ_k²(ω−1)/(2ω²); hence, for τ > σ_max²(ω−1)/(2ω²), the inequality (16) also holds. Therefore, for

$$0 < \omega \le 1,\ \tau > 0 \qquad \text{or} \qquad \omega > 1,\ \tau > \frac{\sigma_{\max}^{2}(\omega - 1)}{2\omega^{2}},$$

we have |λ| < 1 and the IGPHSS method is convergent. This concludes the proof.
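The theorem can also be verified numerically by forming the IGPHSS iteration matrix of (12) directly and evaluating its spectral radius at parameter pairs inside the stated region; the test matrices below are random stand-ins.

```python
import numpy as np

# Numerical check of Theorem 1 on a random instance with Q = B^T B.
rng = np.random.default_rng(4)
m, n = 9, 4
G = rng.standard_normal((m, m))
A = G @ G.T + m * np.eye(m)
B = rng.standard_normal((m, n))

Acal = np.block([[A, B], [-B.T, np.zeros((n, n))]])
H = (Acal + Acal.T) / 2
S = (Acal - Acal.T) / 2
Q = B.T @ B
P = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), Q]])
Bt = np.block([[np.zeros((m, m)), np.zeros((m, n))],
               [-B.T, np.zeros((n, n))]])       # the added matrix B-tilde

# sigma_max of A^{-1/2} B Q^{-1/2}, needed for the omega > 1 branch
L = np.linalg.cholesky(A)
Bhat = np.linalg.solve(L, B) @ np.linalg.inv(np.linalg.cholesky(Q)).T
smax = np.linalg.svd(Bhat, compute_uv=False).max()

def rho(omega, tau):
    OP = np.diag(np.r_[omega * np.ones(m), tau * np.ones(n)]) @ P
    M = np.linalg.solve(OP + S,
                        (OP - H) @ np.linalg.solve(OP + H + Bt, OP - S + Bt))
    return np.abs(np.linalg.eigvals(M)).max()

# Two points with omega <= 1, and one with omega > 1 and tau above the threshold
radii = [rho(0.5, 0.2), rho(1.0, 1.0),
         rho(2.0, 1.5 * smax**2 * (2.0 - 1.0) / (2 * 2.0**2))]
print(radii)
```

All three radii should lie strictly below one, in agreement with the convergence region of the theorem.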

    4 The relaxation parameters

According to the theory of iterative methods, the optimal relaxation parameters are

$$(\omega^{*}, \tau^{*}) = \operatorname*{argmin}_{(\omega, \tau)} \rho(M(\omega, \tau)),$$

where argmin_{(ω,τ)} denotes the ω, τ at which the spectral radius of the iteration matrix M(ω,τ) attains its minimum, cf. [6, 31].

To get ω* and τ*, it is necessary to analyze the characteristic equation of M(ω,τ). Because of (13), the matrix M̂(ω,τ) is analyzed instead. Based on the proof of Theorem 1, the matrix M̂(ω,τ) has m − n repeated eigenvalues (ω−1)/(ω+1) and the non-repeated eigenvalues of the matrices

$$M_{k}(\omega, \tau) = \frac{1}{(\omega + 1)(\tau\omega + \sigma_{k}^{2})} \begin{pmatrix} (\omega - 1)\tau\omega - \omega\sigma_{k}^{2} & \sigma_{k}^{3} - 2\tau\omega\sigma_{k} \\ \omega(2\omega - 1)\sigma_{k} & (\omega + 1)\tau\omega - (2\omega - 1)\sigma_{k}^{2} \end{pmatrix},$$

which have the characteristic equation

$$(\omega + 1)(\tau\omega + \sigma_{k}^{2})\lambda^{2} - \left[ 2\tau\omega^{2} - (3\omega - 1)\sigma_{k}^{2} \right] \lambda + \tau\omega(\omega - 1) = 0, \qquad k = 1, 2, \cdots, n.$$

Thus, theoretically, all the eigenvalues λ of M̂(ω,τ), which depend on ω and τ, are available, and the optimal parameters could be obtained by minimizing the largest eigenvalue modulus. Unfortunately, for most iterative methods, especially those with multiple parameters, this analysis is very complicated, so it is very difficult to obtain the optimal parameters analytically. The parameters in this paper are chosen based on prior experience and trial and error. Numerical results indicate, however, that these parameters can be chosen based on the optimal parameters of the GPHSS method.
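In practice, the characteristic equation above makes a cheap grid search possible: given the singular values σ_k, the spectral radius at any (ω, τ) is the largest modulus among (ω−1)/(ω+1) and the quadratic's roots. The σ values below are made-up stand-ins for illustration.

```python
import numpy as np

# Grid search for good (omega, tau) directly from the characteristic equation
#   (omega+1)(tau*omega + s^2) l^2 - [2 tau omega^2 - (3 omega - 1) s^2] l
#       + tau omega (omega - 1) = 0.
sigmas = np.array([0.15, 0.35, 0.8])   # hypothetical singular values of B-hat

def spectral_radius(omega, tau, m_minus_n=1):
    r = abs(omega - 1) / (omega + 1) if m_minus_n > 0 else 0.0
    for s in sigmas:
        a2 = (omega + 1) * (tau * omega + s**2)
        a1 = -(2 * tau * omega**2 - (3 * omega - 1) * s**2)
        a0 = tau * omega * (omega - 1)
        r = max(r, np.abs(np.roots([a2, a1, a0])).max())
    return r

grid = [(w, t) for w in np.linspace(0.1, 2.0, 20)
        for t in np.linspace(0.01, 1.0, 25)]
best = min(grid, key=lambda p: spectral_radius(*p))
print(best, spectral_radius(*best))
```

This mirrors how parameters "chosen by trial and error" can be guided once estimates of the σ_k are available.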

    5 Numerical experiments

In this section, four examples are given to illustrate the accuracy of the IGPHSS method. The first two examples come from [17], the third one is from [22], and the fourth one is the steady Stokes flow problem [5, 14]. Results are compared with the Uzawa [7]


and GPHSS [22] methods. For the Uzawa method, with some initial guesses x₀ and y₀, the iteration scheme for (2) is given as below:

$$\begin{cases} A x_{k+1} = f - B y_{k}, \\ y_{k+1} = y_{k} + \omega (B^{T} x_{k+1} - g), \end{cases} \tag{17}$$

where ω > 0 is a relaxation parameter, and the optimal ω is

$$\omega^{*} = \frac{2}{\lambda_{\min} + \lambda_{\max}},$$

where λ_min and λ_max are the smallest and largest eigenvalues of BᵀA⁻¹B.

All computations were completed with MATLAB 7.12 on a single 2.70 GHz CPU with 4.00 GB RAM. In these experiments, Q = BᵀB, the initial guess is the zero vector, and the stopping criterion is

$$\frac{\| r^{(k)} \|_{2}}{\| r^{(0)} \|_{2}} < 10^{-6}, \qquad \text{or} \qquad k = 5000,$$

where r^{(k)} is the residual vector after k iterations. Tables 1 through 4 list detailed results of the IGPHSS method, such as the number of iterations required for convergence, CPU time, and relative error, in comparison with the Gauss-Seidel, Uzawa and GPHSS methods, where ω* and τ* are the optimal parameters of the Uzawa and GPHSS methods, respectively.

As mentioned in the Introduction, classical methods such as Jacobi, Gauss-Seidel and SOR cannot be applied directly to this problem because of the zero diagonal block. Therefore, in the numerical experiments, the Gauss-Seidel method is implemented on the pivoted system:

$$E \mathcal{A} z = E b, \tag{18}$$

which has the same solution as the saddle point problem (7), where E is an elementary matrix performing partial pivoting on 𝒜. Note that although Gauss-Seidel can be applied to the system (18) obtained by this transformation, convergence is not guaranteed: partial pivoting neither guarantees that the diagonal entries of E𝒜 are nonzero nor ensures diagonal dominance, let alone symmetric positive definiteness, which are sufficient conditions for the convergence of the Gauss-Seidel method.
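The Uzawa scheme (17) with the optimal ω* and the paper's stopping rule can be sketched as follows; the test system is a random stand-in, not one of the paper's examples.

```python
import numpy as np

# Uzawa iteration (17) with omega* = 2 / (lambda_min + lambda_max) of B^T A^{-1} B,
# stopping when ||r_k|| / ||r_0|| < 1e-6 or k = 5000.
rng = np.random.default_rng(5)
m, n = 15, 6
G = rng.standard_normal((m, m))
A = G @ G.T + m * np.eye(m)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(m), rng.standard_normal(n)

Schur = B.T @ np.linalg.solve(A, B)          # Schur complement B^T A^{-1} B
lam = np.linalg.eigvalsh(Schur)
omega = 2.0 / (lam.min() + lam.max())        # optimal relaxation parameter

x, y = np.zeros(m), np.zeros(n)
r0 = np.linalg.norm(np.r_[f, g])             # residual of (2) at the zero guess
for k in range(1, 5001):
    x = np.linalg.solve(A, f - B @ y)        # A x_{k+1} = f - B y_k
    y = y + omega * (B.T @ x - g)            # y_{k+1} = y_k + omega (B^T x_{k+1} - g)
    r = np.linalg.norm(np.r_[f - A @ x - B @ y, g - B.T @ x])
    if r / r0 < 1e-6:
        break
print(k, r / r0)
```

Each Uzawa step requires one solve with A; the contraction rate is governed by the condition number of the Schur complement, which is why the method is sensitive to the problem data in the tables below.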

Example 1 Consider the (m + n) × (m + n) augmented system (2) with

$$A = (a_{ij})_{m \times m}: \quad a_{ij} = \begin{cases} i + j, & i = j, \\ -\frac{1}{m}, & i \ne j, \end{cases} \qquad 1 \le i, j \le m,$$

$$B = (b_{ij})_{m \times n}: \quad b_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j, \end{cases} \qquad 1 \le i \le m,\ 1 \le j \le n,$$

f = (1, 0, ..., 0)ᵀ and g = (1, 0, ..., 0)ᵀ.

From [29], it is noticed that A is a positive definite M-matrix, and B is obviously a full column rank matrix. However, as mentioned in the Introduction, the big augmented coefficient matrix 𝒜 is not positive definite, hence the Conjugate Gradient method cannot be used for this problem. According to the analysis in the paper, the Uzawa


Fig. 1 The shape of the pivoted coefficient matrix E𝒜 for Example 1 (m = 20, n = 5; nz = 410, where nz is the number of nonzero entries)

(17), GPHSS (9) and IGPHSS (11) algorithms can be applied to this problem, and after pivoting the Gauss-Seidel method can also be applied, although it may not converge.

In the numerical experiment, partial pivoting was used to obtain the system (18), and the Gauss-Seidel method was used to solve (18), which should yield the same solution as the original problem. The optimal relaxation parameters were calculated for the Uzawa and GPHSS methods, and these two methods were implemented with the optimal parameters. For the IGPHSS method, ω = 0.44 and τ = 0.01. By varying m and n, the numerical results in Table 1 are obtained.

From Table 1, it is shown that Gauss-Seidel and IGPHSS have the same CPU time, while Uzawa and GPHSS need much more: Uzawa takes almost four times the CPU time of GPHSS, and GPHSS needs much more time than IGPHSS as m and n become larger. The number of iterations of Uzawa is much larger than that of the other three methods and is sensitive to m and n, while the other three are not as sensitive. In this example, IGPHSS and Gauss-Seidel are better than the rest because they need the same computational time, although Gauss-Seidel takes fewer iterations. Gauss-Seidel uses the fewest iterations because the pivoted coefficient matrix E𝒜 is close to a lower triangular matrix, see Fig. 1. It is interesting to see that Gauss-Seidel has very high precision when m = n. This is because E𝒜 has the fewest rows with more than one entry and is a lower triangular matrix in that case, see Fig. 2.

Table 1 shows that the Uzawa method needs many more iterations and much more CPU time. The GPHSS method needs more iterations and CPU time than the IGPHSS method even with the optimal parameters. Thus, the GPHSS method is better than the Uzawa method, and the IGPHSS method is better than the GPHSS method; the improvement is effective for this problem. After pivoting, the Gauss-Seidel method is the best choice for this problem, but in practice the pivoted matrix E𝒜 is generally not triangular, in which case Gauss-Seidel may not converge very fast and could even diverge, see Example 4.
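The Example 1 matrices are easy to assemble and check; the reading of the garbled definition as diagonal case i = j and off-diagonal case i ≠ j is an assumption where the typesetting lost the inequality signs.

```python
import numpy as np

# Construction of the Example 1 matrices (assumed reading: a_ij = i + j on the
# diagonal, a_ij = -1/m off the diagonal; b_ij = 1 iff i = j).
def example1(m, n):
    A = np.full((m, m), -1.0 / m)
    i = np.arange(1, m + 1)
    np.fill_diagonal(A, i + i)                 # a_ii = i + i (1-based indices)
    B = np.zeros((m, n))
    B[np.arange(n), np.arange(n)] = 1.0        # b_ij = 1 for i = j
    f = np.zeros(m); f[0] = 1.0
    g = np.zeros(n); g[0] = 1.0
    return A, B, f, g

A, B, f, g = example1(20, 5)
# A is strictly diagonally dominant with positive diagonal and non-positive
# off-diagonal entries, hence a positive definite M-matrix; B has full column rank.
print(np.linalg.eigvalsh(A).min() > 0, np.linalg.matrix_rank(B))
```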


Table 1 Iterations (IT), CPU time (t) and relative error (ERR) for Example 1. Uzawa uses ω*; GPHSS uses (ω*, τ*); IGPHSS uses ω = 0.44, τ = 0.01.

| m | n | GS IT | GS t | GS ERR | Uzawa IT | Uzawa t | Uzawa ERR | GPHSS IT | GPHSS t | GPHSS ERR | IGPHSS IT | IGPHSS t | IGPHSS ERR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 50 | 50 | 2 | 0.00 | 0 | 350 | 0.04 | 9.8e-7 | 25 | 0.01 | 8.6e-7 | 12 | 0.00 | 6.3e-7 |
| 100 | 40 | 3 | 0.01 | 3.2e-9 | 280 | 0.11 | 9.9e-7 | 24 | 0.03 | 5.5e-7 | 12 | 0.01 | 2.7e-7 |
| 100 | 50 | 3 | 0.02 | 1.5e-9 | 350 | 0.12 | 9.8e-7 | 25 | 0.03 | 8.6e-7 | 12 | 0.02 | 3.9e-7 |
| 100 | 60 | 2 | 0.03 | 6.7e-7 | 420 | 0.12 | 9.7e-7 | 27 | 0.03 | 5.4e-7 | 13 | 0.03 | 3.8e-7 |
| 128 | 64 | 2 | 0.04 | 6.1e-7 | 447 | 0.25 | 1.0e-6 | 27 | 0.06 | 7.7e-7 | 13 | 0.04 | 4.8e-7 |
| 200 | 80 | 2 | 0.05 | 3.3e-7 | 559 | 0.69 | 9.8e-7 | 29 | 0.16 | 6.7e-7 | 14 | 0.05 | 6.3e-7 |


Fig. 2 The shape of the pivoted coefficient matrix E𝒜 for Example 1 (m = 20, n = 20; nz = 440 nonzero entries)

Example 2 Consider the (m + n) × (m + n) augmented system (2) with

$$A = (a_{ij})_{m \times m}: \quad a_{ij} = \begin{cases} -\frac{1}{2} - \frac{i}{2m}, & i < j, \\ a_{ji}, & i > j, \\ -\sum_{k \ne i} a_{ik} + 1 + \frac{i}{m}, & i = j, \end{cases} \qquad 1 \le i, j \le m,$$

$$B = (b_{ij})_{m \times n}: \quad b_{ij} = \begin{cases} 1/2, & i = j, \\ 0, & i \ne j, \end{cases} \qquad 1 \le i \le m,\ 1 \le j \le n,$$

f = (1, 0, ..., 0)ᵀ and g = (1, 0, ..., 0)ᵀ. In this example, the entries of A are varied as well, but B is chosen as half of that in Example 1. Similar to Example 1, the classical Jacobi, Gauss-Seidel and SOR methods cannot be applied directly to this problem due to the zero diagonal block.

It is noticed that A is a positive definite M-matrix according to [29], and B is obviously a full column rank matrix. However, as the augmented coefficient matrix 𝒜 is not positive definite, the Conjugate Gradient method cannot be used for this problem either. Therefore, the Uzawa (17), GPHSS (9) and IGPHSS (11) algorithms are used for this problem. After partial pivoting, the Gauss-Seidel method can also be implemented, and the Uzawa and GPHSS methods are implemented with the optimal parameters. For the IGPHSS method, ω = 0.33 and τ = 10⁻¹¹. For different values of m and n, numerical results are listed in Table 2 and the comparisons are discussed afterwards.

Similar to the previous case, the Gauss-Seidel method has very high precision when m = n. This is because the matrix E𝒜 has the fewest rows with more than one entry and is a lower triangular matrix, see Fig. 3. As n decreases, E𝒜 becomes denser (see Fig. 4) and more iterations are needed in the Gauss-Seidel process.

Table 2 shows that, for different values of m and n, the IGPHSS and GPHSS methods are superior to the Uzawa method, and the IGPHSS is better than the GPHSS. Both the iterations and CPU time are sensitive to m and n for the Gauss-Seidel and Uzawa methods.
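The Example 2 matrix can be assembled as follows; the exact off-diagonal formula −1/2 − i/(2m) is an assumed reading of the garbled definition, but the stated diagonal rule makes A strictly diagonally dominant (hence symmetric positive definite) regardless.

```python
import numpy as np

# Construction of the Example 2 matrices: symmetric off-diagonal part, and a
# diagonal exceeding the absolute off-diagonal row sum by 1 + i/m.
def example2(m, n):
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            A[i - 1, j - 1] = A[j - 1, i - 1] = -0.5 - i / (2 * m)
    for i in range(1, m + 1):
        off = A[i - 1].sum()                   # diagonal entry is still zero here
        A[i - 1, i - 1] = -off + 1 + i / m     # a_ii = -sum_{k != i} a_ik + 1 + i/m
    B = np.zeros((m, n))
    B[np.arange(n), np.arange(n)] = 0.5        # b_ij = 1/2 for i = j
    f = np.zeros(m); f[0] = 1.0
    g = np.zeros(n); g[0] = 1.0
    return A, B, f, g

A, B, f, g = example2(100, 30)
print(np.allclose(A, A.T), np.linalg.eigvalsh(A).min() > 0)
```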


Table 2 Iterations (IT), CPU time (t) and relative error (ERR) for Example 2. Uzawa uses ω*; GPHSS uses (ω*, τ*); IGPHSS uses ω = 0.33, τ = 10⁻¹¹.

| m | n | GS IT | GS t | GS ERR | Uzawa IT | Uzawa t | Uzawa ERR | GPHSS IT | GPHSS t | GPHSS ERR | IGPHSS IT | IGPHSS t | IGPHSS ERR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 30 | 31 | 0.03 | 6.6e-7 | 108 | 0.06 | 9.6e-7 | 18 | 0.02 | 5.0e-7 | 6 | 0.00 | 2.2e-7 |
| 100 | 100 | 2 | 0.00 | 0 | 338 | 0.12 | 9.9e-7 | 26 | 0.04 | 6.4e-7 | 6 | 0.00 | 2.2e-7 |
| 128 | 64 | 17 | 0.03 | 6.5e-7 | 227 | 0.12 | 9.7e-7 | 22 | 0.04 | 9.9e-7 | 6 | 0.01 | 2.8e-7 |
| 200 | 150 | 9 | 0.08 | 8.7e-7 | 527 | 0.62 | 9.7e-7 | 30 | 0.22 | 4.7e-7 | 6 | 0.03 | 4.4e-7 |
| 256 | 128 | 18 | 0.09 | 4.7e-7 | 452 | 1.15 | 9.9e-7 | 27 | 0.33 | 3.7e-7 | 6 | 0.03 | 5.6e-7 |
| 500 | 400 | 9 | 0.47 | 1.2e-7 | 1422 | 14.12 | 9.9e-7 | 39 | 2.76 | 5.8e-7 | 7 | 0.45 | 8.2e-9 |


Fig. 3 The shape of the pivoted coefficient matrix E𝒜 for Example 2 (m = 100, n = 100; nz = 10200 nonzero entries)

Uzawa needs almost four times the CPU time of GPHSS, and GPHSS needs much more time than IGPHSS as m and n grow.

Example 3 Consider the (m + n) × (m + n) augmented system (2) with

$$A = (a_{ij})_{m \times m}: \quad a_{ij} = \begin{cases} i + 1, & i = j, \\ -1, & |i - j| = 1, \\ 0, & \text{otherwise}, \end{cases} \qquad 1 \le i, j \le m,$$

$$B = (b_{ij})_{m \times n}: \quad b_{ij} = \begin{cases} j, & i = j + m - n, \\ 0, & \text{otherwise}, \end{cases}$$

f = (1, 0, ..., 0)ᵀ and g = (1, 0, ..., 0)ᵀ. Here the matrix A is chosen as a tridiagonal matrix, different from the situations in Examples 1 and 2.
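A short construction of the Example 3 matrices; each column of B has a single nonzero entry in a distinct row, so B trivially has full column rank.

```python
import numpy as np

# Construction of the Example 3 matrices: A tridiagonal with a_ii = i + 1 and
# -1 on the sub- and super-diagonals; B places the value j in row j + m - n.
def example3(m, n):
    i = np.arange(1, m + 1)
    A = (np.diag(i + 1.0)
         + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1))
    B = np.zeros((m, n))
    for j in range(1, n + 1):
        B[j + m - n - 1, j - 1] = j            # b_ij = j for i = j + m - n
    f = np.zeros(m); f[0] = 1.0
    g = np.zeros(n); g[0] = 1.0
    return A, B, f, g

A, B, f, g = example3(15, 15)
print(np.linalg.eigvalsh(A).min() > 0, np.linalg.matrix_rank(B) == 15)
```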

Fig. 4 The shape of the pivoted coefficient matrix E𝒜 for Example 2 (m = 100, n = 30; nz = 10060 nonzero entries)


Fig. 5 The shape of the pivoted coefficient matrix E𝒜 for Example 3 (m = 15, n = 15; nz = 73 nonzero entries)

Based on [29], A is a positive definite M-matrix and B is a full rank matrix. Since the Conjugate Gradient method is not suitable for this problem, the Uzawa (17), GPHSS (9) and IGPHSS (11) algorithms are applied to it. The Gauss-Seidel method was used as well, with partial pivoting. For the Uzawa and GPHSS methods, the optimal parameters ω∗ and τ∗ were used. For the IGPHSS method, ω = 1 and τ = τ∗ + 0.001, where τ∗ is the optimal parameter of the GPHSS method. Numerical results for different m and n are compared in Table 3, where τ+ = τ∗ + 0.001.
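The Gauss-Seidel comparison used here is the standard componentwise sweep; a minimal dense sketch is below (the relative-residual stopping test and its tolerance are assumptions for illustration; the paper's exact stopping criterion may differ):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-6, maxit=5000):
    """Standard Gauss-Seidel sweep: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii,
    reusing already-updated components within the same sweep.
    Requires nonzero diagonal entries; convergence is guaranteed, e.g., for
    symmetric positive definite or strictly diagonally dominant A."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, maxit + 1):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, maxit
```

The partial pivoting mentioned in the text is applied beforehand, precisely so that the diagonal entries a_ii this sweep divides by are nonzero.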

As shown in Figs. 5 and 6, when m = n, EA is also a lower triangular matrix; therefore, Gauss-Seidel could produce a high-precision solution. Since EA is a banded matrix with the same bandwidth as before, Gauss-Seidel needs the same number of iterations for the same precision at different m and n. In this example, IGPHSS is sensitive to the parameters; that is, for different m and n, a different τ is chosen for fast convergence.

Table 3 shows that the IGPHSS and GPHSS methods are far superior to the Uzawa method, and that IGPHSS is better than GPHSS. For this problem the Uzawa method is very sensitive to m and n in both iterations and CPU time, and it does not converge to the specified precision within the maximal iteration count k = 5000 in most cases, while the other three methods converge very fast.

Example 4 Consider the steady Stokes flow problem: find u and p such that

    μ∇²u − ∇p + f = 0,  in Ω,
    ∇ · u = 0,          in Ω,
    u = u₀,             on ∂Ω,        (19)

where Ω = (0, 1) × (0, 1) ⊂ R², ∂Ω is the boundary of Ω, u ∈ R² is the velocity vector, u₀ is the Dirichlet boundary condition for the fluid velocity, p is a scalar representing the pressure, and f is the fluid body force. Because the pressure is an auxiliary


Table 3 Iterations (IT), CPU time (t) and relative error (ERR) for Example 3

              Gauss-Seidel        Uzawa (ω∗)              GPHSS (ω∗, τ = τ∗)   IGPHSS (ω = 1, τ = τ+)
  m     n     IT  t     ERR       IT    t        ERR      IT  t      ERR       IT  t     ERR
  60    20    11  0.03  9.7e-7    2426  0.33     9.9e-7    8  0.00   1.8e-7     8  0.00  5.3e-7
  128   64    11  0.06  9.7e-7    5000  2.50     0.6      10  0.03   1.2e-7    10  0.03  7.0e-7
  256   128   11  0.12  9.7e-7    5000  13.51    39.0     10  0.12   2.7e-7     9  0.09  6.9e-7
  512   256   11  0.41  9.7e-7    5000  64.10    190.5    10  0.58   6.4e-7     7  0.30  3.2e-8
  1024  512   11  1.37  9.7e-7    5000  430.52   476.4    11  4.00   1.3e-7    10  2.96  3.5e-7
  1500  1000  11  5.94  9.7e-7    5000  1086.70  487.2    13  13.12  3.0e-7     7  6.51  3.0e-7


Table 4 Iterations (IT), CPU time (t) and relative error (ERR) for Example 4

              Gauss-Seidel           Uzawa (ω∗)          GPHSS (ω∗, τ = τ∗)   IGPHSS (ω = 0.5, τ = 10⁻⁴)
  m     n     IT    t       ERR      IT    t    ERR      IT  t     ERR        IT  t    ERR
  800   400   1629  42.3    NaN      593   0.5  9.7e-7   39  0.6   7.8e-7     19  0.3  3.3e-7
  1800  900   1612  494.5   NaN      1336  2.8  1.0e-6   48  4.7   9.4e-7     19  1.9  7.7e-7
  3200  1600  1599  1520.5  NaN      2400  9.5  9.9e-7   56  25.8  9.4e-7     20  8.6  4.6e-7


Fig. 6 The shape of the pivoted coefficient matrix EA for Example 3 (m = 15, n = 10; nz = 63 is the number of nonzero entries)

variable that imposes the incompressibility condition, i.e., the divergence-free velocity field, the resulting system is of saddle point type. After applying a discretization method to (19), a system of linear equations (2) can be obtained [5], in which

    A = [ I ⊗ T + T ⊗ I        0
          0                    I ⊗ T + T ⊗ I ] ∈ R^(2l² × 2l²),

    B = [ I ⊗ F
          F ⊗ I ] ∈ R^(2l² × l²),

and

    T = (1/h²) · tridiag(−1, 2, −1) ∈ R^(l×l),    F = (1/h) · tridiag(−1, 4, 0) ∈ R^(l×l),

with I being the identity matrix, ⊗ the Kronecker product, h = 1/(l + 1) the discretization mesh size, and tridiag(a, b, c) the tridiagonal matrix with a, b, c as the sub-diagonal, main-diagonal, and super-diagonal entries, respectively.
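These Kronecker-product blocks can be formed verbatim from the definitions; a dense NumPy sketch for small l follows (a production code would use sparse storage, e.g. scipy.sparse.kron; the helper names are illustrative):

```python
import numpy as np

def stokes_blocks(l):
    """Assemble A in R^(2l^2 x 2l^2) and B in R^(2l^2 x l^2) of Example 4,
    with h = 1/(l + 1), T = tridiag(-1, 2, -1)/h^2, F = tridiag(-1, 4, 0)/h."""
    h = 1.0 / (l + 1)

    def tridiag(a, b, c):
        # a, b, c: sub-, main- and super-diagonal entries, respectively
        return (a * np.diag(np.ones(l - 1), -1)
                + b * np.eye(l)
                + c * np.diag(np.ones(l - 1), 1))

    T = tridiag(-1.0, 2.0, -1.0) / h**2
    F = tridiag(-1.0, 4.0, 0.0) / h
    K = np.kron(np.eye(l), T) + np.kron(T, np.eye(l))   # I⊗T + T⊗I
    Z = np.zeros_like(K)
    A = np.block([[K, Z], [Z, K]])                      # block-diagonal A
    B = np.vstack([np.kron(np.eye(l), F),               # [I⊗F; F⊗I]
                   np.kron(F, np.eye(l))])
    return A, B
```

The block-diagonal structure of A mirrors the two velocity components sharing the same discrete Laplacian I⊗T + T⊗I.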

In the numerical experiment, the dynamic viscosity was scaled to be μ = 1, and the right-hand-side vector b was chosen such that the exact solution of the system of linear equations (2) is (1, 1, · · · , 1)^T ∈ R^(3l²). As before, classical methods cannot be applied directly.

It can be shown that A is a positive definite M-matrix and B is a full column rank matrix; however, the augmented coefficient matrix A is not positive definite, so the Conjugate Gradient method cannot be used. Thus, the Uzawa (17), GPHSS (9) and IGPHSS (11) algorithms are applied to this problem separately. Partial pivoting was used with the Gauss-Seidel method, and the optimal parameters ω∗ and τ∗ were used in implementing the Uzawa and GPHSS methods. For the IGPHSS method, ω = 0.5 and τ = 0.0001. For three sets of values of m and n, numerical results are given in Table 4 in comparison with the Gauss-Seidel, Uzawa and GPHSS methods, where m = 2l², n = l².


Fig. 7 The shape of the pivoted coefficient matrix EA for Example 4 (m = 800, n = 400; nz = 6960 is the number of nonzero entries)

For the Stokes flow problem, as shown in Fig. 7, the pivoted matrix EA is not a lower triangular matrix, so it is hard to pivot it into a diagonally dominant matrix. Therefore, for this problem, the Gauss-Seidel method does not converge.
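This non-convergence can be confirmed numerically: Gauss-Seidel converges for a given matrix if and only if the spectral radius of its iteration matrix (D − L)⁻¹U is below one, where A = D − L − U splits A into its diagonal, strictly lower, and strictly upper parts. A small dense sketch of the check:

```python
import numpy as np

def gs_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix (D - L)^{-1} U.
    Here np.tril(A) is D - L and -np.triu(A, 1) is U; Gauss-Seidel converges
    for A if and only if the returned value is strictly less than 1."""
    G = np.linalg.solve(np.tril(A), -np.triu(A, 1))
    return max(abs(np.linalg.eigvals(G)))
```

Symmetric positive definiteness and strict diagonal dominance are merely convenient sufficient conditions that force this radius below one; the pivoted Stokes matrix satisfies neither.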

Table 4 shows that, for the Stokes problem, the Uzawa method needs less CPU time than the GPHSS method although it needs more iterations. The IGPHSS method needs fewer iterations and less CPU time than the other two methods, and its iteration count increases slowly as m and n increase.

    6 Conclusion

In this paper, an improvement is presented on the GPHSS method suggested by Pan and Wang [22] for solving augmented systems, referred to as the IGPHSS method. Specifically, the improvement is made by adding a matrix to both sides of the first equation of the GPHSS iterative scheme, which decreases the iterations and the CPU time. Numerical experiments show that the improved method is better than the Uzawa and GPHSS methods even when they are implemented with their optimal parameters, while the relaxation parameters of IGPHSS are chosen based on the optimal parameter of the GPHSS method. In these examples, the IGPHSS method performed well, whereas the GPHSS method is only better than the Uzawa method for the first three problems. For the Stokes problem, the Uzawa method needs less CPU time than the GPHSS method in spite of more iterations. Therefore, the IGPHSS method is more robust than the other two methods. The IGPHSS method is also compared with the Gauss-Seidel method by pivoting the system to one with nonzero diagonal entries and the same solution. Results suggest that even though the Gauss-Seidel method converges as well as or even better than the IGPHSS method for some simple systems, it does not converge for the real Stokes problem because the pivoted coefficient matrix is neither symmetric positive definite nor


diagonally dominant, which are sufficient convergence conditions for the Gauss-Seidel method [31].

Since the optimal parameters were not used for the IGPHSS method in the numerical experiments, it is reasonable to anticipate that the IGPHSS method with optimal parameters would be much better than the other three methods. Therefore, finding the optimal parameters is one direction for future work.

Acknowledgments The authors would like to thank the referees for their valuable comments, which helped to improve the manuscript. The research was supported by the National Natural Science Foundation of China (11301330), the Shanghai College Teachers Visiting Abroad for Advanced Study Program (B.60-A101-12-010) and the grant "The First-class Discipline of Universities in Shanghai". This research was also supported by the National Science Foundation (grants DMS-1115546 and DMS-1318988). The computational resources were provided by XSEDE (funded by National Science Foundation grant ACI-1053575).

    References

1. Bai, Z.-Z.: Optimal parameters in the HSS-like methods for saddle-point problems. Numer. Linear Algebra Appl. 16, 447–479 (2009)

2. Bai, Z.-Z., Golub, G.H.: Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 27, 1–23 (2007)

3. Bai, Z.-Z., Golub, G.H., Li, C.-K.: Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J. Sci. Comput. 28, 583–603 (2006)

4. Bai, Z.-Z., Golub, G.H., Ng, M.K.: Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 24, 603–626 (2003)

5. Bai, Z.-Z., Golub, G.H., Pan, J.-Y.: Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semi-definite linear systems. Numer. Math. 98, 1–32 (2004)

6. Bai, Z.-Z., Parlett, B.N., Wang, Z.-Q.: On generalized successive overrelaxation methods for augmented linear systems. Numer. Math. 102, 1–38 (2005)

7. Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)

8. Braess, D.: Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics. Cambridge University Press, Cambridge (2001)

9. Bramble, J.H., Pasciak, J.E., Vassilev, A.T.: Analysis of the inexact Uzawa algorithm for saddle point problems. SIAM J. Numer. Anal. 34, 1072–1092 (1997)

10. Cao, Y., Lin, Y., Wei, Y.: Nonlinear Uzawa methods for solving nonsymmetric saddle point problems. J. Appl. Math. Comput. 21, 1–21 (2006)

11. Fan, H.-T., Zheng, B.: A preconditioned GLHSS iteration method for non-Hermitian singular saddle point problems. Comput. Math. Appl. 67, 614–626 (2014)

12. Golub, G.H., Wu, X., Yuan, J.-Y.: SOR-like methods for augmented systems. BIT 41, 71–85 (2001)

13. Gould, N.I.M., Scott, J.A.: A numerical evaluation of HSL packages for the direct solution of large sparse, symmetric linear systems of equations. ACM Trans. Math. Software 30, 300–325 (2004)

14. Gresho, P.M., Sani, R.L.: Incompressible Flow and the Finite Element Method, Volume 1, Advection-Diffusion and Isothermal Laminar Flow. Wiley, Chichester (2000)

15. Hall, E.L.: Computer Image Processing and Recognition. Academic Press, New York (1979)

16. He, J., Huang, T.-Z.: Two augmentation preconditioners for nonsymmetric and indefinite saddle point linear systems with singular (1, 1) blocks. Comput. Math. Appl. 62, 87–92 (2011)

17. Jin, X.-Q.: M-preconditioner for M-matrices. Appl. Math. Comput. 172, 701–707 (2006)

18. Li, C., Li, B., Evans, D.J.: A generalized successive overrelaxation method for least squares problems. BIT 38, 347–355 (1998)

19. Li, C., Li, Z., Evans, D.J., Zhang, T.: A note on an SOR-like method for augmented systems. IMA J. Numer. Anal. 23, 581–592 (2003)

20. Li, J., Kong, X.: Optimal parameters of GSOR-like methods for solving the augmented linear systems. Appl. Math. Comput. 204, 150–161 (2008)

21. Miao, S.-X., Wang, K.: On generalized stationary iterative method for solving the saddle point problems. J. Appl. Math. Comput. 35, 459–468 (2011)

22. Pan, C., Wang, H.: On generalized preconditioned Hermitian and skew-Hermitian splitting methods for saddle point problems (in Chinese). J. Numer. Methods Comput. Appl. 32, 174–182 (2011)

23. Perugia, I., Simoncini, V., Arioli, M.: Linear algebra methods in a mixed approximation of magnetostatic problems. SIAM J. Sci. Comput. 21, 1085–1101 (1999)

24. Quarteroni, A., Valli, A.: Numerical Approximation of Partial Differential Equations. Springer-Verlag, Berlin (1994)

25. Schenk, O., Gärtner, K.: On fast factorization pivoting methods for sparse symmetric indefinite systems. Electron. Trans. Numer. Anal. 23, 158–179 (2006)

26. Shao, X., Li, Z., Li, C.: Modified SOR-like method for the augmented system. Int. J. Comput. Math. 84, 1653–1662 (2007)

27. Shao, X., Shen, H., Li, C., Zhang, T.: Generalized AOR method for augmented systems (in Chinese). J. Numer. Methods Comput. Appl. 27, 241–248 (2006)

28. Shen, H.-L., Shao, X.-H., Zhang, T., Li, C.-J.: Modified SOR-like method for solution to saddle point problem (in Chinese). J. Northeast. Univ. Nat. Sci. 30, 905–908 (2009)

29. Simons, G., Yao, Y.-C.: Approximating the inverse of a symmetric positive definite matrix. Linear Algebra Appl. 281, 97–103 (1998)

30. Wright, M.H.: Interior method for constrained optimization. Acta Numer. 1, 341–407 (1992)

31. Young, D.M.: Iterative Solutions of Large Linear Systems. Academic Press, New York (1971)

32. Yun, J.H.: Variants of the Uzawa method for saddle point problem. Comput. Math. Appl. 65, 1037–1046 (2013)

33. Zhang, G.-F., Lu, Q.-H.: On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 219, 51–58 (2008)
