
    Lecture notes in numerical linear algebra

    QR algorithm

§2 QR algorithm

We saw in the previous lectures that a Schur factorization of a matrix A ∈ C^{n×n} directly gives us the eigenvalues. More precisely, if we can compute P and U such that

    A = P U P^*,

where P^* P = I and U is upper triangular, then the eigenvalues of A are given by the diagonal elements of U. The QR method, developed by Francis in the 1960s [4], is classified as one of the top-ten developments in computation in the 20th century. Its performance and robustness are still actively improved within the numerical linear algebra research community.

We will now introduce the QR-method, which is sometimes called the QR-algorithm or Francis's QR-steps [4]. The goal of the method is to compute a Schur factorization by means of similarity transformations. The total complexity of the algorithm is essentially O(n^3), which can only be achieved in practice after several improvements are appropriately taken into account. Most of this chapter is devoted to improving the basic QR-method. The fundamental results for the convergence are based on connections with the power method and simultaneous iteration and will be covered later in the course.

Although the QR-method can be successfully adapted to arbitrary complex matrices, we will here for brevity concentrate the discussion on the case where the matrix has only real eigenvalues.

§2.1 Basic variant of QR-method

As the name suggests, the QR-method is tightly coupled with the QR-factorization. Consider for the moment a QR-factorization of the matrix A,

    A = QR

where Q^* Q = I and R is upper triangular. We will now reverse the order of multiplication of Q and R and eliminate R,

    RQ = Q^* A Q.    (2.1)

Since Q^* A Q is a similarity transformation of A, RQ has the same eigenvalues as A. More importantly, we will later see that by repeating this process, the matrix RQ will become closer and closer to

Lecture notes - Elias Jarlebring - Autumn 2014

    version:2014-11-28


upper triangular, such that we eventually can read off the eigenvalues from the diagonal. That is, the QR-method generates a sequence of matrices A_k, initiated with A_0 = A and given by

    A_k = R_k Q_k,

where Q_k and R_k represent a QR-factorization of A_{k-1},

    A_{k-1} = Q_k R_k.

(Idea of the basic QR-method: compute a QR-factorization and reverse the order of multiplication of Q and R.)

Algorithm 1: Basic QR algorithm
Input: A matrix A ∈ C^{n×n}
Output: Matrices U and T such that A = U T U^*.
  Set A_0 = A and U_0 = I
  for k = 1, 2, ... do
    Compute QR-factorization: A_{k-1} = Q_k R_k
    Set A_k = R_k Q_k
    Set U_k = U_{k-1} Q_k
  end for
  Return T = A_k and U = U_k
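The basic QR-method is short enough to sketch directly. The following is a minimal NumPy version of Algorithm 1; the function name basic_qr and the fixed step count are our choices (a practical code would use a convergence test), and NumPy is used here in place of the notes' MATLAB:

```python
import numpy as np

def basic_qr(A, num_steps=500):
    # Basic QR method: factor A_{k-1} = Q_k R_k, then set A_k = R_k Q_k.
    # Returns T (approximately upper triangular) and U with A = U T U^T.
    T = A.copy()
    U = np.eye(A.shape[0])
    for _ in range(num_steps):
        Q, R = np.linalg.qr(T)   # QR-factorization of A_{k-1}
        T = R @ Q                # A_k = R_k Q_k = Q_k^T A_{k-1} Q_k
        U = U @ Q                # accumulate U_k = U_{k-1} Q_k
    return T, U
```

Since every A_k is similar to A, the diagonal of T approaches the eigenvalues of A.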

Although the complete understanding of convergence of the QR-method requires theory developed in later parts of this course, it is instructive to already now formulate the character of the convergence. Let us assume the eigenvalues are distinct in modulus and ordered as |λ_1| > |λ_2| > ... > |λ_n|. Under certain assumptions, the elements of the matrix A_k below the diagonal will converge to zero according to

    |a_{ij}^{(k)}| = O(|λ_i / λ_j|^k)  for all i > j.    (2.2)

    Example (basic QR-method)

We conduct a simple MATLAB experiment to illustrate the convergence. To that end we construct a matrix with eigenvalues 1, 2, ..., 10 and run the basic QR-method.

D = diag(1:10);
rand('seed', 0);
S = rand(10); S = (S-0.5)*2;
A = S*D*inv(S);
for i = 1:100
    [Q, R] = qr(A);
    A = R*Q;
end
norm(sortrows(diag(A)) - sortrows(diag(D)))



In the last step, this program returns 5·10^{-5}, suggesting that we do find reasonably accurate eigenvalues. This is confirmed by the fact that the matrix A does indeed approach an upper triangular matrix, which can be seen in the following illustration of A_k.

[Figure: magnitude of the elements of A_k for k = 1, 3, 5, ..., 21, showing the entries below the diagonal decaying to zero.]

Moreover, the illustration suggests that A_k approaches a triangular matrix where the diagonal elements are ordered by (descending) magnitude. Since the diagonal elements of a triangular matrix are the eigenvalues, our observation here is a_{11}^{(k)} → λ_1 = 10, a_{22}^{(k)} → λ_2 = 9, ..., a_{10,10}^{(k)} → λ_{10} = 1 (which we will be able to show later in this course).

We can also observe the convergence claimed in (2.2) by considering the quotient of the elements below the diagonal of two consecutive iterations. Let A_20 and A_21 be the matrices generated after k = 20 and k = 21 iterations. In MATLAB notation we have

>> A21./A20

    ans =

    1.0002 0.9313 1.0704 0.9854 1.0126 0.9929 0.9952 0.9967 1.2077 -1.0087

    0.9118 1.0003 0.9742 1.0231 0.8935 1.0534 1.0614 1.0406 1.0082 -0.9828

    0.8095 0.8917 1.0003 2.7911 1.4191 0.9689 0.9999 0.9508 0.9941 -1.0260

    0.7005 0.7718 0.8676 0.9992 0.9918 1.0509 0.2971 1.0331 0.8996 -0.2654

    0.5959 0.6746 0.7411 0.8436 1.0001 1.0022 0.9901 1.0007 0.9650 -1.0036

    0.5013 0.5602 0.6303 0.7224 0.8309 0.9997 1.0349 0.9901 0.9993 -1.0113

    0.4005 0.4475 0.4993 0.5904 0.6669 0.7908 1.0000 1.0035 1.0022 -0.9996

    0.3007 0.3344 0.3738 0.4355 0.5002 -1.9469 0.7460 0.9998 1.0006 -1.0007

0.2002 0.2227 0.2493 0.2899 0.3332 0.4044 0.4994 0.6660 1.0003 -0.9994

-0.1001 -0.1119 -0.1259 -0.1426 -0.1669 -0.1978 -0.2500 -0.3332 -0.5000 1.0000

The elements below the diagonal in the output are consistent with (2.2) in the sense that the (i, j) element of the output is

    a_{i,j}^{(21)} / a_{i,j}^{(20)} ≈ λ_i / λ_j,  i ≥ j + 1.

Downsides with the basic QR-method

Although the basic QR-method in general converges to a Schur factorization when k → ∞, it is not recommended in practice. The basic QR-method is often slow, in the sense that the number of iterations required to reach convergence is in general high. It is in general also expensive, in the sense that the computational effort required per step is high.

Disadvantage 1. One step of the basic QR-method is relatively expensive. More precisely,

    the complexity of one step of the basic QR-method = O(n^3).

Hence, even in an overly optimistic situation where the number of steps would be proportional to n, the algorithm would need O(n^4) operations.


Disadvantage 2. Usually, many steps are required to reach convergence; certainly many more than n. In fact, the basic QR-method can be arbitrarily slow if the eigenvalues are close to each other. It is often slow in practice.

§2.2 The two-phase approach

The disadvantages of the basic QR-method suggest that several improvements are required to reach a competitive algorithm. The Hessenberg matrix structure will be crucial for these improvements.

Definition 2.2.1 A matrix H ∈ C^{n×n} is called a Hessenberg matrix if its elements below the lower off-diagonal are zero,

    h_{i,j} = 0  when  i > j + 1.

Structure of a Hessenberg matrix:

    H = [ × × × × ]
        [ × × × × ]
        [ 0 × × × ]
        [ 0 0 × × ]

The matrix H is called an unreduced Hessenberg matrix if additionally h_{i+1,i} ≠ 0 for all i = 1, ..., n-1.

Our improved QR-method will be an algorithm consisting of two phases, illustrated as follows:

    full matrix A → (Phase 1) → Hessenberg matrix H → (Phase 2) → upper triangular T

Phase 1. (Section 2.2.1.) In the first phase we will compute a Hessenberg matrix H (and orthogonal U) such that

    A = U H U^*.

Unlike the Schur factorization (A = U T U^* where T is upper triangular), such a reduction can be done with a finite number of operations.

Phase 2. (Section 2.2.2.) In the second phase we will apply the basic QR-method to the matrix H. It turns out that several improvements can be made when applying the basic QR-method to a Hessenberg matrix, such that the complexity of one step is O(n^2), instead of O(n^3) in the basic version.

2.2.1 Phase 1. Hessenberg reduction

In the following it will be advantageous to use the concept of Householder reflectors.


Definition 2.2.2 A matrix P ∈ C^{n×n} of the form

    P = I - 2uu^*,  where u ∈ C^n and ‖u‖ = 1,

is called a Householder reflector.

[Figure 2.1: A Householder reflector is a reflector in the sense that the multiplication of P with a vector gives the mirror image with respect to the (hyper)plane perpendicular to u.]

Some properties of Householder reflectors:

• A Householder reflector is always Hermitian: P^* = P.

• Since P^2 = I, a Householder reflector is always orthogonal and P^{-1} = P^* = P.

• If u is given, the corresponding Householder reflector can be multiplied by a vector in O(n) operations:

    Px = x - 2u(u^* x).    (2.3)

Householder reflectors satisfying Px = αe_1

We will now explicitly solve this problem: Given a vector x, construct a Householder reflector such that Px is a multiple of the first unit vector. This tool will be used several times throughout this chapter.

Lemma 2.2.3 Suppose x ∈ R^m \ {0} and let ρ = ±1 and α = ‖x‖. Let

    u = (x - ρα e_1) / ‖x - ρα e_1‖ = z / ‖z‖,    (2.4)

where

    z = x - ρα e_1 = [x_1 - ρα, x_2, ..., x_m]^T.

(The choice of u in (2.4) is the normal vector of a plane such that the mirror image of x is in the direction of the first unit vector.)

Then, the matrix P = I - 2uu^T is a Householder reflector and

    Px = ρα e_1.

Proof The matrix P is a Householder reflector since u is normalized. From the definition of z and α we have z^T z = (x - ρα e_1)^T (x - ρα e_1) = 2(α^2 - ρα x_1). Similarly, z^T x = α^2 - ρα x_1. Hence

    uu^T x = (z/‖z‖)(z^T/‖z‖) x = (z^T x / z^T z) z = (1/2) z,

and (I - 2uu^T) x = x - z = ρα e_1, which is the conclusion of the lemma.

In principle, ρ can be chosen freely (provided |ρ| = 1). The choice ρ = -sign(x_1) is often better from the perspective of round-off errors. With this specific choice of ρ, Lemma 2.2.3 also holds in complex arithmetic.
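As a concrete illustration, Lemma 2.2.3 with the choice ρ = -sign(x_1) can be sketched in NumPy as follows (the helper names are ours, not from the notes):

```python
import numpy as np

def householder_vector(x):
    # u from (2.4) with rho = -sign(x_1), so (I - 2 u u^T) x = rho*alpha*e_1
    alpha = np.linalg.norm(x)
    rho = -np.sign(x[0]) if x[0] != 0 else 1.0
    z = x.astype(float).copy()
    z[0] -= rho * alpha          # z = x - rho*alpha*e_1
    return z / np.linalg.norm(z)

def apply_reflector(u, x):
    # P x = x - 2 u (u^T x), computed in O(n) as in (2.3)
    return x - 2.0 * u * (u @ x)
```

For x = [3, 4, 0]^T we get ρ = -1, α = 5 and Px = [-5, 0, 0]^T, a multiple of e_1 as the lemma promises.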

    Lecture notes - Elias Jarlebring - Autumn 20145

    version:2014-11-28

  • 7/23/2019 qrmethod (2)

    6/13

    Lecture notes in numerical linear algebra

    QR algorithm

    Repeated application of Householder reflectors

By carrying out n - 2 orthogonality transformations with (cleverly constructed) Householder reflectors we will now be able to reduce the matrix A to a Hessenberg matrix.

In the first step we will carry out a similarity transformation with a Householder reflector P_1, given as follows:

    P_1 = [ 1   0^T             ]
          [ 0   I - 2 u_1 u_1^T ]    (2.5)

(Note: I - 2 u_1 u_1^T ∈ C^{(n-1)×(n-1)} is the Householder reflector associated with u_1 ∈ C^{n-1}, and P_1 ∈ C^{n×n} in (2.5) is the Householder reflector associated with [0, u_1^T]^T ∈ C^n.)

The vector u_1 will be constructed from elements of the matrix A. More precisely, it will be given by (2.4) with x^T = [a_{21}, ..., a_{n1}], such that the associated Householder reflector satisfies

    (I - 2 u_1 u_1^T) [a_{21}, ..., a_{n1}]^T = ρα e_1.

Hence, multiplying A from the left with P_1 inserts zeros in the desired positions in the first column,

    P_1 A = [ × × ⋯ × ]
            [ × × ⋯ × ]
            [ 0 × ⋯ × ]
            [ ⋮ ⋮     ⋮ ]
            [ 0 × ⋯ × ]

In order to have a similarity transformation, we must also multiply from the right with P_1. Recall that P_1^* = P_1 since Householder reflectors are Hermitian. Because of the non-zero structure of P_1, the zero structure of the first column is unchanged, and we have a matrix P_1 A P_1^* = P_1 A P_1 which is similar to A and has the desired zero-entries. (In the first step of the Hessenberg reduction, a similarity transformation is thus constructed such that P_1 A P_1 has the desired Hessenberg zero-structure in the first column.)

The process can be repeated, and in the second step we set

    P_2 = [ 1  0  0^T             ]
          [ 0  1  0^T             ]
          [ 0  0  I - 2 u_2 u_2^T ]

where u_2 is constructed from the n - 2 last elements of the second column of P_1 A P_1.


Multiplying P_1 A P_1 from the left with P_2 and from the right with P_2 gives the matrix P_2 P_1 A P_1 P_2, which has the desired zero structure also in the second column.

After n - 2 steps we have completed the Hessenberg reduction, since

    P_{n-2} P_{n-3} ⋯ P_1 A P_1 P_2 ⋯ P_{n-2} = H,

where H is a Hessenberg matrix. Note that U = P_1 P_2 ⋯ P_{n-2} is orthogonal and A = U H U^*, so A and H have the same eigenvalues.

The complete algorithm is provided in Algorithm 2. In the algorithm we do not explicitly compute the orthogonal matrix U, since it is not required unless eigenvectors are also to be computed. In every step of the algorithm we need O(n^2) operations, and consequently

    the total complexity of the Hessenberg reduction = O(n^3).

(The Hessenberg reduction in Algorithm 2 is implemented by overwriting the elements of A. In this way less memory is required.)

Algorithm 2: Reduction to Hessenberg form
Input: A matrix A ∈ C^{n×n}
Output: A Hessenberg matrix H such that H = U^* A U.
  for k = 1, ..., n - 2 do
    Compute u_k using (2.4) where x^T = [a_{k+1,k}, ..., a_{n,k}]
    Compute P_k A:    A_{k+1:n, k:n} = A_{k+1:n, k:n} - 2 u_k (u_k^* A_{k+1:n, k:n})
    Compute P_k A P_k:    A_{1:n, k+1:n} = A_{1:n, k+1:n} - 2 (A_{1:n, k+1:n} u_k) u_k^*
  end for
  Let H be the Hessenberg part of A.

    2.2.2 Phase 2. Hessenberg QR-method

In the second phase we will apply the QR-method to the output of the first phase, which is a Hessenberg matrix. Considerable improvements of the basic QR-method will be possible because of the structure.

    Let us start with a simple example:

    >> A=[1 2 3 4; 4 4 4 4;0 1 -1 1; 0 0 2 3]

    A =

    1 2 3 4

    4 4 4 4

    0 1 -1 1

    0 0 2 3


    >> [Q,R]=qr(A);

    >> A=R*Q

    A =

    5.2353 -5.3554 -2.5617 -4.2629

    -1.3517 0.2193 1.4042 2.4549

    0 2.0757 0.6759 3.6178

    0 0 0.8325 0.8696

The code corresponds to one step of the basic QR-method applied to a Hessenberg matrix. Note that the result is also a Hessenberg matrix. This observation is true in general. (The proof is postponed.) A basic QR-step preserves the Hessenberg structure; hence, the basic QR-method also preserves the Hessenberg structure.

Theorem 2.2.4 If the basic QR-method (Algorithm 1) is applied to a Hessenberg matrix, then all iterates A_k are Hessenberg matrices.

We will now explicitly characterize a QR-step for Hessenberg matrices, which can be used in all subsequent QR-steps. Fortunately, the QR-decomposition of a Hessenberg matrix has a particular structure which can indeed be characterized and exploited. To this end we use the concept of Givens rotations.

[Figure 2.2: A Givens rotation G = G(i, j, cos(θ), sin(θ)) is a rotation in the sense that its application to a vector corresponds to a rotation with angle θ in the plane spanned by e_i and e_j.]

Definition 2.2.5 The matrix G(i, j, c, s) ∈ R^{n×n} corresponding to a Givens rotation is defined by

    G(i, j, c, s) = I + (c - 1) e_i e_i^T - s e_i e_j^T + s e_j e_i^T + (c - 1) e_j e_j^T,    (2.6)

i.e., the identity matrix except for rows and columns i and j, where it acts as the 2×2 block

    [ c  -s ]
    [ s   c ].

We note some properties of Givens rotations:

• G(i, j, c, s) is an orthogonal matrix.

• G(i, j, c, s)^T = G(i, j, c, -s).

• The operation G(i, j, c, s) x can be carried out by only modifying two elements of x:

    G(i, j, c, s) [x_1, ..., x_{i-1}, x_i, x_{i+1}, ..., x_{j-1}, x_j, x_{j+1}, ..., x_n]^T
      = [x_1, ..., x_{i-1}, c x_i - s x_j, x_{i+1}, ..., x_{j-1}, s x_i + c x_j, x_{j+1}, ..., x_n]^T.    (2.7)
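In code, property (2.7) means a Givens rotation can be applied without ever forming the n×n matrix; a small NumPy sketch (the function name apply_givens is ours):

```python
import numpy as np

def apply_givens(c, s, x, i, j):
    # y = G(i, j, c, s) x as in (2.7): only entries i and j change
    y = x.astype(float).copy()
    y[i] = c * x[i] - s * x[j]
    y[j] = s * x[i] + c * x[j]
    return y
```

Since G is orthogonal, the norm of the vector is unchanged, and all entries other than i and j are untouched.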

The QR-factorization of Hessenberg matrices can now be explicitly expressed and computed with Givens rotations as follows.


Theorem 2.2.6 Suppose A ∈ C^{n×n} is a Hessenberg matrix. Let H_i be generated as follows:

    H_1 = A
    H_{i+1} = G_i^T H_i,  i = 1, ..., n - 1,

where G_i = G(i, i+1, (H_i)_{i,i}/r_i, (H_i)_{i+1,i}/r_i) and r_i = sqrt((H_i)_{i,i}^2 + (H_i)_{i+1,i}^2), and we assume r_i ≠ 0. Then, H_n is upper triangular and

    A = (G_1 G_2 ⋯ G_{n-1}) H_n = QR

is a QR-factorization of A. (Theorem 2.2.6 implies that the Q-matrix in a QR-factorization of a Hessenberg matrix can be factorized as a product of n - 1 Givens rotations.)

Proof It will be shown that the matrix H_i is a Hessenberg matrix where the first i - 1 lower off-diagonal elements are zero. The proof is done by induction. The start of the induction for i = 1 is trivial. Suppose H_i is a Hessenberg matrix with (H_i)_{k+1,k} = 0 for k = 1, ..., i - 1. Note that the application of G_i^T only modifies the ith and (i+1)st rows. Hence, it is sufficient to show that (H_{i+1})_{i+1,i} = 0. This can be done by inserting the formula for G in (2.6) and using the definitions c_i = (H_i)_{i,i}/r_i and s_i = (H_i)_{i+1,i}/r_i:

    (H_{i+1})_{i+1,i} = e_{i+1}^T G_i^T H_i e_i
      = e_{i+1}^T [I + (c_i - 1) e_i e_i^T + (c_i - 1) e_{i+1} e_{i+1}^T - s_i e_{i+1} e_i^T + s_i e_i e_{i+1}^T] H_i e_i
      = (H_i)_{i+1,i} + (c_i - 1)(H_i)_{i+1,i} - s_i (H_i)_{i,i}
      = c_i (H_i)_{i+1,i} - s_i (H_i)_{i,i} = 0.

By induction we have shown that H_n is a triangular matrix, and H_n = G_{n-1}^T G_{n-2}^T ⋯ G_1^T A and G_1 ⋯ G_{n-1} H_n = A.
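Theorem 2.2.6 yields an O(n^2) procedure for the QR-factorization of a Hessenberg matrix; here is a NumPy sketch (the function name is ours) that stores Q only as the rotation pairs (c_i, s_i):

```python
import numpy as np

def hessenberg_qr_factorize(A):
    # QR-factorization of a Hessenberg matrix A per Theorem 2.2.6:
    # returns the rotations (c_i, s_i) and R = H_n, with Q = G_1 ... G_{n-1}.
    R = A.astype(float).copy()
    n = R.shape[0]
    rotations = []
    for i in range(n - 1):
        r = np.hypot(R[i, i], R[i+1, i])         # r_i, assumed nonzero
        c, s = R[i, i] / r, R[i+1, i] / r
        # H_{i+1} = G_i^T H_i: only rows i, i+1 (columns i..n) change
        Ri, Ri1 = R[i, i:].copy(), R[i+1, i:].copy()
        R[i, i:]   = c * Ri + s * Ri1
        R[i+1, i:] = -s * Ri + c * Ri1          # zeroes the (i+1, i) entry
        rotations.append((c, s))
    return rotations, R
```

Each rotation touches only two rows, so the total cost is O(n^2) instead of the O(n^3) of a dense QR-factorization.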

(Idea of the Hessenberg-structure exploitation: use the factorization of the Q-matrix as a product of Givens rotations in order to compute RQ with fewer operations.)

The theorem gives us an explicit form for the Q matrix. The theorem also suggests an algorithm to compute a QR-factorization by applying the Givens rotations and reaching R = H_n. Since the application of a Givens rotation can be done in O(n) operations, we can compute the QR-factorization of a Hessenberg matrix in O(n^2) operations with this algorithm. In order to carry out a QR-step, we now reverse the multiplication of Q and R, which leads to

    A_{k+1} = RQ = Q^* A_k Q = H_n G_1 ⋯ G_{n-1}.

The application of G_1 ⋯ G_{n-1} can also be done in O(n^2) operations, and consequently

    the complexity of one Hessenberg QR step = O(n^2).

The algorithm is summarized in Algorithm 3. In the algorithm, [c,s] = givens(a,b) returns c = a/sqrt(a^2 + b^2) and s = b/sqrt(a^2 + b^2).


Algorithm 3: Hessenberg QR algorithm
Input: A Hessenberg matrix A ∈ C^{n×n}
Output: Upper triangular T such that A = U T U^* for an orthogonal matrix U.
  Set A_0 = A
  for k = 1, 2, ... do
    // One Hessenberg QR step
    H = A_{k-1}
    for i = 1, ..., n - 1 do
      [c_i, s_i] = givens(h_{i,i}, h_{i+1,i})
      H_{i:i+1, i:n} = [ c_i  s_i ; -s_i  c_i ] H_{i:i+1, i:n}
    end for
    for i = 1, ..., n - 1 do
      H_{1:i+1, i:i+1} = H_{1:i+1, i:i+1} [ c_i  -s_i ; s_i  c_i ]
    end for
    A_k = H
  end for
  Return T = A_k
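One step of Algorithm 3 can be sketched as follows in NumPy (the function name is ours; the sketch assumes an unreduced Hessenberg matrix so that each r is nonzero):

```python
import numpy as np

def hessenberg_qr_step(H):
    # One Hessenberg QR step: H -> R Q via n-1 Givens rotations, O(n^2).
    # The result is again a Hessenberg matrix, orthogonally similar to H.
    H = H.astype(float).copy()
    n = H.shape[0]
    cs = []
    for i in range(n - 1):            # H := G_{n-1}^T ... G_1^T H = R
        r = np.hypot(H[i, i], H[i+1, i])
        c, s = H[i, i] / r, H[i+1, i] / r
        rows = H[i:i+2, i:].copy()
        H[i,   i:] = c * rows[0] + s * rows[1]
        H[i+1, i:] = -s * rows[0] + c * rows[1]
        cs.append((c, s))
    for i, (c, s) in enumerate(cs):   # H := R G_1 ... G_{n-1} = R Q
        cols = H[:i+2, i:i+2].copy()  # only rows 1..i+1 are nonzero here
        H[:i+2, i]   = c * cols[:, 0] + s * cols[:, 1]
        H[:i+2, i+1] = -s * cols[:, 0] + c * cols[:, 1]
    return H
```

Both passes touch only two rows or two columns per rotation, which is where the O(n^2) complexity of the Hessenberg QR step comes from.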

§2.3 Acceleration with shifts and deflation

In the previous section we saw that the QR-method for computing the Schur form of a matrix A can be executed more efficiently if the matrix A is first transformed to Hessenberg form.

The next step in order to achieve a competitive algorithm is to improve the convergence speed of the QR-method. We will now see that the convergence can be dramatically improved by considering a QR-step applied to the matrix formed by subtracting a multiple of the identity matrix. This type of acceleration is called shifting.

First note the following result for singular Hessenberg matrices. (The R-matrix in the QR-decomposition of a singular unreduced Hessenberg matrix is upper triangular with last diagonal entry equal to zero.)

Lemma 2.3.1 Suppose H ∈ C^{n×n} is an irreducible Hessenberg matrix. Let QR = H be a QR-decomposition of H. If H is singular, then the last diagonal element of R is zero,

    r_{n,n} = 0.

As a justification for the shifting procedure, suppose for the moment that σ is an eigenvalue of the irreducible Hessenberg matrix H. We will now characterize the consequence of shifting H, applying one step of the QR-method and subsequently reversing the shift:

    H - σI = QR    (2.8a)
    H̄ = RQ + σI    (2.8b)


By rearranging the equations, we find that

    H̄ = RQ + σI = Q^*(H - σI)Q + σI = Q^* H Q.

Hence, similar to the basic QR-method, one shifted QR step (2.8) also corresponds to a similarity transformation. They do however correspond to different similarity transformations, since the Q-matrix is generated from the QR-factorization of H - σI instead of H.

The result of (2.8) has more structure. (The shifted QR-step (2.8), where σ is an eigenvalue of H, generates a reduced Hessenberg matrix whose last row is [0, ..., 0, σ].)

Lemma 2.3.2 Suppose σ is an eigenvalue of the Hessenberg matrix H. Let H̄ be the result of one shifted QR-step (2.8). Then,

    h̄_{n,n-1} = 0
    h̄_{n,n} = σ.

Proof The matrix H - σI is singular since σ is an eigenvalue of H. From Lemma 2.3.1 we have that r_{n,n} = 0, so the last row of R, and therefore the last row of RQ, is zero. Hence, from (2.8b), h̄_{n,n} = σ and h̄_{n,n-1} = 0.

Since H̄ has the eigenvalue σ, we conclude that the variant of the QR-step involving shifting in (2.8) generates an exact eigenvalue after one step.
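Lemma 2.3.2 is easy to verify numerically: one shifted QR step (2.8) with a true eigenvalue as shift deflates the last row at once. The 3×3 matrix below is our own illustration, not from the notes:

```python
import numpy as np

# An unreduced (tridiagonal, hence Hessenberg) matrix for the experiment
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
sigma = np.sort(np.linalg.eigvalsh(H))[0]    # pick a true eigenvalue as shift
Q, R = np.linalg.qr(H - sigma * np.eye(3))   # (2.8a)
H1 = R @ Q + sigma * np.eye(3)               # (2.8b)
# Per Lemma 2.3.2: H1[2,1] vanishes and H1[2,2] equals sigma
```

Up to round-off, the last subdiagonal entry of H1 is zero and the (n,n) entry is the shift, so the eigenvalue can be deflated after a single step.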

    Rayleigh quotient shifts

The shifted QR-method can, at least in theory, be made to give an exact eigenvalue after only one step. Unfortunately, we cannot choose perfect shifts as in (2.8), since we obviously do not have the eigenvalues of the matrix available. During the QR-method we do however have estimates of the eigenvalues. In particular, if the off-diagonal elements of H are sufficiently small, the eigenvalues may be estimated by the diagonal elements. In the heuristic called the Rayleigh quotient shift we select the last diagonal element of H as a shift,

    σ_k = h_{n,n}^{(k-1)}.

(The name Rayleigh quotient shift comes from the fact that there is a connection with the Rayleigh quotient iteration.)

The shifted algorithm is presented in Algorithm 4. The steps involving a QR-factorization can also be executed with Givens rotations as in Section 2.2.2. The algorithm features a type of deflation; instead of carrying out QR-steps on n × n matrices, once n - m eigenvalues have converged we iterate with a matrix of size m × m. (Deflation here essentially means that once an eigenvalue is detected to be of sufficient accuracy, the iteration is continued with a smaller matrix from which the converged eigenvalue has, in a sense, been removed.)

§2.4 Further reading

The variant of the QR-method that we presented here can be improved considerably in various ways. Many of the improvements


Algorithm 4: Hessenberg QR algorithm with Rayleigh quotient shift and deflation
Input: A Hessenberg matrix A ∈ C^{n×n}
  Set H^{(0)} = A
  for m = n, ..., 2 do
    k = 0
    repeat
      k = k + 1
      σ_k = h_{m,m}^{(k-1)}
      H^{(k-1)} - σ_k I = Q_k R_k
      H^{(k)} = R_k Q_k + σ_k I
    until h_{m,m-1}^{(k)} is sufficiently small
    Save h_{m,m}^{(k)} as a converged eigenvalue
    Set H^{(0)} = H^{(k)}_{1:(m-1), 1:(m-1)} ∈ C^{(m-1)×(m-1)}
  end for

are already carefully implemented in high performance computing software such as LAPACK [1]. If the assumption that the eigenvalues are real is removed, several issues must be addressed. If A is real, the basic QR-method will in general only converge to a real Schur form, where R is only block triangular. In the acceleration with shifting, it is advantageous to use double shifts that preserve the complex conjugate structure. The speed can be improved by introducing deflation at an early stage; see aggressive early deflation (cite-kressner). Various aspects of the QR-method are given more attention in the textbooks. In [3] the QR-method is presented including further discussion of connections with the LR-method and numerical stability. Some of the presentation above was inspired by the online material of a course at ETH Zürich [2]. The book of Watkins [5] has a recent comprehensive discussion of the QR-method, with a presentation which does not involve the basic QR-method.

§2.5 Bibliography

[1] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, Philadelphia, PA, third edition, 1999.

[2] P. Arbenz. The course 252-0504-00G, "Numerical Methods for Solving Large Scale Eigenvalue Problems", ETH Zürich. Online material, 2014.


[3] G. Dahlquist and Å. Björck. Numerical Methods in Scientific Computing, Volume II. Springer, 2014.

[4] J. Francis. The QR transformation: a unitary analogue to the LR transformation, parts I and II. Comput. J., 4:265-271, 332-345, 1961.

[5] D. S. Watkins. Fundamentals of Matrix Computations. 3rd ed. Wiley, 2010.
