Lecture 10: Matrix operations, Singular value decomposition, Eigensystems



  • Lecture 10

    Matrix operations

    Singular value decomposition

    Eigensystems

  • 2

    Summary ODE

    • ordinary differential equations: to solve numerically, write them as sets of first-order differential equations.

      – Adaptive stepsize: Runge-Kutta formalism

        ● can cope with difficult regions: small steps around regions where the derivatives are large

        ● error estimate from comparison to a higher-order formula

      – Bulirsch-Stoer (the analogue of Romberg for integration): uses the Euler-MacLaurin summation rule to cancel errors to powers of h^(2k)

        ● good for smooth functions (the number of necessary steps is reduced by about 8 orders of magnitude compared to a normal first-order approximation)

    • Boundary values: complicated. Simplest case: one-point boundary conditions.

      – Multiple-point boundary conditions: shooting and relaxation

      – Finite Element Analysis packages

  • 3

    Summary ODE

    ● Exponentially growing contributions (stiff equations): implicit differencing!

    ● In Numerical Recipes:

      – Stepper module odeint: deals with precision control and evolves the equations in time

      – Struct derivs: describes the differential equations

      – Stepper routines (Runge-Kutta, Bulirsch-Stoer, and implicit differencing (Bader-Deuflhard)): perform single steps

      – The user should provide boundary conditions and derivatives.

      – Example with pendulum/spring given.

  • 4

    solving coupled equations (ch.2)

    coupled equations (M equations, N unknowns):

      a_11 x_1 + a_12 x_2 + … + a_1N x_N = b_1
      a_21 x_1 + a_22 x_2 + … + a_2N x_N = b_2
      ⋮
      a_M1 x_1 + a_M2 x_2 + … + a_MN x_N = b_M

    matrix form: A·x = b, with

      A = (a_11 … a_1N ; a_21 … a_2N ; … ; a_M1 … a_MN),  x = (x_1, …, x_N)ᵀ,  b = (b_1, …, b_M)ᵀ

  • 5

    coupled equations

    • linearly dependent combinations of equations: row degeneracy

    • all equations contain certain variables only in the same combination: column degeneracy

    • M=N: a unique solution may be possible

      – numerically unstable when combinations are almost degenerate: round-off errors may swamp the correct solution

      – degenerate: singular matrix (determinant = 0)

      – for M=N: row degeneracy implies column degeneracy

    • M<N: underdetermined; either no solutions or subspaces of solutions (typically of dimension N−M)

      – singular value decomposition

  • 6

    Gauss-Jordan elimination

    • pedagogical only (LU decomposition is faster)

    • x: solutions for the right-hand sides b; Y: inverse of A

    • interchanging two rows of A and the same rows of 1 and b does not change the solutions x and Y (same equations in a different order)

    • linear combinations of rows also yield the same results

    • interchange of columns of A gives the same result if the corresponding rows of x and Y are also changed (the results are scrambled)

      (a₀₀ a₀₁ a₀₂)   (x₀ │ y₀₀ y₀₁ y₀₂)   (b₀ │ 1 0 0)
      (a₁₀ a₁₁ a₁₂) · (x₁ │ y₁₀ y₁₁ y₁₂) = (b₁ │ 0 1 0)
      (a₂₀ a₂₁ a₂₂)   (x₂ │ y₂₀ y₂₁ y₂₂)   (b₂ │ 0 0 1)

  • 7

    Gauss-Jordan elimination

    • No pivoting: only linear combinations of rows are used. Divide the first row by a₀₀ and eliminate all aᵢ₀ elements by subtracting multiples of the first row. Move to the next column, divide the second row by a₁₁ and eliminate all aᵢ₁ elements by subtracting the right amount of the second row. Repeat until the last column → x contains the solutions, Y the inverse.

    • Diagonal element: the pivot. If it is zero, the method fails. Gauss-Jordan elimination is numerically unstable without pivoting. Pivoting: interchanging rows (partial pivoting) or rows and columns (full pivoting) to get a favorable pivot in place.

      – DON'T write your own matrix inversion routines! Unless you understand this issue.

    • Typical choice: pick the largest element as pivot. Implicit pivoting: normalize all rows to have their largest element equal to 1, then pick the largest pivot.
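    The elimination-with-partial-pivoting procedure above can be sketched as follows. This is a minimal illustration of the solve-only case (no in-place inverse, no implicit pivoting, unlike NR's gaussj); the function name gauss_jordan and its interface are our own:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Gauss-Jordan elimination with partial pivoting: reduces A to the
// identity while applying the same operations to b, so b ends up
// holding the solution x of A*x = b. Returns false for an (exactly)
// singular matrix; near-singular cases still suffer round-off.
bool gauss_jordan(std::vector<std::vector<double>>& a, std::vector<double>& b) {
    int n = a.size();
    for (int col = 0; col < n; ++col) {
        // partial pivoting: pick the largest remaining element in this column
        int piv = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[piv][col])) piv = r;
        if (a[piv][col] == 0.0) return false;        // singular matrix
        std::swap(a[piv], a[col]);                   // swap rows of A ...
        std::swap(b[piv], b[col]);                   // ... and of b
        // normalize the pivot row so the pivot becomes 1
        double pivval = a[col][col];
        for (double& v : a[col]) v /= pivval;
        b[col] /= pivval;
        // eliminate this column from all other rows
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = a[r][col];
            for (int c = 0; c < n; ++c) a[r][c] -= f * a[col][c];
            b[r] -= f * b[col];
        }
    }
    return true;
}
```

    After the call, a has been destroyed (reduced to the identity) and b contains x, which mirrors the in-place philosophy of the NR routines.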

  • 8

    LU-decomposition

    • matrix A as a product of a lower and an upper triangular matrix, L·U:

      (l₀₀ 0   0  )   (u₀₀ u₀₁ u₀₂)   (a₀₀ a₀₁ a₀₂)
      (l₁₀ l₁₁ 0  ) · (0   u₁₁ u₁₂) = (a₁₀ a₁₁ a₁₂)
      (l₂₀ l₂₁ l₂₂)   (0   0   u₂₂)   (a₂₀ a₂₁ a₂₂)

    • ludcmp: in-place decomposition of the matrix; 3 times faster than Gauss-Jordan elimination. The diagonal elements of L are defined as 1, since the decomposition is only fixed up to a normalization (L/c and cU is also a valid decomposition).

    • Routine ludcmp is used in many other NR routines.

  • 9

    LU-decomposition (ludcmp.h)

    • solution of the equations: A·x = (L·U)·x = L·(U·x) = b, so first solve L·y = b, then U·x = y

    • solution of the triangular sets by forward and back substitution:

      (1   0   0)   (u₀₀ u₀₁ u₀₂)   (a₀₀ a₀₁ a₀₂)
      (l₁₀ 1   0) · (0   u₁₁ u₁₂) = (a₁₀ a₁₁ a₁₂)
      (l₂₀ l₂₁ 1)   (0   0   u₂₂)   (a₂₀ a₂₁ a₂₂)

      y_0 = b_0/l_00 ;  y_i = (1/l_ii) [b_i − Σ_{j=0}^{i−1} l_ij y_j]

      x_{N−1} = y_{N−1}/u_{N−1,N−1} ;  x_i = (1/u_ii) [y_i − Σ_{j=i+1}^{N−1} u_ij x_j]
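    The two substitution formulas translate directly into code. A minimal sketch (the function names forward_subst and back_subst are our own, not NR's, and the triangular factors are stored as full matrices for clarity):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Forward substitution: solve L*y = b for lower-triangular L
// (nonzero diagonal), sweeping from the first row down.
std::vector<double> forward_subst(const std::vector<std::vector<double>>& L,
                                  const std::vector<double>& b) {
    int n = b.size();
    std::vector<double> y(n);
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        for (int j = 0; j < i; ++j) s -= L[i][j] * y[j];  // subtract known terms
        y[i] = s / L[i][i];
    }
    return y;
}

// Back substitution: solve U*x = y for upper-triangular U,
// sweeping from the last row up.
std::vector<double> back_subst(const std::vector<std::vector<double>>& U,
                               const std::vector<double>& y) {
    int n = y.size();
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = y[i];
        for (int j = i + 1; j < n; ++j) s -= U[i][j] * x[j];
        x[i] = s / U[i][i];
    }
    return x;
}
```

    Each pass costs O(N²) operations, which is why the expensive O(N³) decomposition is worth reusing for many right-hand sides.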

  • 10

    LU-decomposition:

    • N² equations for the N²+N unknown elements of L and U (the diagonal appears twice); so set the diagonal elements of L equal to 1. Then for each column j:

      – solve for the u_ij (each a sum over at most N elements)

      – then solve for the l_ij (each a sum over at most N elements)

    • Writing out l_i0 u_0j + l_i1 u_1j + … = a_ij for i ≤ j and i > j gives

      u_ij = a_ij − Σ_{k=0}^{i−1} l_ik u_kj                (i ≤ j)

      l_ij = (1/u_jj) [a_ij − Σ_{k=0}^{j−1} l_ik u_kj]     (i > j)

    • pivoting: the division by u_jj is done only after calculation of all candidate l_ij elements, so the largest one can be chosen as pivot.

    • LU-decomposition: (1/3)N³ steps (3 times better than Gauss-Jordan). Solving for the inverse: N³ steps (same as Gauss-Jordan).
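    The two formulas above can be transcribed directly. This sketch uses the same convention (l_ii = 1) but omits the pivoting that NR's ludcmp adds, so it can fail on matrices that need row interchanges; the function name lu_decompose is our own:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// LU decomposition with unit diagonal on L, column by column:
// u_ij = a_ij - sum_k l_ik u_kj           (i <= j)
// l_ij = (a_ij - sum_k l_ik u_kj) / u_jj  (i > j)
// No pivoting: u_jj must not vanish.
void lu_decompose(const std::vector<std::vector<double>>& a,
                  std::vector<std::vector<double>>& L,
                  std::vector<std::vector<double>>& U) {
    int n = a.size();
    L.assign(n, std::vector<double>(n, 0.0));
    U.assign(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) L[i][i] = 1.0;   // l_ii = 1
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i <= j; ++i) {           // u_ij, on and above diagonal
            double s = a[i][j];
            for (int k = 0; k < i; ++k) s -= L[i][k] * U[k][j];
            U[i][j] = s;
        }
        for (int i = j + 1; i < n; ++i) {        // l_ij, below the diagonal
            double s = a[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * U[k][j];
            L[i][j] = s / U[j][j];               // the division by u_jj
        }
    }
}
```

    Note that by the time u_jj is needed as a divisor, it has already been computed in the same column sweep, which is what makes the in-place storage of NR's routine possible.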

  • 11

    LU-decomposition

    • advantages:

      – the solution for different right-hand sides can be obtained after decomposing the matrix once

      – the matrix can be decomposed in place – no extra storage needed

      – for solving 1 equation, it is 3 times faster than Gauss-Jordan elimination

    • For a tridiagonal system of equations:

      – LU decomposition takes only O(N) steps (routine tridag)

      – forward and backward substitution also take only O(N) steps (used in spline, Householder, ...)
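    The O(N) tridiagonal solve can be sketched with the Thomas algorithm, which is the idea behind tridag (this is an illustration, not the NR routine itself, and it assumes no pivoting is needed, e.g. a diagonally dominant matrix):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Solve a tridiagonal system in O(N):
//   a: sub-diagonal   (a[0] unused), size n
//   b: diagonal,                      size n
//   c: super-diagonal,                size n-1
//   r: right-hand side,               size n
// Arguments are taken by value because the sweep overwrites them.
std::vector<double> solve_tridiag(std::vector<double> a, std::vector<double> b,
                                  std::vector<double> c, std::vector<double> r) {
    int n = b.size();
    // forward sweep: eliminate the sub-diagonal (the "LU" step)
    for (int i = 1; i < n; ++i) {
        double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        r[i] -= m * r[i - 1];
    }
    // back substitution
    std::vector<double> x(n);
    x[n - 1] = r[n - 1] / b[n - 1];
    for (int i = n - 2; i >= 0; --i)
        x[i] = (r[i] - c[i] * x[i + 1]) / b[i];
    return x;
}
```

    Only the three bands are stored, so both memory and work are linear in N instead of the O(N²)/O(N³) of the dense routines.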

  • 12

    Eigensystems

    • An N×N matrix A has eigenvectors x and eigenvalues λ, for which A·x = λx. This implies: det|A − λ1| = 0.

    • Expanding the determinant gives an N-th order polynomial, whose roots are the N eigenvalues (not necessarily distinct).

      – eigenvalues may be shifted by an amount τ by adding τ·1 to the matrix, since (A + τ·1)·x = (λ + τ)·x. E.g. eigenenergies can be offset by an arbitrary amount.

    • Matrix types:

      symmetric:   Aᵀ = A   (a_ij = a_ji)
      Hermitian:   A† = A   (A† = (A*)ᵀ)
      orthogonal:  Aᵀ = A⁻¹  (A·Aᵀ = 1)
      unitary:     A† = A⁻¹
      normal:      A·A† = A†·A

  • 13

    Eigensystems

    • Hermitian: eigenvalues are all real.

    • normal: orthogonal set of eigenvectors (spanning the full space, rank N).

    • Left and right eigenvectors:

      right: A·x⁽ᴿ⁾ = λ x⁽ᴿ⁾ (column vectors);  left: x⁽ᴸ⁾·A = λ x⁽ᴸ⁾ (row vectors)

      Left eigenvectors are the transposes of the right eigenvectors of Aᵀ. Collecting the right eigenvectors as columns of Xᴿ and the left ones as rows of Xᴸ:

      A·Xᴿ = Xᴿ·diag(λ₀, …, λ_{N−1}) ;  Xᴸ·A = diag(λ₀, …, λ_{N−1})·Xᴸ

      and with suitable normalization Xᴸ = (Xᴿ)⁻¹.

    • left and right eigenvalues: the same.

  • 14

    Eigensystems

    • hence we get

      (Xᴿ)⁻¹·A·Xᴿ = diag(λ₀, …, λ_{N−1})

    • A similarity transformation A → Z⁻¹·A·Z leaves the eigenvalues intact. Any matrix with a complete set of eigenvectors can be diagonalized by similarity transformations; the resulting transformation matrix contains the eigenvectors. If these are real and orthogonal, Z is an orthogonal matrix (the transpose is the inverse).

  • 15

    Eigensystems

    • Strategy: diagonalize (or nearly diagonalize) matrix A by a sequence of similarity transformations:

      A → P₁⁻¹·A·P₁ → P₂⁻¹·P₁⁻¹·A·P₁·P₂ → …  ,  Xᴿ = P₁·P₂·⋯·Pₙ

    • When only the eigenvalues are needed, it is enough to bring the matrix to triangular form.

    • eigensystem packages: in IMSL, NAG

    • two techniques: diagonalize element by element (Jacobi) or column by column (Householder), or use factorization methods.

  • 16

    Jacobi transformation

  • 17

    Jacobi transformation

    • matrix P_pq only changes rows p and q; the full transform P_pqᵀ·A·P_pq also only changes columns p and q. With c = cos φ, s = sin φ:

      a'_rp = c·a_rp − s·a_rq                  (r ≠ p, r ≠ q)
      a'_rq = c·a_rq + s·a_rp                  (r ≠ p, r ≠ q)
      a'_pp = c²·a_pp + s²·a_qq − 2sc·a_pq
      a'_qq = s²·a_pp + c²·a_qq + 2sc·a_pq
      a'_pq = (c² − s²)·a_pq + sc·(a_pp − a_qq)

    • Zero off-diagonal elements: set the pq-element to zero by choosing the angle φ such that a'_pq = 0. Convergence: the sum of squares of the off-diagonal elements decreases with every rotation.

  • 18

    Jacobi transformation

    • Ultimately, one can diagonalize A to machine accuracy.

    • Cyclic Jacobi transformation: simply annihilate element by element, N(N−1)/2 Jacobi transformations per sweep; each transformation is of order N. Convergence is quadratic.

    • Looks cumbersome, but is numerically quite accurate.
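    A bare-bones cyclic Jacobi diagonalization for a small symmetric matrix is sketched below. It picks the rotation angle via atan2 rather than NR's numerically safer t = s/c formulation, and it omits the thresholding and eigenvector accumulation of NR's jacobi routine; the function name is our own:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Cyclic Jacobi sweeps on a symmetric matrix: repeatedly rotate in the
// (p,q) plane so that a[p][q] is annihilated. On return, the diagonal
// of a holds the eigenvalues (off-diagonals driven to ~0).
void jacobi_eigenvalues(std::vector<std::vector<double>>& a, int sweeps = 50) {
    int n = a.size();
    for (int s = 0; s < sweeps; ++s) {
        double off = 0.0;                         // off-diagonal norm^2
        for (int p = 0; p < n; ++p)
            for (int q = p + 1; q < n; ++q) off += a[p][q] * a[p][q];
        if (off < 1e-24) return;                  // effectively diagonal
        for (int p = 0; p < n; ++p) {
            for (int q = p + 1; q < n; ++q) {
                if (a[p][q] == 0.0) continue;
                // angle that zeroes a[p][q]: tan(2*phi) = 2 a_pq / (a_qq - a_pp)
                double phi = 0.5 * std::atan2(2.0 * a[p][q], a[q][q] - a[p][p]);
                double c = std::cos(phi), sn = std::sin(phi);
                for (int r = 0; r < n; ++r) {     // update columns p and q
                    double arp = a[r][p], arq = a[r][q];
                    a[r][p] = c * arp - sn * arq;
                    a[r][q] = sn * arp + c * arq;
                }
                for (int r = 0; r < n; ++r) {     // update rows p and q
                    double apr = a[p][r], aqr = a[q][r];
                    a[p][r] = c * apr - sn * aqr;
                    a[q][r] = sn * apr + c * aqr;
                }
            }
        }
    }
}
```

    Each (p,q) rotation is an orthogonal similarity transformation, so the eigenvalues are preserved while the off-diagonal mass is pushed onto the diagonal.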

  • 19

    Householder method

    • symmetric matrices → tridiagonal form.

    • Householder method: N−2 transformations needed.

    • Householder matrix: P = 1 − 2w·wᵀ, with w a real vector, |w|² = 1.

    • choose x as (part of) the first column of matrix A, and

      P = 1 − u·uᵀ/H ;  H = |u|²/2 ;  u = x ∓ |x|·e₀

      P·x = ±|x|·e₀

    • the Householder matrix reduces all elements of a given vector except the first one to zero.
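    The reflection can be applied as P·v = v − u·(u·v)/H without ever forming the matrix P. A sketch, using the sign convention u = x + sign(x₀)·|x|·e₀ to avoid cancellation (so that P·x = −sign(x₀)·|x|·e₀); the function name is our own:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build the Householder vector u for x and apply the reflection
// P = 1 - u u^T / H  (H = |u|^2 / 2) to an arbitrary vector v.
// Applying it to x itself zeroes every component after the first.
std::vector<double> householder_apply(const std::vector<double>& x,
                                      const std::vector<double>& v) {
    int n = x.size();
    double norm = 0.0;
    for (double xi : x) norm += xi * xi;
    norm = std::sqrt(norm);                       // |x|
    std::vector<double> u = x;
    u[0] += (x[0] >= 0 ? norm : -norm);           // u = x + sign(x0) |x| e0
    double H = 0.0;
    for (double ui : u) H += ui * ui;
    H *= 0.5;                                     // H = |u|^2 / 2
    double dot = 0.0;                             // u . v
    for (int i = 0; i < n; ++i) dot += u[i] * v[i];
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) out[i] = v[i] - u[i] * dot / H;  // P v
    return out;
}
```

    Applying the reflection this way costs O(N) per vector instead of the O(N²) of an explicit matrix-vector product, which is what makes the full tridiagonalization O(N³) rather than O(N⁴).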

  • 20

    Householder transformation

    • choose the vector x to be the lower (n−1) elements of the first column of A, and embed the (n−1)-dimensional Householder matrix ⁽ⁿ⁻¹⁾P in block form:

      P₁ = (1  0ᵀ ; 0  ⁽ⁿ⁻¹⁾P)

    • with k equal to plus or minus the magnitude of the vector (a₁₀, …, a_{n−1,0}), the first column of P₁·A becomes (a₀₀, k, 0, …, 0)ᵀ.

    • From this we obtain (using P₁ᵀ = P₁): A' = P₁·A·P₁, whose first row and first column are both (a₀₀, k, 0, …, 0) – the first step towards tridiagonal form.

  • 21

    Householder transformation

    • Next, choose the bottom n−2 elements of the second column for the vector x; the second Householder matrix then becomes, in block form, P₂ = (1₂  0ᵀ ; 0  ⁽ⁿ⁻²⁾P).

    • The identity block in the first 2 columns/rows ensures that the tridiagonal result of the previous step is not spoiled. The second transform adds the next row/column of zeros.

  • 22

    Householder transformation

    • Routine in Numerical Recipes:

      – actually starts at column n. The sign in u = x ∓ |x|·e₀ is chosen to minimize round-off error. The most accurate result is obtained when the matrix A is permuted such that the largest elements are in the bottom-right corner and the smallest elements in the top-left corner.

    • Tridiagonal eigenvalues and eigenvectors:

      – obtained with the QL algorithm – a sequence of orthogonal (Jacobi-like) similarity transformations

      – cubic convergence (due to shifting)

      – eigenvalues: typically ~20·N² steps; eigenvectors: O(N³)

  • 23

    NR3 Householder (+ Jacobi)

    • Real symmetric case: Householder and Jacobi methods are in eigen_sym.h

    • Example:

      #include "nr3.h"
      #include "eigen_sym.h"
      MatDoub a(10,10);   // make a 10x10 matrix to be analyzed
      ...                 // fill the matrix elements
      Symmeig eigen(a);   // create Symmeig object eigen; the matrix a will be symmetrized
      // Eigenvalues are in eigen.d, off-diagonal elements in eigen.e,
      // the full matrix of eigenvectors in eigen.z
      for (int i=0;i

  • 24

    Hermitian matrices

    • Complex analogues of the Householder routines given in Numerical Recipes can be found for Hermitian matrices. As an alternative, one can convert the Hermitian problem into a real, symmetric one of twice the size:

      C = A + iB ;  (A + iB)·(u + iv) = λ·(u + iv)

      [A  −B]   [u]     [u]
      [B   A] · [v] = λ·[v]

    • Hermitian matrices typically arise when solving the Schrödinger equation (matrix elements ⟨i|O|j⟩).
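    The real embedding above is easy to construct explicitly; for a Hermitian C the resulting 2n×2n matrix is real symmetric, so the routines of the previous slide apply (each eigenvalue then appears twice). A sketch, with our own function name:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Embed a complex Hermitian matrix C = A + iB into the real 2n x 2n
// matrix [[A, -B], [B, A]]. Hermiticity of C (A symmetric, B
// antisymmetric) makes the result symmetric.
std::vector<std::vector<double>>
embed_hermitian(const std::vector<std::vector<std::complex<double>>>& C) {
    int n = C.size();
    std::vector<std::vector<double>> M(2 * n, std::vector<double>(2 * n, 0.0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double A = C[i][j].real(), B = C[i][j].imag();
            M[i][j]         = A;   // top-left:      A
            M[i][j + n]     = -B;  // top-right:    -B
            M[i + n][j]     = B;   // bottom-left:   B
            M[i + n][j + n] = A;   // bottom-right:  A
        }
    return M;
}
```

    The price is a factor of a few in work (the matrix is twice as large), which is why the dedicated complex Householder routines can still be preferable.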
