Special Matrices

unit 4 special matrices - Stanford University
web.stanford.edu/class/cs205l/assets/unit_4_special_matrices.pdf


Page 1:

Special Matrices

Page 2:

(Strict) Diagonal Dominance

• The magnitude of each diagonal element is (either):
  • strictly larger than the sum of the magnitudes of all the other elements in its row, or
  • strictly larger than the sum of the magnitudes of all the other elements in its column

• One may row/column scale and permute rows/columns to achieve diagonal dominance (since it's just a rewriting of the equations)
• Recall: choosing the form of the equations wisely is important

• E.g. consider
    [ 3  -2 ] [ x1 ]   [ 9 ]
    [ 5   1 ] [ x2 ] = [ 4 ]
• Switch rows:
    [ 5   1 ] [ x1 ]   [ 4 ]
    [ 3  -2 ] [ x2 ] = [ 9 ]
  and column scale (the second column by -2):
    [ 5  -2 ] [   x1   ]   [ 4 ]
    [ 3   4 ] [ -.5 x2 ] = [ 9 ]
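The row swap and column scaling above can be checked numerically. A minimal pure-Python sketch (the helper name is ours, not from the slides):

```python
def is_strictly_row_diagonally_dominant(a):
    """Check |a[i][i]| > sum of |a[i][j]| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(a)
    )

# Original arrangement: second row fails (|1| < |5|).
print(is_strictly_row_diagonally_dominant([[3, -2], [5, 1]]))   # False
# After swapping rows and scaling the second column by -2:
print(is_strictly_row_diagonally_dominant([[5, -2], [3, 4]]))   # True
```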

Page 3:

(Strict) Diagonal Dominance

• Strictly diagonally dominant (square) matrices are guaranteed to be non-singular
• Since det A = det A^T, either row or column diagonal dominance is enough

• Column diagonal dominance guarantees that pivoting is not required during LU factorization
• However, pivoting still improves robustness
• E.g. consider
    [  4   3 ]
    [ -2  50 ]
  where 50 is more desirable than 4 for the pivot a11
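The non-singularity guarantee can be spot-checked on the slide's example. A minimal sketch (the 2x2 determinant helper is ours):

```python
def det2(a):
    """Determinant of a 2x2 matrix stored as nested lists."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

# Column diagonally dominant: |4| > |-2| and |50| > |3|,
# so the theorem guarantees a nonzero determinant.
a = [[4, 3], [-2, 50]]
print(det2(a))  # 206
```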

Page 4:

Inner Product

• Consider the space of all vectors with length n
• The dot/inner product of two vectors is x · y = Σ_i x_i y_i
• The magnitude of a vector is ||y||_2 = sqrt(y · y) (≥ 0)
• Alternative notations: <x, y> = x · y = x^T y

• Weighted inner product defined via an n×n matrix A: <x, y>_A = x · Ay = x^T A y
• Since <y, x>_A = y^T A x = x^T A^T y, weighted inner products commute when A is symmetric
• The standard dot product uses identity matrix weighting: <x, y> = <x, y>_I
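The commutation claim is easy to verify for a small symmetric weight matrix. A minimal pure-Python sketch (the function name and example values are ours):

```python
def weighted_inner(x, y, a):
    """<x, y>_A = x^T A y for small dense matrices stored as nested lists."""
    ay = [sum(a[i][j] * y[j] for j in range(len(y))) for i in range(len(y))]
    return sum(x[i] * ay[i] for i in range(len(x)))

a_sym = [[2, 1], [1, 3]]          # symmetric weight matrix
x, y = [1.0, 2.0], [3.0, -1.0]
print(weighted_inner(x, y, a_sym))  # 5.0
print(weighted_inner(y, x, a_sym))  # 5.0 -- commutes because a_sym is symmetric
```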

Page 5:

Definiteness

• Assume A is symmetric so that <x, y>_A = <y, x>_A

• A is positive definite if and only if <x, x>_A = x^T A x > 0 for all x ≠ 0
• A is positive semi-definite if and only if <x, x>_A = x^T A x ≥ 0 for all x ≠ 0
• We abbreviate with SPD and SP(S)D

• A is negative definite if and only if <x, x>_A = x^T A x < 0 for all x ≠ 0
• A is negative semi-definite if and only if <x, x>_A = x^T A x ≤ 0 for all x ≠ 0
• If A is negative (semi-)definite, then -A is positive (semi-)definite (and vice versa)
• Thus, one can convert such problems to SPD or SP(S)D problems instead

• A is considered indefinite when it is neither positive nor negative semi-definite

Page 6:

Eigenvalues

• SPD matrices have all eigenvalues > 0
• SP(S)D matrices have all eigenvalues ≥ 0

• Symmetric negative definite matrices have all eigenvalues < 0
• Symmetric negative semi-definite matrices have all eigenvalues ≤ 0

• Indefinite matrices have both positive and negative eigenvalues
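For a symmetric 2x2 matrix the eigenvalues have a closed form, so the sign-based classification above can be checked directly. A minimal sketch (the helper name is ours):

```python
import math

def eigvals_sym2(a):
    """Eigenvalues of a symmetric 2x2 matrix [[p, q], [q, r]] in closed form."""
    p, q, r = a[0][0], a[0][1], a[1][1]
    mean = (p + r) / 2.0
    radius = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
    return mean - radius, mean + radius

print(eigvals_sym2([[2, 0], [0, 3]]))  # (2.0, 3.0): both > 0 -> SPD
print(eigvals_sym2([[1, 2], [2, 1]]))  # (-1.0, 3.0): mixed signs -> indefinite
```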

Page 7:

Recall: SVD Construction (Important Detail)

• Let A = [ 1  0 ; 0  -1 ] so that A^T A = A A^T = I, and thus U = V = Σ = I

• But A ≠ UΣV^T = I. What's wrong?
• Given a column vector v_i of V, A v_i = UΣV^T v_i = UΣ ê_i = U σ_i ê_i = σ_i u_i, where u_i is the corresponding column of U
• A v_1 = [ 1  0 ; 0  -1 ] [ 1 ; 0 ] = [ 1 ; 0 ] = u_1, but A v_2 = [ 1  0 ; 0  -1 ] [ 0 ; 1 ] = [ 0 ; -1 ] ≠ [ 0 ; 1 ] = u_2

• Although eigenvectors may be scaled by an arbitrary constant, the constraint that U and V be orthonormal forces their columns to be unit length
• However, there are still two choices for the direction of each column

• Multiplying u_2 by -1 to get u_2 = [ 0 ; -1 ] makes U = A, and thus A = UΣV^T as desired
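The sign fix above can be verified by multiplying the 2x2 factors out by hand. A minimal pure-Python sketch (the matmul helper is ours):

```python
def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]
# The naive choice U = Sigma = V^T = I does NOT reconstruct A:
print(matmul2(matmul2(I, I), I) == A)   # False
# Flipping the sign of the second column of U (so U = A) fixes it:
U = [[1, 0], [0, -1]]
print(matmul2(matmul2(U, I), I) == A)   # True
```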

Page 8:

Symmetric Matrices (SVD)

• Since A^T A = A A^T = A^2, both the columns of U and the columns of V are eigenvectors of A^2
• They have identical (but potentially opposite) directions: u_i = ±v_i
• Thus, A v_i = σ_i u_i implies A v_i = ±σ_i v_i
• That is, the v_i (and u_i) are eigenvectors of A with eigenvalues ±σ_i

• Similar to the polar SVD, one can pull negative signs out of the columns of U into the σ_i to obtain U = V and A = VΛV^T as a modified SVD
• A = VΛV^T implies AV = VΛ, which is the matrix form of the eigensystem of A
• Here, Λ contains the positive and negative eigenvalues of A
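The decomposition A = VΛV^T can be verified for a small symmetric matrix whose eigenpairs are known in closed form. A minimal sketch (the example matrix and its eigenpairs are ours, chosen so Λ contains a negative eigenvalue):

```python
import math

# For A = [[1, 2], [2, 1]], the eigenpairs are
# (3, [1, 1]/sqrt(2)) and (-1, [1, -1]/sqrt(2)).
A = [[1.0, 2.0], [2.0, 1.0]]
s = 1.0 / math.sqrt(2.0)
V = [[s, s], [s, -s]]          # orthonormal eigenvectors as columns
lam = [3.0, -1.0]              # eigenvalues, one of them negative

# Reconstruct sum_k lam_k * v_k v_k^T (i.e. V Lambda V^T) and compare to A.
recon = [[sum(lam[k] * V[i][k] * V[j][k] for k in range(2))
          for j in range(2)] for i in range(2)]
print(all(abs(recon[i][j] - A[i][j]) < 1e-12
          for i in range(2) for j in range(2)))   # True
```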

Page 9:

SPD Matrices

• When A is SP(S)D, Λ = Σ and the standard SVD is A = VΣV^T (i.e. U = V)

• The singular values are the (all positive) eigenvalues of A (since AV = VΣ)
• Constructing V with det V = 1 (as usual) and obtaining all σ_i > 0 implies that there are no reflections
• Since all σ_i > 0, SPD matrices have full rank and are invertible

• SP(S)D (and not SPD) matrices have at least one σ_i = 0 and a null space
• Often, one can use modified SPD techniques for SP(S)D matrices
• Unfortunately, indefinite matrices are significantly more challenging
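The SP(S)D-but-not-SPD case can be illustrated with a rank-deficient symmetric matrix. A minimal sketch (the example matrix and helper name are ours):

```python
# [[1, 1], [1, 1]] is SP(S)D but not SPD:
# x^T A x = (x1 + x2)^2 >= 0 for every x, with equality along x = (1, -1).
def quad_form(a, x):
    """x^T A x for small dense matrices stored as nested lists."""
    n = len(x)
    return sum(x[i] * a[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1.0, 1.0], [1.0, 1.0]]
print(quad_form(A, [1.0, 2.0]))    # 9.0  (positive)
print(quad_form(A, [1.0, -1.0]))   # 0.0  (null space direction, sigma_i = 0)
```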

Page 10:

Making/Breaking Symmetry

• Row/column scaling can break symmetry:
• Row scaling [ 5  3 ; 3  -4 ] (its second row) by -2 gives a non-symmetric [ 5  3 ; -6  8 ]
• Additional column scaling (of the second column) by -2 gives the symmetric [ 5  -6 ; -6  -16 ]

• Scaling the same row/column together in the same way preserves symmetry

• Important: a nonsymmetric matrix might be inherently symmetric when properly rescaled/rearranged
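The preserve/break behavior above can be demonstrated with a small diagonal-scaling helper. A minimal sketch (the function name is ours; scaling both rows and columns by the same factors forms D A D):

```python
def scale(a, d, rows=True, cols=True):
    """Scale row i by d[i] and/or column j by d[j] (nested-list matrices)."""
    n = len(a)
    return [[a[i][j] * (d[i] if rows else 1) * (d[j] if cols else 1)
             for j in range(n)] for i in range(n)]

A = [[5, 3], [3, -4]]
d = [1, -2]
print(scale(A, d, cols=False))  # [[5, 3], [-6, 8]]   -- row-only: symmetry broken
print(scale(A, d))              # [[5, -6], [-6, -16]] -- D A D: symmetric again
```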

Page 11:

Rules Galore

• There are many rules/theorems regarding special matrices (especially for SPD)
• It is important to be aware of reference material (and to look things up)

• Examples:
  • SPD matrices don't require pivoting during LU factorization
  • A symmetric (strictly) diagonally dominant matrix with positive diagonal entries is positive definite
  • Jacobi and Gauss-Seidel iteration converge when a matrix is strictly (or irreducibly) diagonally dominant
  • Etc.

Page 12:

Cholesky Factorization

• SPD matrices have an LU factorization of the form LL^T and don't require elimination to find it

• Consider
    [ a11  a21 ]   [ l11   0  ] [ l11  l21 ]   [ l11^2      l11 l21       ]
    [ a21  a22 ] = [ l21  l22 ] [  0   l22 ] = [ l11 l21    l21^2 + l22^2 ]
• So l11 = sqrt(a11), l21 = a21 / l11, and l22 = sqrt(a22 - l21^2)

// For each column j of the matrix:
//   loop over all previous columns k and subtract a multiple of column k from the current column j,
//   then take the square root of the diagonal entry and scale column j below the diagonal by that value
for(j=1,n){
  for(k=1,j-1) for(i=j,n) a(i,j) -= a(i,k)*a(j,k);
  a(j,j) = sqrt(a(j,j));
  for(k=j+1,n) a(k,j) /= a(j,j);
}

• This algorithm factors the matrix "in place", replacing A with L
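The in-place column sweep translates directly to runnable code. A minimal pure-Python sketch (0-based indices; the function name and test matrix are ours):

```python
import math

def cholesky_in_place(a):
    """In-place Cholesky of an SPD matrix (nested lists): overwrites the
    lower triangle of a with L such that L L^T = a, column by column."""
    n = len(a)
    for j in range(n):
        # Subtract multiples of previous columns k from column j.
        for k in range(j):
            for i in range(j, n):
                a[i][j] -= a[i][k] * a[j][k]
        # Square root of the diagonal, then scale the column below it.
        a[j][j] = math.sqrt(a[j][j])
        for i in range(j + 1, n):
            a[i][j] /= a[j][j]
    return a

L = cholesky_in_place([[4.0, 2.0], [2.0, 10.0]])
# Lower triangle of L is [[2, .], [1, 3]]: 2*2 = 4, 2*1 = 2, 1*1 + 3*3 = 10.
```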

Page 13:

Incomplete Cholesky Preconditioner

• Cholesky factorization can be used to construct a preconditioner for a sparse matrix
• The full Cholesky factorization would fill in too many non-zero entries
• So, incomplete Cholesky preconditioning uses Cholesky factorization with the caveat that only the nonzero entries are modified (all zeros remain zeros)
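The zero-fill rule can be sketched by adding a sparsity-pattern check to the Cholesky sweep. A minimal IC(0)-style sketch under the slide's "zeros remain zeros" caveat (function name and structure are ours; production codes use sparse storage, not dense nested lists):

```python
import math

def incomplete_cholesky0(a):
    """Zero-fill incomplete Cholesky sketch: same column sweep as full
    Cholesky, but entries that are zero in the original matrix are never
    updated, so no fill-in is created."""
    n = len(a)
    pattern = [[a[i][j] != 0 for j in range(n)] for i in range(n)]
    L = [row[:] for row in a]
    for j in range(n):
        for k in range(j):
            for i in range(j, n):
                if pattern[i][j]:            # only touch existing nonzeros
                    L[i][j] -= L[i][k] * L[j][k]
        L[j][j] = math.sqrt(L[j][j])
        for i in range(j + 1, n):
            if pattern[i][j]:
                L[i][j] /= L[j][j]
    return L

# On a matrix with no zeros, IC(0) reduces to the full factorization:
L0 = incomplete_cholesky0([[4.0, 2.0], [2.0, 10.0]])
# lower triangle of L0 is [[2, .], [1, 3]]
```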

Page 14:

Symmetric Approximation

• For non-symmetric A, a symmetric Ã = (1/2)(A + A^T) averages the off-diagonal components
• Solving the symmetric Ãx = b instead of the non-symmetric Ax = b gives a faster/easier (though erroneous) approximation to a problem that might not require too much accuracy
• Alternatively, the inverse of the symmetric Ã (or the notion thereof) may be used to devise a preconditioner for Ax = b
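The symmetric average is a one-liner per entry. A minimal sketch (the function name and example matrix are ours):

```python
# A_sym = (A + A^T) / 2 averages each off-diagonal pair a_ij, a_ji
# while leaving the diagonal unchanged.
def symmetrize(a):
    n = len(a)
    return [[(a[i][j] + a[j][i]) / 2.0 for j in range(n)] for i in range(n)]

A = [[4.0, 1.0], [3.0, 5.0]]
print(symmetrize(A))   # [[4.0, 2.0], [2.0, 5.0]]
```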