
Sparse Matrix Computations


Page 1: Sparse Matrix Computations

CSCI 317 Mike Heroux 1

Sparse Matrix Computations

CSCI 317, Mike Heroux

Page 2: Sparse Matrix Computations


Matrices

• Matrix (defn, not rigorous): An m-by-n, 2-dimensional array of numbers.
• Examples:

      1.0 2.0 1.5
  A = 2.0 3.0 2.5
      1.5 2.5 5.0

      a11 a12 a13
  A = a21 a22 a23
      a31 a32 a33

Page 3: Sparse Matrix Computations


Sparse Matrices

• Sparse Matrix (defn, not rigorous): An m-by-n matrix with enough zero entries that it makes sense to keep track of what is zero and nonzero.
• Example:

      a11 a12 0   0   0   0
      a21 a22 a23 0   0   0
  A = 0   a32 a33 a34 0   0
      0   0   a43 a44 a45 0
      0   0   0   a54 a55 a56
      0   0   0   0   a65 a66

Page 4: Sparse Matrix Computations


Dense vs Sparse Costs

• What is the cost of storing the tridiagonal matrix with all entries?
• What is the cost if we store each of the diagonals as a vector?
• What is the cost of computing y = Ax for vectors x (known) and y (to be computed):
  – If we ignore sparsity?
  – If we take sparsity into account?


Page 5: Sparse Matrix Computations


Origins of Sparse Matrices

• In practice, most large matrices are sparse. Specific sources:
  – Differential equations.
    • Encompasses the vast majority of scientific and engineering simulation.
    • E.g., structural mechanics: F = ma, car crash simulation.
  – Stochastic processes.
    • Matrices describe probability distribution functions.
  – Networks.
    • Electrical and telecommunications networks.
    • Matrix element aij is nonzero if there is a wire connecting point i to point j.
  – And more…


Page 6: Sparse Matrix Computations


Example: 1D Heat Equation (Laplace Equation)

• The one-dimensional, steady-state heat equation on the interval [0,1] is as follows:

  u''(x) = 0,  0 < x < 1,
  u(0) = a,
  u(1) = b.

• The solution u(x) to this equation describes the distribution of heat on a wire with temperature equal to a and b at the left and right endpoints, respectively, of the wire.


Page 7: Sparse Matrix Computations


Finite Difference Approximation

• The following formula provides an approximation of u''(x) in terms of u:

  u''(x) = (u(x + h) - 2u(x) + u(x - h)) / h^2 + O(h^2)

• For example, if we want to approximate u''(0.5) with h = 0.25:

  u''(0.5) ≈ (u(0.75) - 2u(0.5) + u(0.25)) / (1/4)^2


Page 8: Sparse Matrix Computations


1D Grid

Interval:
  x:    x0 = 0,  x1 = 0.25,  x2 = 0.5,  x3 = 0.75,  x4 = 1
  u(x): u(0) = a = u0,  u(0.25) = u1,  u(0.5) = u2,  u(0.75) = u3,  u(1) = b = u4

• Note that it is impossible to find u(x) for all values of x.
• Instead we:
  • Create a “grid” with n points.
  • Then find an approximation to u at these grid points.
• If we want a better approximation, we increase n.

Note:
• We know u0 and u4.
• We know a relationship between the ui via the finite difference equations.
• We need to find ui for i = 1, 2, 3.

Page 9: Sparse Matrix Computations


What We Know

Left endpoint:
  (1/h^2)(u0 - 2u1 + u2) = 0,  or  2u1 - u2 = a.

Right endpoint:
  (1/h^2)(u2 - 2u3 + u4) = 0,  or  2u3 - u2 = b.

Middle points:
  (1/h^2)(u_{i-1} - 2u_i + u_{i+1}) = 0,  or  -u_{i-1} + 2u_i - u_{i+1} = 0 for i = 2.


Page 10: Sparse Matrix Computations


Write in Matrix Form

  [  2 -1  0 ] [u1]   [a]
  [ -1  2 -1 ] [u2] = [0]
  [  0 -1  2 ] [u3]   [b]

• This is a linear system with 3 equations and 3 unknowns.
• We can easily solve it.
• Note that n = 5 grid points generates this 3-equation system.
• In general, for n grid points on [0, 1], we will have n-2 equations and unknowns.


Page 11: Sparse Matrix Computations


General Form of 1D Finite Difference Matrix

  [  2 -1  0  ...  0  0 ] [ u1      ]   [ a ]
  [ -1  2 -1  ...  0  0 ] [ u2      ]   [ 0 ]
  [  0 -1  2  ...  0  0 ] [ u3      ] = [ . ]
  [  .  .  .   .   .  . ] [ .       ]   [ . ]
  [  0  0  ... -1  2 -1 ] [ .       ]   [ 0 ]
  [  0  0  ...  0 -1  2 ] [ u_{n-2} ]   [ b ]


Page 12: Sparse Matrix Computations


A View of More Realistic Problems

• The previous example is very simple.
• But basic principles apply to more complex problems.
• Finite difference approximations exist for any differential equation.
• This leads to far more complex matrix patterns.
• For example…


Page 13: Sparse Matrix Computations


“Tapir” Matrix (John Gilbert)


Page 14: Sparse Matrix Computations


Corresponding Mesh


Page 15: Sparse Matrix Computations


Sparse Linear Systems: Problem Definition

• A frequent requirement for scientific and engineering computing is to solve:

  Ax = b

  where A is a known large (sparse) matrix (a linear operator),
        b is a known vector,
        x is an unknown vector.

  NOTE: We are using x differently than before.
    Previous x: points in the interval [0, 1].
    New x: vector of u values.

• Goal: Find x.
• Method: We will look at two different methods: Jacobi and Gauss-Seidel.


Page 16: Sparse Matrix Computations


Iterative Methods

• Given an initial guess for x, called x(0) (x(0) = 0 is acceptable), compute a sequence x(k), k = 1, 2, … such that each x(k) is “closer” to x.
• Definition of “close”:
  – Suppose x(k) = x exactly for some value of k.
  – Then r(k) = b - Ax(k) = 0 (the vector of all zeros).
  – And norm(r(k)) = sqrt(<r(k), r(k)>) = 0 (a number).
  – For any x(k), let r(k) = b - Ax(k).
  – If norm(r(k)) = sqrt(<r(k), r(k)>) is small (< 1.0E-6, say) then we say that x(k) is close to x.
  – The vector r is called the residual vector.

Page 17: Sparse Matrix Computations


Linear Conjugate Gradient Methods

[Diagram: the linear conjugate gradient algorithm, annotated by types of operations and types of objects: scalar operations on scalars, vector-vector operations on vectors (with the scalar product <x, y> defined by the vector space), and linear operator applications.]

  <v, w> = v1*w1 + v2*w2 + … + vn*wn


Page 18: Sparse Matrix Computations


General Sparse Matrix

• Example:

      a11 0   0   0   0   a16
      0   a22 a23 0   0   0
  A = 0   a32 a33 0   a35 0
      0   0   0   a44 0   0
      0   0   a53 0   a55 a56
      a61 0   0   0   a65 a66

Page 19: Sparse Matrix Computations


Compressed Row Storage (CRS) Format

Idea:
• Create 3 length-m arrays of pointers and 1 length-m array of ints:

  double ** values     = new double*[m];
  double ** diagonal   = new double*[m];
  int    ** indices    = new int*[m];
  int     * numEntries = new int[m];

Page 20: Sparse Matrix Computations


Compressed Row Storage (CRS) Format

Fill arrays as follows:

  for (i=0; i<m; i++) { // for each row
    numEntries[i] = numRowEntries; // number of nonzero entries in row i
    values[i]  = new double[numRowEntries];
    indices[i] = new int[numRowEntries];
    for (j=0; j<numRowEntries; j++) { // for each entry in row i
      values[i][j]  = /* value of jth row entry */;
      indices[i][j] = /* column index of jth row entry */;
      if (i == indices[i][j]) diagonal[i] = &(values[i][j]);
    }
  }

Page 21: Sparse Matrix Computations


CRS Example (diagonal omitted)

      4 0 0 1
  A = 0 3 0 2
      0 0 6 0
      5 0 9 8

  numEntries[0] = 2, values[0] = [4 1],   indices[0] = [0 3]
  numEntries[1] = 2, values[1] = [3 2],   indices[1] = [1 3]
  numEntries[2] = 1, values[2] = [6],     indices[2] = [2]
  numEntries[3] = 3, values[3] = [5 9 8], indices[3] = [0 2 3]

Page 22: Sparse Matrix Computations


Matrix, Scalar-Matrix and Matrix-Vector Operations

• Given vectors w, x and y, scalars alpha and beta, and matrices A and B, we define:

• matrix trace:
  – alpha = tr(A)
    α = a11 + a22 + … + ann

• matrix scaling:
  – B = alpha * A
    bij = α aij

• matrix-vector multiplication (with update):
  – w = alpha * A * x + beta * y
    wi = α (ai1 x1 + ai2 x2 + … + ain xn) + β yi

Page 23: Sparse Matrix Computations


Common Operations (See your notes)

• Consider the following operations:
  – Matrix trace.
  – Matrix scaling.
  – Matrix-vector product.
• Write each mathematically and in C/C++.

Page 24: Sparse Matrix Computations


Complexity

• (arithmetic) complexity (defn): The total number of arithmetic operations performed using a given algorithm.
  – Often a function of one or more parameters.
• parallel complexity (defn): The number of parallel operations performed, assuming an infinite number of processors.

Page 25: Sparse Matrix Computations


Complexity Examples (See your notes)

• What is the complexity of:
  – Sparse matrix trace?
  – Sparse matrix scaling?
  – Sparse matrix-vector product?
• What is the parallel complexity of these operations?