
Introduction to PETSc

VIGRE Seminar, Wednesday, November 8, 2006

Parallel Computing

How (basically) does it work?

• Assign each processor a number

• The same program goes to all

• Each uses separate memory

• They pass information back and forth as necessary
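As a concrete illustration, a minimal MPI "hello world" sketch in C, showing each processor receiving its own number while running the same program:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* the same program starts on every processor */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* each processor is assigned a number */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("I am processor %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}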

Parallel Computing

Example 1: Matrix-Vector Product

A matrix with rows (a, b, c), (d, e, f), (g, h, i) and a vector (j, k, l) are inputs into the program.

The control node (0) reads in the matrix and distributes the rows amongst the processors.

0: (a, b, c)

1: (d, e, f)

2: (g, h, i)

The control node also sends the vector to each processor's memory.

0: (a, b, c) ; (j, k, l)

1: (d, e, f) ; (j, k, l)

2: (g, h, i) ; (j, k, l)

Each processor computes its own dot product.

0: (a, b, c) • (j, k, l) = aj+bk+cl

1: (d, e, f) • (j, k, l) = dj+ek+fl

2: (g, h, i) • (j, k, l) = gj+hk+il

The processors send their results to the control node, which outputs the product (aj+bk+cl, dj+ek+fl, gj+hk+il).
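A minimal MPI sketch of Example 1 in C, assuming exactly 3 processes, one row per processor, and hard-coded placeholder data (run with mpirun -np 3):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    double A[3][3] = {{1,2,3},{4,5,6},{7,8,9}};  /* only meaningful on the control node */
    double x[3]    = {1, 1, 1};
    double row[3], mine, y[3];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* the control node (0) distributes one row to each processor */
    MPI_Scatter(A, 3, MPI_DOUBLE, row, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    /* the control node sends the whole vector to each processor's memory */
    MPI_Bcast(x, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    /* each processor computes its own dot product */
    mine = row[0]*x[0] + row[1]*x[1] + row[2]*x[2];
    /* the results return to the control node, which outputs the product */
    MPI_Gather(&mine, 1, MPI_DOUBLE, y, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("Ax = (%g, %g, %g)\n", y[0], y[1], y[2]);
    MPI_Finalize();
    return 0;
}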

Parallel Computing

Example 2: Matrix-Vector Product

Suppose for memory reasons each processor only has part of the vector.

0: (a, b, c) ; j

1: (d, e, f) ; k

2: (g, h, i) ; l

Before the multiply, each processor sends the necessary information elsewhere.

0: (a, b, c) ; j ; (k from 1) ; (l from 2)

1: (d, e, f) ; (j from 0) ; k ; (l from 2)

2: (g, h, i) ; (j from 0) ; (k from 1) ; l

After the multiply, the space is freed again for other uses.

0: (a, b, c) ; j

1: (d, e, f) ; k

2: (g, h, i) ; l

Parallel Computing

Example 3: Matrix-Matrix Product

The previous case illustrates how to multiply matrices stored across multiple processors.

0: (a, b, c) ; (j, k, l)

1: (d, e, f) ; (m, n, o)

2: (g, h, i) ; (p, q, r)

Each column is distributed for processing in turn.

0: 1) (a, b, c)•(j, m, p) = α  2) (a, b, c)•(k, n, q) = β  3) (a, b, c)•(l, o, r) = γ

1: 1) (d, e, f)•(j, m, p) = δ  2) (d, e, f)•(k, n, q) = ε  3) (d, e, f)•(l, o, r) = ζ

2: 1) (g, h, i)•(j, m, p) = η  2) (g, h, i)•(k, n, q) = θ  3) (g, h, i)•(l, o, r) = ι

The result is a matrix with the same parallel row structure as the first matrix and the same column structure as the second.

0: (α, β, γ)

1: (δ, ε, ζ)

2: (η, θ, ι)

The original entries could also have been sub-matrices, as long as they were compatible.

Parallel Computing

Example 4: Block Diagonal Product

Suppose the second matrix is block diagonal.

0: (A, B, C) ; (J, 0, 0)

1: (D, E, F) ; (0, K, 0)

2: (G, H, I) ; (0, 0, L)

Much less information needs to be passed between the processors.

0: 1) AJ = α  2) BK = β  3) CL = γ

1: 1) DJ = δ  2) EK = ε  3) FL = ζ

2: 1) GJ = η  2) HK = θ  3) IL = ι

Parallel Computing

When is it worth it to parallelize?

• There is a time cost associated with passing messages.

• The amount of message passing is dependent on the problem and the program (algorithm).

• Therefore, the benefits depend more on the structure of the problem and the program than on the size/speed of the parallel network (diminishing returns).

Parallel Networks

How do I use multiple processors?

• This depends on the network, but…

• Most networks use some variation of PBS, a job scheduler, and mpirun or mpiexec.

• A parallel program needs to be submitted as a batch job.

Parallel Networks

• Suppose I have a program myprog, which gets data from data.dat, and which I call in the following fashion when only using one processor:

./myprog -f data.dat

I would write a file myprog.pbs that looks like the following:

Parallel Networks

#PBS -q compute (name of the processing queue [not necessary on all networks])
#PBS -N myprog (the name of the job)
#PBS -l nodes=2:ppn=1,walltime=00:10:00 (number of nodes, number of processes per node, and maximum time to allow the program to run)
#PBS -o /home/me/mydir/myprog.out (where the output of the program should be written)
#PBS -e /home/me/mydir/myprog.err (where the error stream should be written)

These are the headers that tell the job scheduler how to handle your job.

Parallel Networks

Although what follows depends on the MPI software that the network runs, it should look something like this:

cd $PBS_O_WORKDIR (makes the processors run the program in the directory where myprog.pbs is saved)

mpirun -machinefile $PBS_NODEFILE -np 2 myprog -f data.dat (tells the MPI software which machines to use and how many processes to start; notice that the command line arguments follow as usual)
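Putting the pieces together, the complete myprog.pbs would look something like this:

#PBS -q compute
#PBS -N myprog
#PBS -l nodes=2:ppn=1,walltime=00:10:00
#PBS -o /home/me/mydir/myprog.out
#PBS -e /home/me/mydir/myprog.err

cd $PBS_O_WORKDIR
mpirun -machinefile $PBS_NODEFILE -np 2 myprog -f data.dat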

Parallel Networks

• Once the .pbs file is written, it can be submitted to the job scheduler with qsub:

qsub myprog.pbs

• You can check to see if your job is running with the command qstat.

Parallel Networks

• Some systems (but not all) will allow you to simulate running your program in parallel on one processor, which is useful for debugging:

mpirun -np 3 myprog -f data.dat

Parallel Networks

What parallel systems are available?

• RTC: Rice Terascale Cluster: 244 processors.

• ADA: Cray XD1: 632 processors.

• caamster: CAAM department exclusive: 8(?) processors.

PETSc

What do I use PETSc for?

• File I/O with "minimal" understanding of MPI

• Vector- and matrix-based data management (in particular: sparse)

• Linear algebra routines familiar from the famous serial packages

PETSc

• At the moment, ada and caamster (and harvey) have PETSc installed.

• You can download and install PETSc on your own machine (requires Cygwin for Windows) for educational and debugging purposes.

PETSc

• PETSc builds on the existing software BLAS and LAPACK; which implementations to use can be specified at configuration.

• It has a (slower) debugging configuration and a (faster, tacit) optimized configuration.

PETSc

• Installation comes with documentation, examples, and manual pages.

• The biggest part of learning how to use PETSc is learning how to use the manual pages.

PETSc

• It is extremely useful to have an environment variable PETSC_DIR in your shell of choice, which gives the path to the installation of PETSc, e.g.

PETSC_DIR=/usr/local/src/petsc-2.3.1-p13/
export PETSC_DIR

PETSc

Makefile

• You can pretty much copy/paste/modify the makefiles in the examples, but here is the basic setup:

PETSc

Makefile

(…) (Other definitions for CFLAGS, etc.)
LOCDIR = ~/mydir
include ${PETSC_DIR}/bmake/common/base
(This is why it is useful to have this variable saved.)

myprog: myprog.o chkopts
	-${CLINKER} -o myprog myprog.o ${PETSC_LIB}
	${RM} myprog.o
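With PETSC_DIR set and this makefile in place, running make myprog compiles the program and links it against the PETSc libraries.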

PETSc

Headers

• #include "petsc.h" in all files, unless the routines that you use need more specific headers.

• How do you know? Consult the manual pages!

PETSc

Data Types

• PETSc has a slew of its own data types: PetscInt, PetscReal, PetscScalar, etc.

• Usually aliases of normal data types: PetscInt ~ int, PetscReal ~ double

• Safer to use for compatibility

PETSc

Usage in C/C++

• The top-level program should begin:

static char help[] = "Your message here.";

int main(int argc,char **argv)
{
(… declarations)
PetscInitialize(&argc,&argv,PETSC_NULL,help);

PETSc

Usage in C/C++

• The top-level program should end:

(…)
PetscFinalize();
return(0);
}
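Put together, a minimal runnable sketch of this skeleton (the help message and body are placeholders):

#include "petsc.h"

static char help[] = "Your message here.";

int main(int argc, char **argv)
{
    /* (… declarations …) */
    PetscInitialize(&argc, &argv, PETSC_NULL, help);
    /* (… the actual work of the program …) */
    PetscFinalize();
    return(0);
}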

PETSc

Usage in C/C++

• When first programming, include the following variable:

PetscErrorCode ierr;

• Where you'd call a PETSc routine,

Routine(arg);

write instead

ierr = Routine(arg); CHKERRQ(ierr);


PETSc

Usage in C/C++

• When you try to run your program, you will be informed of any problems with incompatible data types/dimensions/etc.

PETSc

Data

• Any data type larger than a scalar has a Create and a Destroy routine.

• If you run ./myprog -log_summary, you get # created and # destroyed for each data type, to find memory leaks.

PETSc

Example: Vec

• Two types: global and local

• Dependent on function: do other processors need to see this data?

• Basic usage:

Vec X;
VecCreate([PETSC_COMM_WORLD/PETSC_COMM_SELF],&X);

PETSc

Example: Vec

• Advanced usage:

VecCreateSeq(PETSC_COMM_SELF,n,&X);
VecCreateSeqWithArray(PETSC_COMM_SELF,n,vals,&X);
VecLoad(instream,VECSEQ,&X);
VecCreateMPI(PETSC_COMM_WORLD,n,PETSC_DETERMINE,&X);
VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,N,&X);
VecCreateMPIWithArray(PETSC_COMM_WORLD,n,N,vals,&X);
VecLoad(instream,VECMPI,&X);
VecDuplicate(Y,&X);
MatGetVecs(M,&X,PETSC_NULL);
MatGetVecs(M,PETSC_NULL,&X);

PETSc

Example: Vec

• If not created with an array or loaded from a file, values are still needed.

• To copy the values of another Vec with the same parallel structure, use VecCopy(Y,X).

• To set all values to a single scalar value, use VecSet(X,alpha).

PETSc

Example: Vec

• There are other routines for more complicated ways to set values.

• PETSc guards the block of data where the actual values are stored very closely.

• An assembly routine must be called after these other routines.

PETSc

Example: Vec

• Other routines:

VecSetValue
VecSetValueLocal (different indexing used)
VecSetValues
VecSetValuesLocal
VecSetValuesBlocked
VecSetValuesBlockedLocal

PETSc

Example: Vec

• Once a vector is assembled, there are routines for (almost) every function we could want from a vector: AXPY, dot product, absolute value, pointwise multiplication, etc.

• Call VecDestroy(X) to free its array when it isn't needed anymore.
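A minimal sketch tying the Vec routines together, using the petsc-2.3-era interfaces shown on these slides (the length and values are placeholders, and VecGetOwnershipRange is an additional standard PETSc call not listed above):

#include "petscvec.h"

static char help[] = "Builds and assembles a small parallel vector.";

int main(int argc, char **argv)
{
    Vec            X;
    PetscInt       i, istart, iend;
    PetscReal      norm;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, PETSC_NULL, help); CHKERRQ(ierr);
    /* global vector: PETSc decides each processor's share of 10 entries */
    ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &X); CHKERRQ(ierr);
    /* each processor sets only the entries it owns */
    ierr = VecGetOwnershipRange(X, &istart, &iend); CHKERRQ(ierr);
    for (i = istart; i < iend; i++) {
        ierr = VecSetValue(X, i, (PetscScalar)i, INSERT_VALUES); CHKERRQ(ierr);
    }
    /* assembly must follow the VecSetValue* routines */
    ierr = VecAssemblyBegin(X); CHKERRQ(ierr);
    ierr = VecAssemblyEnd(X); CHKERRQ(ierr);
    ierr = VecNorm(X, NORM_2, &norm); CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "||X|| = %g\n", norm); CHKERRQ(ierr);
    ierr = VecDestroy(X); CHKERRQ(ierr);  /* frees the vector's array */
    ierr = PetscFinalize(); CHKERRQ(ierr);
    return 0;
}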

PETSc

Example: Mat

• Like Vec, a Mat can be global or local (MPI/Seq)

• A Mat can take on a large number of data structures to optimize * and \, even though the same routine is used on all structures:

• Row compressed

• Block row compressed

• Symmetric block row compressed

• Block diagonal

• And even dense
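A sketch of assembling a small matrix in the row-compressed (AIJ) format, again with petsc-2.3-era calls (MatCreateSeqAIJ and MatSetValues are standard PETSc routines not shown above; the tridiagonal entries are placeholders):

#include "petscmat.h"

/* assemble an n-by-n tridiagonal matrix in row-compressed (AIJ) storage */
static PetscErrorCode BuildTridiagonal(PetscInt n, Mat *M)
{
    PetscInt       i, col[3];
    PetscScalar    val[3];
    PetscErrorCode ierr;

    /* preallocate 3 nonzeros per row */
    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, PETSC_NULL, M); CHKERRQ(ierr);
    for (i = 0; i < n; i++) {
        col[0] = i - 1;  col[1] = i;   col[2] = i + 1;
        val[0] = -1.0;   val[1] = 2.0; val[2] = -1.0;
        if (i == 0) {            /* first row has no left neighbor */
            ierr = MatSetValues(*M, 1, &i, 2, &col[1], &val[1], INSERT_VALUES); CHKERRQ(ierr);
        } else if (i == n - 1) { /* last row has no right neighbor */
            ierr = MatSetValues(*M, 1, &i, 2, col, val, INSERT_VALUES); CHKERRQ(ierr);
        } else {
            ierr = MatSetValues(*M, 1, &i, 3, col, val, INSERT_VALUES); CHKERRQ(ierr);
        }
    }
    /* as with Vec, assembly must follow the set-values calls */
    ierr = MatAssemblyBegin(*M, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    ierr = MatAssemblyEnd(*M, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    return 0;
}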

PETSc

File I/O

• PETSc has equivalent routines to printf, but you must decide if you want every node to print or just the control node.

• To ensure clarity when multiple nodes print, use PetscSynchronizedPrintf followed by PetscSynchronizedFlush.
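A small sketch of the difference (the messages are placeholders; the calls follow the petsc-2.3-era signatures):

PetscMPIInt rank;
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

/* printed once, by the control node only */
PetscPrintf(PETSC_COMM_WORLD, "Starting the computation.\n");

/* printed by every node, gathered in rank order */
PetscSynchronizedPrintf(PETSC_COMM_WORLD, "Node %d checking in.\n", rank);
PetscSynchronizedFlush(PETSC_COMM_WORLD);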

PETSc

File I/O

• The equivalent to a stream is a "viewer", but a viewer organizes data across multiple processors.

• A viewer combines an output location (file/stdout/stderr) with a format.

• Most data types have a View routine, such as MatView(M,viewer).
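For example, a binary file viewer might be used like this (the file name is a placeholder; the calls follow the petsc-2.3-era interface used on these slides):

PetscViewer    viewer;
PetscErrorCode ierr;

/* open a binary file for writing and "view" the matrix into it */
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat.bin", FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);
ierr = MatView(M, viewer); CHKERRQ(ierr);
ierr = PetscViewerDestroy(viewer); CHKERRQ(ierr);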

PETSc

File I/O

• On a batch server, ASCII I/O can be horrendously slow.

• PETSc reads data into a parallel format only when it is stored in binary form.

• Lots of output data is likely: binary is more compressed than ASCII.

PETSc

I have ASCII input data: solution?

• Write a wrapper program

• Runs on one processor

• Creates the data to be used in parallel, and "views" it to a binary input file

• In parallel, it will be automatically distributed
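A sketch of such a wrapper (the file name, length, and values are placeholders; in the parallel run, VecLoad with VECMPI, as shown earlier, would read the result back in distributed form):

#include "petscvec.h"

static char help[] = "Converts ASCII data to a PETSc binary input file.";

int main(int argc, char **argv)
{
    Vec            X;
    PetscViewer    viewer;
    PetscInt       i, n = 100;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, PETSC_NULL, help); CHKERRQ(ierr);
    /* runs on one processor: build the data (here it would be parsed from the ASCII file) */
    ierr = VecCreateSeq(PETSC_COMM_SELF, n, &X); CHKERRQ(ierr);
    for (i = 0; i < n; i++) {
        ierr = VecSetValue(X, i, 1.0, INSERT_VALUES); CHKERRQ(ierr);  /* placeholder value */
    }
    ierr = VecAssemblyBegin(X); CHKERRQ(ierr);
    ierr = VecAssemblyEnd(X); CHKERRQ(ierr);
    /* "view" the vector to a binary input file for the parallel run */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, "input.bin", FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);
    ierr = VecView(X, viewer); CHKERRQ(ierr);
    ierr = PetscViewerDestroy(viewer); CHKERRQ(ierr);
    ierr = VecDestroy(X); CHKERRQ(ierr);
    ierr = PetscFinalize(); CHKERRQ(ierr);
    return 0;
}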


PETSc

Next Time: Issues for Large Dynamical Systems

• Time Stepping

• Updating algebraically

• Managing lots of similar equations (Scattering/Gathering)