
Friday, October 20, 2006

“Work expands to fill the time available for its completion.”

- Parkinson’s 1st Law


MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count_recvd)

Returns the number of entries received in the count_recvd variable.
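A minimal usage sketch (the message length of 42 is arbitrary; run with at least 2 processes): the receiver posts a buffer larger than it needs and uses MPI_Get_count to find out how many entries actually arrived.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, data[100] = {0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* Send 42 ints; the receiver does not know this length in advance. */
        MPI_Send(data, 42, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int count_recvd;
        /* Receive into a buffer large enough for any expected message. */
        MPI_Recv(data, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        /* Query how many MPI_INT entries were actually received. */
        MPI_Get_count(&status, MPI_INT, &count_recvd);
        printf("received %d ints\n", count_recvd);
    }
    MPI_Finalize();
    return 0;
}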


Matrix Vector Multiplication

n x n matrix A, vector b, x = Ab, p processing elements.

Suppose A is distributed row-wise (n/p rows per process).

Each process computes a different portion of x.


Matrix Vector Multiplication (initial distribution; colors represent data distributed on different processes)

Figure: A (n/p rows per process), b, x.


Matrix Vector Multiplication (colors represent that all parts of b are required by each process)

Figure: A (n/p rows per process), b, x.


Matrix Vector Multiplication (All parts of b are required by each process)

Which collective operation can we use?




Collective communication


Matrix Vector Multiplication

n x n matrix A, vector b, x = Ab, p processing elements.

Suppose A is distributed column-wise (n/p columns per process).

Each process computes a different portion of x.


Matrix Vector Multiplication (initial distribution; colors represent data distributed on different processes)

Figure: A (n/p cols per process), b, x.


Figure: with the column-wise distribution (n/p cols per process), each process computes a partial result for every component of x (partial x0, partial x1, partial x2, partial x3). The partial sums calculated by each process must then be summed across processes to yield x0, x1, x2, x3.


MPI_Reduce example (count=4, dest=1):

Task 0 buffer: 1 2 3 4
Task 1 buffer: 3 4 5 6
Task 2 buffer: 2 3 4 5
Task 3 buffer: 4 5 6 7

Element-wise reduction (here, a sum) is performed; with count=4 and dest=1, Task 1 receives the result: 10 14 18 22.
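A minimal sketch reproducing this reduction (values hardcoded from the figure; run with exactly 4 processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* One 4-element buffer per task, as in the figure above. */
    int bufs[4][4] = {{1, 2, 3, 4}, {3, 4, 5, 6}, {2, 3, 4, 5}, {4, 5, 6, 7}};
    int result[4], rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Element-wise sum of all four buffers, delivered to rank 1 (dest=1). */
    MPI_Reduce(bufs[rank], result, 4, MPI_INT, MPI_SUM, 1, MPI_COMM_WORLD);
    if (rank == 1)   /* prints: 10 14 18 22 */
        printf("%d %d %d %d\n", result[0], result[1], result[2], result[3]);
    MPI_Finalize();
    return 0;
}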


Row-wise requires one MPI_Allgather operation.
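A minimal sketch of the row-wise version (buffer names are illustrative; assumes n is divisible by p):

#include <mpi.h>

/* x = A*b with A distributed row-wise: each process holds nloc = n/p
   contiguous rows of A (local_A) plus nloc entries of b (local_b) and of
   x (local_x); b_full is scratch space of length n on every process. */
void matvec_rowwise(const double *local_A, const double *local_b,
                    double *local_x, double *b_full,
                    int n, int nloc, MPI_Comm comm) {
    /* One collective: every process assembles the full vector b. */
    MPI_Allgather(local_b, nloc, MPI_DOUBLE, b_full, nloc, MPI_DOUBLE, comm);
    /* Multiply the local rows against the full vector. */
    for (int i = 0; i < nloc; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += local_A[i * n + j] * b_full[j];
        local_x[i] = sum;
    }
}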

Column-wise requires MPI_Reduce and MPI_Scatter operations.
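And a matching sketch of the column-wise version (again with illustrative names, n divisible by p, and local_A stored with nloc columns per row):

#include <mpi.h>
#include <stdlib.h>

/* x = A*b with A distributed column-wise: each process holds nloc = n/p
   columns of A and the matching nloc entries of b. Each process forms a
   full-length partial result, the partials are summed with MPI_Reduce on
   root, and the final x is scattered back in n/p pieces. */
void matvec_colwise(const double *local_A, const double *local_b,
                    double *local_x, int n, int nloc, int root,
                    MPI_Comm comm) {
    double *partial = malloc(n * sizeof(double));
    double *x_full  = malloc(n * sizeof(double)); /* significant on root only */
    for (int i = 0; i < n; i++) {
        partial[i] = 0.0;
        for (int j = 0; j < nloc; j++)
            partial[i] += local_A[i * nloc + j] * local_b[j];
    }
    MPI_Reduce(partial, x_full, n, MPI_DOUBLE, MPI_SUM, root, comm);
    MPI_Scatter(x_full, nloc, MPI_DOUBLE, local_x, nloc, MPI_DOUBLE, root, comm);
    free(partial);
    free(x_full);
}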


Matrix Matrix Multiplication

A and B are n x n matrices; p is the number of processing elements.

The matrices are partitioned into blocks of size n/√p x n/√p.


Figure (A, B, C): 16 processes, each represented by a different color. Different portions of the n x n matrices are divided among these processes.


Figure (A, B, C), as before. BUT! To compute Ci,j we need all sub-matrices Ai,k and Bk,j for 0<=k<√p.


To compute Ci,j we need all sub-matrices Ai,k and Bk,j for 0<=k<√p.

All-to-all broadcast of matrix A’s blocks in each row.

All-to-all broadcast of matrix B’s blocks in each column.
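One way to realize these broadcasts (a sketch; the names grid_comm, A_row, B_col are illustrative, while MPI_Cart_sub and MPI_Allgather are standard MPI routines, introduced with the topology material later in these slides): split the 2D process grid into row and column communicators and gather the blocks within each.

#include <mpi.h>

/* All-to-all broadcast of A's blocks along each grid row and of B's blocks
   along each grid column. grid_comm is an existing 2D Cartesian
   communicator; tile is the number of elements in one n/√p x n/√p block;
   A_row and B_col receive the √p gathered tiles. */
void broadcast_blocks(const double *my_A, const double *my_B,
                      double *A_row, double *B_col, int tile,
                      MPI_Comm grid_comm) {
    MPI_Comm row_comm, col_comm;
    int keep_cols[2] = {0, 1};  /* keep dim 1: processes sharing my grid row */
    int keep_rows[2] = {1, 0};  /* keep dim 0: processes sharing my grid column */
    MPI_Cart_sub(grid_comm, keep_cols, &row_comm);
    MPI_Cart_sub(grid_comm, keep_rows, &col_comm);
    MPI_Allgather(my_A, tile, MPI_DOUBLE, A_row, tile, MPI_DOUBLE, row_comm);
    MPI_Allgather(my_B, tile, MPI_DOUBLE, B_col, tile, MPI_DOUBLE, col_comm);
    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
}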


Cannon’s Algorithm

A memory-efficient version of the previous algorithm.

Each process in the ith row requires all √p sub-matrices Ai,k, 0<=k<√p.

Schedule the computation so that the √p processes in the ith row use different Ai,k at any given time.


Figure (A, B): 16 processes, each represented by a different color; different portions of the n x n matrices are divided among these processes.


Figure (A, B, C), the block layout:

A00 A01 A02 A03      B00 B01 B02 B03
A10 A11 A12 A13      B10 B11 B12 B13
A20 A21 A22 A23      B20 B21 B22 B23
A30 A31 A32 A33      B30 B31 B32 B33


Cannon’s Algorithm

Figure: row 0 of A (A00 A01 A02 A03) and column 0 of B (B00 B10 B20 B30), highlighted in A, B, C.

To compute C0,0 we need all sub-matrices A0,k and Bk,0 for 0<=k<√p.


Cannon’s Algorithm

Figure: after one step, row 0 of A holds A01 A02 A03 A00 (shift left) and column 0 of B holds B10 B20 B30 B00 (shift up).


Cannon’s Algorithm

Figure: after the next step, row 0 of A holds A02 A03 A00 A01 and column 0 of B holds B20 B30 B00 B10 (shift left / shift up).


Cannon’s Algorithm

Figure: after the final step, row 0 of A holds A03 A00 A01 A02 and column 0 of B holds B30 B00 B10 B20 (shift left / shift up); C00 is now complete.

A sequence of √p sub-matrix multiplications has been done.


Looking again at the unaligned block layout (A00..A33, B00..B33): if each process simply multiplied the blocks it initially holds, process P01 would compute A01 x B01. But A01 and B01 should not be multiplied! Some initial alignment is required!

Shift all sub-matrices Ai,j to the left (with wraparound) by i steps.

Shift all sub-matrices Bi,j up (with wraparound) by j steps.

After these circular shift operations, Pij holds sub-matrices Ai,(i+j)mod√p and B(i+j)mod√p,j.


After initial alignment:

A:                   B:
A00 A01 A02 A03      B00 B11 B22 B33
A11 A12 A13 A10      B10 B21 B32 B03
A22 A23 A20 A21      B20 B31 B02 B13
A33 A30 A31 A32      B30 B01 B12 B23


Topologies

Many computational science and engineering problems use a series of matrix or grid operations.

The dimensions of the matrices or grids are often determined by the physical problems.

Frequently in multiprocessing, these matrices or grids are partitioned, or domain-decomposed, so that each partition is assigned to a process.


Topologies

By default, MPI uses a linear ordering and views processes in a 1-D topology.

Although it is still possible to refer to each of the partitions by a linear rank number, a mapping of the linear process rank to a higher dimensional virtual rank numbering would facilitate a much clearer and natural computational representation.


Topologies

To address these needs, the MPI library provides topology routines.

Interacting processes are then identified by coordinates in that topology.


Topologies

Each MPI process is mapped into the higher-dimensional topology.

Figure: different ways to map a set of processes to a two-dimensional grid. (a) and (b) show a row- and column-wise mapping of these processes, (c) shows a mapping that follows a space-filling curve (dotted line), and (d) shows a mapping in which neighboring processes are directly connected in a hypercube.


Topologies

Ideally, the mapping would be determined by the interaction among processes and the connectivity of the physical processors.

However, the mechanism for assigning ranks to MPI processes does not use any information about the interconnection network.

Reason: this preserves the architecture-independent advantages of MPI (otherwise different mappings would have to be specified for different interconnection networks).

It is left to the MPI library to find an appropriate mapping that reduces the cost of sending and receiving messages.


MPI allows specification of virtual process topologies in terms of a graph.

Each node in the graph corresponds to a process, and an edge exists between two nodes if they communicate with each other.

The most common topologies are Cartesian topologies (one-, two-, or higher-dimensional grids).


Creating and Using Cartesian Topologies

We can create Cartesian topologies using the function:

int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)
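A minimal sketch (the grid dimensions are illustrative; assumes 16 processes, and runs inside main() between MPI_Init and MPI_Finalize):

#include <mpi.h>

MPI_Comm grid_comm;
int dims[2]    = {4, 4};   /* 4 x 4 process grid */
int periods[2] = {1, 1};   /* wraparound (circular shifts) in both dimensions */
int reorder    = 1;        /* allow MPI to reorder ranks for a better mapping */
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, reorder, &grid_comm);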


With processes renamed in a 2D grid topology, we are able to assign or distribute work, or distinguish among the processes, by their grid topology rather than by their linear process ranks.


MPI_CART_CREATE is a collective communication function. It must be called by all processes in the group.


Creating and Using Cartesian Topologies

Since sending and receiving messages still require (one-dimensional) ranks, MPI provides routines to convert ranks to Cartesian coordinates and vice versa.

int MPI_Cart_coords(MPI_Comm comm_cart, int rank, int maxdims, int *coords)

int MPI_Cart_rank(MPI_Comm comm_cart, int *coords, int *rank)
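Continuing the sketch above (grid_comm from MPI_Cart_create; assumes <stdio.h>):

int my_rank, coords[2], check;
MPI_Comm_rank(grid_comm, &my_rank);
MPI_Cart_coords(grid_comm, my_rank, 2, coords);  /* rank -> (row, col) */
MPI_Cart_rank(grid_comm, coords, &check);        /* (row, col) -> rank; check == my_rank */
printf("rank %d sits at (%d, %d)\n", my_rank, coords[0], coords[1]);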


Creating and Using Cartesian Topologies

The most common operation on Cartesian topologies is shifting data along a dimension of the topology.

int MPI_Cart_shift(MPI_Comm comm_cart, int dir, int s_step, int *rank_source, int *rank_dest)

MPI_CART_SHIFT is used to find two "nearby" neighbors of the calling process along a specific direction of an N-dimensional Cartesian topology.

This direction is specified by the input argument, dir, to MPI_CART_SHIFT.

The two neighbors are called the "source" and "destination" ranks.
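Putting the topology routines together with Cannon's algorithm from earlier: a sketch, assuming a √p x √p periodic grid communicator created as above, square m x m tiles (m = n/√p), and illustrative names throughout. MPI_Sendrecv_replace (a standard MPI call) performs each shift between the neighbors that MPI_Cart_shift returns.

#include <mpi.h>

/* C += A*B on m x m tiles (plain triple loop). */
static void matmul_add(double *C, const double *A, const double *B, int m) {
    for (int i = 0; i < m; i++)
        for (int k = 0; k < m; k++)
            for (int j = 0; j < m; j++)
                C[i*m + j] += A[i*m + k] * B[k*m + j];
}

/* Cannon's algorithm; dimension 0 = grid rows, dimension 1 = grid columns.
   local_C must be zeroed by the caller. A final realignment that would
   restore A and B to their original positions is not shown. */
void cannon(double *local_A, double *local_B, double *local_C,
            int m, int sqrt_p, MPI_Comm grid_comm) {
    int rank, coords[2], src, dst;
    MPI_Status status;
    MPI_Comm_rank(grid_comm, &rank);
    MPI_Cart_coords(grid_comm, rank, 2, coords);

    /* Initial alignment: shift Ai,j left by i steps, Bi,j up by j steps. */
    MPI_Cart_shift(grid_comm, 1, -coords[0], &src, &dst);
    MPI_Sendrecv_replace(local_A, m*m, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &status);
    MPI_Cart_shift(grid_comm, 0, -coords[1], &src, &dst);
    MPI_Sendrecv_replace(local_B, m*m, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &status);

    /* √p steps: multiply the resident tiles, then shift A left, B up by one. */
    for (int step = 0; step < sqrt_p; step++) {
        matmul_add(local_C, local_A, local_B, m);
        MPI_Cart_shift(grid_comm, 1, -1, &src, &dst);
        MPI_Sendrecv_replace(local_A, m*m, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &status);
        MPI_Cart_shift(grid_comm, 0, -1, &src, &dst);
        MPI_Sendrecv_replace(local_B, m*m, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &status);
    }
}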



Matrix Vector Multiplication (block distribution; colors represent data distributed on different processes)

Figure: A, b, x.

Matrix Vector Multiplication (colors represent which parts of b are required by each process)

Figure: A, b, x.