
CS4402 – Parallel Computing

Lecture 1: Classification of Parallel Computers

Classification of Parallel Computation

Important Laws of Parallel Computation

How I used to make breakfast…

How I set the family to work…

How I finally got to the office in time…

What is Parallel Computing?

In the simplest sense, parallel computing is the simultaneous use of multiple computing resources to solve a problem.

Parallel computing is the solution for “Grand Challenge Problems”:

weather and climate

biology, the human genome

chemical and nuclear reactions

Parallel computing is a necessity for some commercial applications:

parallel databases, data mining

computer-aided diagnosis in medicine

Ultimately, parallel computing is an attempt to minimize the time to solution.

Grand Challenge Problems

List of Supercomputers

Find this information at

http://www.top500.org/

Reason 1: Speedup

Reason 2: Economy

Taking advantage of resources already available, including non-local resources.

Cost savings: using multiple "cheap" computing resources instead of paying for time on a supercomputer.

A parallel system is cheaper than a better processor: single-processor performance is constrained by transmission speeds, limits to miniaturization, and economic limitations.

Reason 3: Scalability


Types of Parallel Computers

Parallel computers are classified along two axes:

Hardware (memory organization): shared memory, distributed memory, or hybrid memory

Software (control model): SIMD or MIMD
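As a taste of the shared-memory side of this classification, here is a minimal sketch in C with OpenMP (the array size and test data are arbitrary choices for illustration; an OpenMP-capable compiler is assumed):

```c
/* Shared memory: every thread sees the same array and the same sum.
   Compile with e.g.:  gcc -fopenmp shared_sum.c -o shared_sum      */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void) {
    double sum = 0.0;

    for (int i = 0; i < N; i++)            /* arbitrary test data */
        a[i] = 1.0;

    /* Each thread sums a slice of the shared array; the reduction
       clause combines the per-thread partial sums safely.          */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

A distributed-memory counterpart, with explicit message passing, is sketched after the MIMD slide below.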


The Banking Analogy

Tellers: parallel processors

Customers: tasks

Transactions: operations

Accounts: data

Vector/Array

Each teller/processor gets a very fine-grained task

Uses pipeline parallelism

Good for handling batches when operations can be broken down into fine-grained stages
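To see why fine-grained stages help, here is a minimal C sketch (the stage and item counts are arbitrary) that prints a pipeline schedule: after the pipe fills, every stage is busy at once and one result completes per step.

```c
/* Pipeline parallelism: with S stages, item j occupies stage i at
   time step j + i; after S - 1 warm-up steps, one item completes
   per step.                                                        */
#include <stdio.h>

#define STAGES 3
#define ITEMS  5

int main(void) {
    for (int t = 0; t < ITEMS + STAGES - 1; t++) {
        printf("step %d:", t);
        for (int s = 0; s < STAGES; s++) {
            int item = t - s;              /* item in stage s at time t */
            if (item >= 0 && item < ITEMS)
                printf("  stage%d<-item%d", s, item);
        }
        printf("\n");
    }
    return 0;
}
```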

SIMD (Single-Instruction-Multiple-Data)

All processors do the same thing or stay idle

Phase 1: data partitioning and distribution

Phase 2: data-parallel processing

Efficient for large, regular data sets
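A literal single-instruction-multiple-data sketch in C using x86 SSE intrinsics (the data values and scale factor are arbitrary, and an SSE-capable x86 target is assumed): one multiply instruction operates on four floats at once.

```c
/* SIMD: _mm_mul_ps applies one multiply to four floats at a time.
   Compile on x86 with e.g.:  gcc -msse simd_scale.c                */
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};   /* arbitrary values */
    __m128 factor = _mm_set1_ps(2.0f);

    /* Phase 1: partition the array into 4-wide chunks.
       Phase 2: apply the same instruction to every chunk.          */
    for (int i = 0; i < 8; i += 4) {
        __m128 v = _mm_loadu_ps(&data[i]);
        _mm_storeu_ps(&data[i], _mm_mul_ps(v, factor));
    }

    for (int i = 0; i < 8; i++)
        printf("%.0f ", data[i]);
    printf("\n");
    return 0;
}
```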

Systolic Array

Combination of SIMD and Pipeline parallelism

2-d array of processors with memory at the boundary

Tighter coordination between processors

Achieves very high speeds by circulating data among processors before returning it to memory
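A minimal C simulation of the data movement in a 1-D systolic array (the weights and input stream are arbitrary, and the per-step summation stands in for the flow of partial sums): each cell keeps one weight, and at every step the inputs shift one cell along while all cells fire in lock-step, so data circulates among the processors before any result returns to memory.

```c
/* 1-D systolic sketch: weights stay in the cells, inputs are pumped
   through a shift register, and each step yields one convolution
   output  y[t] = sum_c w[c] * x[t - c].                            */
#include <stdio.h>

#define CELLS 3
#define STEPS 8

int main(void) {
    double w[CELLS] = {0.25, 0.5, 0.25};        /* weights held in cells  */
    double cell[CELLS] = {0};                   /* input inside each cell */
    double x[STEPS] = {1, 2, 3, 4, 5, 6, 0, 0}; /* arbitrary input stream */

    for (int t = 0; t < STEPS; t++) {
        /* data circulates: each cell passes its input to the next */
        for (int c = CELLS - 1; c > 0; c--)
            cell[c] = cell[c - 1];
        cell[0] = x[t];

        /* all cells fire in lock-step (the SIMD flavour) */
        double y = 0.0;
        for (int c = 0; c < CELLS; c++)
            y += w[c] * cell[c];
        printf("step %d: y = %.2f\n", t, y);
    }
    return 0;
}
```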

MIMD (Multiple-Instruction-Multiple-Data)

Each processor (teller) operates independently

Needs a synchronization mechanism: message passing or mutual exclusion (locks)

Best suited for large-grained problems

Less fine-grained than data-flow parallelism
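A minimal distributed-memory MIMD sketch in C with MPI (the per-process work is an arbitrary placeholder): each process runs its own instruction stream on its own memory, and the processes synchronize only at the message-passing call.

```c
/* MIMD: independent processes coordinating by message passing.
   Build/run with e.g.:  mpicc mimd_sum.c && mpirun -np 4 ./a.out   */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank * rank;        /* placeholder for independent work */
    int total = 0;

    /* Synchronization happens here: partial results are combined
       on process 0 by message passing.                             */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of rank^2 over %d processes = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```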


Important Laws of Parallel Computing


Important Consequences

Amdahl’s Law gives the speedup on n processors when a fraction f of the computation is serial:

S(n) = n / (1 + (n - 1) f)

f = 0 (no serial part): S(n) = n, perfect speedup.

f = 1 (everything serial): S(n) = 1, no parallel code.


Important Consequences

S(n) increases as n increases.

S(n) decreases as f increases.

S(n) = n / (1 + (n - 1) f)


Important Consequences

No matter how many processors are used, the speedup cannot rise above 1/f:

S(n) = n / (1 + (n - 1) f) < 1/f

Examples: f = 5% gives S(n) < 20; f = 10% gives S(n) < 10; f = 20% gives S(n) < 5.
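A quick numeric check of this ceiling (a minimal C sketch; the processor counts 100 and 10000 are arbitrary):

```c
/* Amdahl's Law: S(n) = n / (1 + (n - 1) f), bounded above by 1/f. */
#include <stdio.h>

static double amdahl(double n, double f) {
    return n / (1.0 + (n - 1.0) * f);
}

int main(void) {
    double fractions[] = {0.05, 0.10, 0.20};   /* the slide's examples */
    for (int i = 0; i < 3; i++) {
        double f = fractions[i];
        /* Even with 100x more processors the speedup stalls near 1/f. */
        printf("f = %.2f: S(100) = %5.2f, S(10000) = %5.2f, 1/f = %.0f\n",
               f, amdahl(100, f), amdahl(10000, f), 1.0 / f);
    }
    return 0;
}
```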


Gustafson’s Law


Gustafson’s Speed-up

Let s and p be the serial and parallel fractions of the time T of a run on n processors. The same work would take sequential time (s + n p) T, while the parallel time is (s + p) T, so:

S(n) = Sequential Time / Parallel Time = (s + n p) / (s + p)

When s + p = 1:

S(n) = s + (1 - s) n = n - (n - 1) s

Important Consequences:

1) S(n) increases as n increases.

2) S(n) decreases as s increases.

3) There is no upper bound for the speedup.
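The contrast with Amdahl’s ceiling is easy to check numerically (a minimal C sketch; the serial fraction s = 0.05 and the processor counts are arbitrary):

```c
/* Gustafson's Law: S(n) = s + (1 - s) n = n - (n - 1) s.
   The scaled speedup keeps growing as processors are added.        */
#include <stdio.h>

int main(void) {
    double s = 0.05;                     /* arbitrary serial fraction */
    int counts[] = {10, 100, 1000};
    for (int i = 0; i < 3; i++) {
        int n = counts[i];
        printf("n = %4d: S(n) = %7.2f\n", n, n - (n - 1) * s);
    }
    return 0;
}
```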


To read:

1. John L. Gustafson, Re-evaluating Amdahl's Law,

http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html

2. Yuan Shi, Re-evaluating Amdahl's and Gustafson’s Laws,

http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html

3. Wilkinson’s book (recommended):

1. the sections on the laws of parallel computing

2. the sections about types of parallel machines and computation