LECTURE 7 · DR. SAMMAN H. AMEEN
Message Passing Models and Multicomputer Distributed Systems (2014)



PAGE 1

LECTURE 7
DR. SAMMAN H. AMEEN
Message Passing Models and Multicomputer Distributed Systems


PAGE 2

[Figure: nodes connected by a message-passing direct interconnection network]


PAGE 3

• Computers that rely on message passing for communication, rather than cache-coherent shared memory, are much easier for hardware designers to build.

• There is an advantage for programmers as well: communication is explicit, so there are fewer performance surprises than with implicit communication.

• The downside for programmers is that it is harder to port a sequential program to a message-passing computer, since every communication must be identified in advance or the program does not work.

• Cache-coherent shared memory lets the hardware figure out what data needs to be communicated, which makes porting easier. Given the pros and cons of implicit communication, opinions differ on which is the shortest path to high performance, but there is no confusion in the marketplace today: multicore microprocessors use shared physical memory, while the nodes of a cluster communicate with each other using message passing.


PAGE 4

• Advantages
  • Easier to build than scalable shared-memory machines
  • Easy to scale (but topology is important)
  • Programming model is further removed from basic hardware operations
  • Coherency and synchronization are the responsibility of the user, so the system designer need not worry about them

• Disadvantages
  • Large overhead: copying of buffers requires large data transfers (this will kill the benefits of multiprocessing if not kept to a minimum)
  • Programming is more difficult
  • The blocking nature of SEND/RECEIVE can cause increased latency and deadlock issues


PAGE 5

• Basic communication constructs/primitives:
  • Send(destination, message)
  • Receive(source, message)
  • Here, source/destination is a process name, link, mailbox, or port

• Issues in message-passing communication:
  • Direct vs. indirect
  • Blocking vs. non-blocking
  • Reliable vs. unreliable
  • Buffered vs. unbuffered
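These primitives can be sketched with in-process queues standing in for the transport. The process names, the mailbox structure, and the buffered/blocking behaviour shown are illustrative assumptions, not a real message-passing API:

```python
import queue
import threading

# One mailbox per destination process name (an illustrative model of
# direct, buffered communication; not taken from any real library).
mailboxes = {"p1": queue.Queue(), "p2": queue.Queue()}

def send(destination, message):
    mailboxes[destination].put(message)      # buffered send: returns at once

def receive(process_name):
    return mailboxes[process_name].get()     # blocking receive: waits for data

def p2_body():
    msg = receive("p2")                      # p2 blocks here until p1 sends
    print("p2 received:", msg)

t = threading.Thread(target=p2_body)
t.start()
send("p2", "hello")                          # Send(destination, message)
t.join()
```

Here the send is buffered (it returns immediately), while the receive is blocking; the later slides examine exactly these choices.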


PAGE 6

• Unique process name
  • This can be done by concatenating the host name and the process id to identify the source/destination

• Note that the formats of the send and receive primitives are symmetric

• There is bidirectional communication between source and destination; that is, for every send primitive there is a matching receive primitive

[Figure: two processes A and B exchanging messages]


PAGE 7

The format is symmetric, and communication is bidirectional (for every Send there is a Recv, and for every Recv a Send):

At p1@machine1:
    ...
    Send(p2@machine2, buffer[])
    ...

At p2@machine2:
    ...
    Recv(p1@machine1, buff[])
    ...


PAGE 8

• Receive a message from an unknown source
  • The source in the Receive primitive is a variable that gets its value (the sender id) when the message is received
  • Asymmetric send and receive primitives: only the sender needs to specify the receiver id, and the receiver may receive a message from any sender
  • Unidirectional path between sender and receiver: the receiver does not wait for a specific sender

[Figure: multiple processes (A, B, C) in a receive-from-any arrangement]


PAGE 9

variable = p1@machine1 on receipt of a message from p1@machine1
variable = p2@machine2 on receipt of a message from p2@machine2
variable = p3@machine3 on receipt of a message from p3@machine3
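The source variable described above can be modelled by having each message carry its sender's id, so a single receive call serves any sender. The names below (the shared inbox, the p*@machine* ids) are illustrative:

```python
import queue

# A shared inbox for the receiver; each message is tagged with the
# sender's id, so receive() takes no source argument (illustrative model).
inbox = queue.Queue()

def send(sender_id, message):
    inbox.put((sender_id, message))    # sender tags the message with its id

def receive():
    sender_id, message = inbox.get()   # 'sender_id' plays the role of the
    return sender_id, message          # source variable on the slide

send("p1@machine1", "request-1")
send("p2@machine2", "request-2")
who, msg = receive()                   # FIFO: first message is from p1@machine1
print(who, msg)
```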


PAGE 10

• This type of communication is used when sending processes are not concerned with the identity of receiving processes and receiving processes are interested only in the message and not the identity of sending processes.

• Example: Multiple clients may request services from one of the multiple servers.

[Figure: multiple clients and multiple servers connected by a network]

Any client request may be served by any available server, so the sender (client) and receiver (server) are concerned only with the data, and not with each other's identity.


PAGE 11

[Figure: source process → sending kernel → network → receiving kernel → destination process]

When a message is sent from the source process to the destination process, it is first passed to the sender's kernel, then over the communication network to the receiver's kernel, and finally to the destination process.


PAGE 12

• While sending a message, synchronization can take place at one or more of the following points:
  • Between the sender and the sending kernel
  • Between the sending kernel and the receiving kernel
  • Between the receiving kernel and the receiver

• Based on the above synchronization points, the send and receive primitives have different kinds of semantics.

• Buffering space is present in the sender's kernel, the receiver's kernel, and the communication network; these can be logically combined into a single buffer.


PAGE 13

• Blocking receive:
  • Receive has only one semantic: the process has to wait until a message is received

[Figure: the receiver blocks at Recv(...) until a matching Send arrives]


PAGE 14

• Non-blocking send:
  • The sender process is released after the message has been composed and copied into the sender's kernel
  • This is an asynchronous send
  • Buffer space is assumed to be unbounded
  • The sender process is blocked only if the sending kernel is not ready to accept the message
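The point at which a non-blocking send would stall can be sketched by modelling the sending kernel's buffer as a bounded queue. The capacity of 1 is an arbitrary illustrative choice; real kernels size their buffers differently:

```python
import queue

# The sending kernel's buffer, modelled as a bounded queue
# (capacity 1 is an illustrative assumption).
kernel_buffer = queue.Queue(maxsize=1)

def nonblocking_send(message):
    """Asynchronous send: succeeds only if the kernel can accept the message."""
    try:
        kernel_buffer.put_nowait(message)   # copy the message into the kernel
        return True
    except queue.Full:
        return False                        # here a real sender would block

ok1 = nonblocking_send("m1")   # True: the kernel buffer had space
ok2 = nonblocking_send("m2")   # False: buffer full, the sender must wait
kernel_buffer.get()            # the kernel drains the buffer onto the network
ok3 = nonblocking_send("m2")   # True: space is available again
print(ok1, ok2, ok3)
```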


PAGE 15

Blocking send:
– The sender process is released after the message has been transmitted into the communication network

Reliable blocking send:
– The sender process is released after the message has been received by the destination kernel

Explicit blocking send:
– The sender process is released after the message has been received by the destination (receiver) process

Request and reply:
– The sender process is released after the message has been received by the receiver and a response is returned to the sender
– Also called client-server communication


PAGE 16

SEND

[Figure: sending process → sending kernel → network → receiving kernel → receiving process; the marked interval shows where the sender is blocked]


PAGE 17

SEND

[Figure: non-blocking send; the sender is blocked only until the message reaches the sending kernel]


PAGE 18

SEND

[Figure: blocking send; the sender is blocked until the message reaches the network]


PAGE 19

SEND

[Figure: reliable blocking send; the sender is blocked until the message reaches the receiving kernel]


PAGE 20

SEND

[Figure: explicit blocking send; the sender is blocked until the message reaches the receiving process]


PAGE 21

SEND

[Figure: request and reply; the sender is blocked until the response from the receiver arrives back at the sender]

Summary (how long the sender is blocked):
• Non-blocking send: until the message reaches the sending kernel
• Blocking send: until the message reaches the network
• Reliable blocking send: until the message reaches the receiving kernel
• Explicit blocking send: until the message reaches the receiver
• Request and reply: until the response from the receiver reaches back to the sender


PAGE 22

• A socket is the communication endpoint of a communication link and is managed by the transport services.

• Sockets are used for communication between processes in heterogeneous domains.

• A socket is established by the socket system call, which returns a descriptor and informs the kernel about the protocol (TCP/UDP) the process is going to use.

• A socket association is described by the tuple (protocol, local address, local process/port, foreign address, foreign process/port).


PAGE 23

[Figure: TCP client-server sequence]

Server: socket(protocol) → Bind() to a local address + port → Listen(): ready to accept a connection → Accept(): block until a connection arrives from a client → Read() the request → process the request → Write() the reply.

Client: socket(protocol) → Connect() to the local address + port given by the system → Write() the request data → Read() the reply data.
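The sequence above can be run end to end in a few lines with the standard socket API. Binding to port 0 asks the OS for a free port; that choice, and the message contents, are illustrative, not from the lecture:

```python
import socket
import threading

# A minimal TCP request/reply pair following the slide's sequence.
def server_body(srv):
    conn, _ = srv.accept()                 # Accept(): block until a client connects
    request = conn.recv(1024)              # Read() the request
    conn.sendall(b"reply:" + request)      # Write() the reply
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                 # Bind() to a local address + port
srv.listen(1)                              # Listen(): ready to accept a connection
port = srv.getsockname()[1]
t = threading.Thread(target=server_body, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))           # Connect() to the server's address
cli.sendall(b"data")                       # Write() the request
reply = cli.recv(1024)                     # Read() the reply
print(reply)                               # b'reply:data'
cli.close()
t.join()
srv.close()
```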


PAGE 24

[Figure: connectionless (UDP) client-server sequence]

Server: socket(protocol) → Bind() → RecvFrom(): block until data is received from the client → SendTo() the reply.

Client: socket(protocol) → Bind() → SendTo() the request → RecvFrom() the reply.
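The connectionless case differs from the TCP version in that there is no Connect()/Accept(): each datagram carries the peer's address, which RecvFrom() reports to the receiver. Ports and payloads below are illustrative choices:

```python
import socket

# A minimal UDP exchange following the slide's sequence.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                   # server Bind(); OS picks a port
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))                   # client Bind()
client.sendto(b"ping", server_addr)             # SendTo(): no connection needed

data, client_addr = server.recvfrom(1024)       # RecvFrom(): blocks until data,
server.sendto(b"pong:" + data, client_addr)     # and reports the sender's address

reply, _ = client.recvfrom(1024)
print(reply)                                    # b'pong:ping'
server.close()
client.close()
```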


PAGE 25

• The idea is that a group of processes receives messages from a source:
  • If the group size = N, the communication is broadcasting
  • Else (group size < N), the communication is multicasting
  • N is the number of processes in the distributed system

• The source may or may not belong to the group of recipients


PAGE 26

• Unanimity:
  • A message is delivered either to all processes in the group or to none at all (atomicity property)

• Termination:
  • The result of a message submission to the broadcast server is known to the submitter within a finite time period

• Uniformity:
  • If a message is delivered to one process in a group, then it should be delivered to all the processes in that group

• Order:
  • The broadcast server (BS) delivers the messages to the application processes (APs) in the group based on some predefined ordering requirements


PAGE 27

• Wide area network (WAN): a WAN connects a large number of computers spread over large geographic distances. It can span sites in multiple cities, countries, and continents.

• Metropolitan area network (MAN): the MAN is an intermediate level between the LAN and the WAN and can perhaps span a single city.

• Local area network (LAN): a LAN connects a small number of computers in a small area within a building or campus.

• System or storage area network (SAN): a SAN connects computers or storage devices to make a single system.


PAGE 28

• A network channel c = (x, y) is characterized by:
  • width w_c: the number of parallel signals it contains
  • frequency f_c: the rate at which bits are transported on each signal
  • latency t_c: the time required for a bit to travel from x to y

• The bandwidth of a channel is W = w_c × f_c.

• The throughput Θ of a network is the data rate in bits per second that the network accepts per input port.

• Under a particular traffic pattern, the channel that carries the largest fraction of the traffic determines the maximum channel load γ. The load on a channel can be equal to or smaller than the channel bandwidth.

• Θ = W / γ
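A quick arithmetic check of these definitions, with made-up channel parameters and with γ read as the dimensionless share of injected traffic carried by the worst-case channel (an assumption; the slide leaves the units of γ open):

```python
# Illustrative numbers only: a 16-signal channel at 2 Gbit/s per signal,
# with the worst-case channel carrying twice the injected load.
w_c = 16                 # width: number of parallel signals
f_c = 2e9                # frequency: bits per second on each signal
W = w_c * f_c            # bandwidth W = w_c * f_c = 32 Gbit/s
gamma = 2.0              # maximum channel load (dimensionless, assumed)
theta = W / gamma        # throughput per input port: Theta = W / gamma
print(W, theta)          # 32000000000.0 16000000000.0
```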


PAGE 29

• Deterministic: the simplest algorithm; for each source/destination pair there is a single path. This routing algorithm usually achieves poor performance because it fails to use alternative routes and concentrates traffic on only one set of channels.

• Oblivious: so named because it ignores the state of the network when determining a path. Unlike a deterministic algorithm, it considers a set of paths from a source to a destination and chooses between them.

• Adaptive: the routing algorithm changes its choice of path based on the state of the network.
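As a concrete instance of the deterministic case, here is a sketch of dimension-order (XY) routing on a 2D mesh. The mesh topology and coordinate scheme are assumptions for illustration, not taken from the lecture:

```python
# Deterministic dimension-order (XY) routing on an assumed 2D mesh:
# every (source, destination) pair gets exactly one path.
def xy_route(src, dst):
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                    # first fully correct the X dimension
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                    # then correct the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)] : always the same single path
```

Because the path never depends on network state, all traffic between a given pair concentrates on one set of channels, which is exactly the weakness the slide describes.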


PAGE 30

• Message: the logical unit for internode communication

• Packet: the basic unit containing a destination address for routing
  • Packets carry sequence numbers for reassembly

• Flits: the flow-control digits that make up packets
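The hierarchy message → packets → flits can be sketched as below; the packet and flit sizes are arbitrary illustrative choices:

```python
# Split a message into packets (with sequence numbers for reassembly)
# and each packet into fixed-size flits (sizes are made-up examples).
def packetize(message, packet_size, flit_size):
    packets = []
    for seq, i in enumerate(range(0, len(message), packet_size)):
        payload = message[i:i + packet_size]
        flits = [payload[j:j + flit_size]
                 for j in range(0, len(payload), flit_size)]
        packets.append({"seq": seq, "flits": flits})
    return packets

pkts = packetize(b"abcdefghij", packet_size=4, flit_size=2)
print(pkts)
# pkts[0] == {'seq': 0, 'flits': [b'ab', b'cd']}
```

Concatenating the flits of the packets in sequence order recovers the original message, which is what the sequence numbers are for.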


PAGE 31

• Header flits contain routing information and the sequence number

• Flit (FLow control digIT) length is affected by network size

• Packet length is determined by the routing scheme and the network implementation

• Lengths are also dependent on channel bandwidth, router design, network traffic, etc.


PAGE 32

• 100 Gigabit Ethernet (100GbE) and 40 Gigabit Ethernet (40GbE) are groups of computer networking technologies for transmitting Ethernet frames at rates of 100 and 40 gigabits per second (100 and 40 Gbit/s), respectively. The technology was first defined by the IEEE 802.3ba-2010 standard.

• InfiniBand (abbreviated IB) is a computer network communications link used in high-performance computing featuring very high throughput and very low latency. It is used for data interconnect both among and within computers. As of 2014 it is the most commonly used interconnect in supercomputers.


PAGE 33

• Latency is one of the elements that determine network speed.

• The term latency refers to any kind of delay typically incurred in the processing of network data.

• A low-latency connection is one that generally experiences small delays, while a high-latency connection generally suffers from long delays.


PAGE 34

In the context of parallel computing, granularity is the ratio of communication time to computation time. Fine-grain parallelism is characterized by relatively more communication, as the computation time between communications is shorter. Coarse-grain parallelism, then, is characterized by relatively less communication with much longer computation time.

Load balance is easier to achieve with fine-grain parallelism, because small tasks depend less on the operating system, interrupts, and so on. Coarse-grain parallelism, conversely, makes it harder to predict when any given task will terminate, and therefore harder to assign tasks for optimal usage of the multiple processors.

Fine-grain parallelism requires more synchronization overhead, due to the need to communicate data and synchronize tasks among processors. The less frequent communication of coarse-grain parallelism therefore reduces overhead.
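The ratio at the top of this passage can be made concrete with made-up timings (the numbers are purely illustrative):

```python
# granularity = communication time / computation time (illustrative timings)
def granularity(comm_time, comp_time):
    return comm_time / comp_time

fine = granularity(comm_time=5.0, comp_time=10.0)     # 0.5: fine grain,
                                                      # communication looms large
coarse = granularity(comm_time=5.0, comp_time=500.0)  # 0.01: coarse grain,
                                                      # computation dominates
print(fine, coarse)
```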