Interprocess Communication (IPC)


Interprocess Communication

• Processes within a system may be independent or cooperating
• A cooperating process can affect or be affected by other processes, including by sharing data
• Reasons for cooperating processes:
  • Information sharing
  • Computation speedup
  • Modularity
  • Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC:
  • Shared memory
  • Message passing

Communications Models

Cooperating Processes
• An independent process cannot affect or be affected by the execution of another process
• A cooperating process can affect or be affected by the execution of another process
• Advantages of process cooperation:
  • Information sharing
  • Computation speed-up
  • Modularity
  • Convenience

Interprocess Communication – Message Passing
• Mechanism for processes to communicate and to synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• The IPC facility provides two operations:
  • send(message) – message size fixed or variable
  • receive(message)
• If P and Q wish to communicate, they need to:
  • establish a communication link between them
  • exchange messages via send/receive
• Implementation of the communication link:
  • physical (e.g., shared memory, hardware bus)
  • logical (e.g., logical properties)

Direct Communication

• Processes must name each other explicitly:
  • send(P, message) – send a message to process P
  • receive(Q, message) – receive a message from process Q
• Properties of the communication link:
  • Links are established automatically
  • A link is associated with exactly one pair of communicating processes
  • Between each pair there exists exactly one link
  • The link may be unidirectional, but is usually bi-directional

Indirect Communication

• Messages are sent to and received from mailboxes (also referred to as ports)
  • Each mailbox has a unique id
  • Processes can communicate only if they share a mailbox
• Properties of the communication link:
  • A link is established only if processes share a common mailbox
  • A link may be associated with many processes
  • Each pair of processes may share several communication links
  • A link may be unidirectional or bi-directional

Indirect Communication

• Operations:
  • create a new mailbox
  • send and receive messages through the mailbox
  • destroy a mailbox
• Primitives are defined as:
  • send(A, message) – send a message to mailbox A
  • receive(A, message) – receive a message from mailbox A

Indirect Communication
• Mailbox sharing:
  • P1, P2, and P3 share mailbox A
  • P1 sends; P2 and P3 receive
  • Who gets the message?
• Solutions:
  • Allow a link to be associated with at most two processes
  • Allow only one process at a time to execute a receive operation
  • Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

Synchronization

• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous:
  • A blocking send has the sender block until the message is received
  • A blocking receive has the receiver block until a message is available
• Non-blocking is considered asynchronous:
  • A non-blocking send has the sender send the message and continue
  • A non-blocking receive has the receiver receive a valid message or null
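The non-blocking receive above can be sketched with a POSIX pipe whose read end is switched to `O_NONBLOCK`: reading from the empty pipe then returns immediately with "no message" rather than blocking. The function name `nonblocking_receive_demo` is my own illustration, not from the slides.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int nonblocking_receive_demo(void) {
    int fd[2];
    if (pipe(fd) == -1)
        return -1;
    fcntl(fd[0], F_SETFL, O_NONBLOCK);   /* make the read end non-blocking */

    char c;
    ssize_t n = read(fd[0], &c, 1);      /* pipe is empty: returns at once */
    int would_block = (n == -1 && errno == EAGAIN);

    close(fd[0]);
    close(fd[1]);
    return would_block;   /* 1: receiver got "no message" instead of blocking */
}
```

Without the `fcntl` call, the same `read` would block until a writer put data into the pipe, i.e. it would behave as a blocking receive.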

Buffering
• A queue of messages is attached to the link; it is implemented in one of three ways:
  1. Zero capacity – holds 0 messages; the sender must wait for the receiver (rendezvous)
  2. Bounded capacity – finite length of n messages; the sender must wait if the link is full
  3. Unbounded capacity – infinite length; the sender never waits

Examples of IPC Systems – POSIX
• POSIX Shared Memory
• The process first creates a shared memory segment:
  segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
• A process wanting access to that shared memory must attach to it:
  shared_memory = (char *) shmat(id, NULL, 0);
• Now the process can write to the shared memory:
  sprintf(shared_memory, "Writing to shared memory");
• When done, a process can detach the shared memory from its address space:
  shmdt(shared_memory);
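The four calls above can be put together into one self-contained round trip: create, attach, write, read back, detach, and remove the segment. The function name `shm_round_trip` and the 4 KiB size are my own choices for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

int shm_round_trip(void) {
    /* create a private segment, readable and writable by the owner */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | S_IRUSR | S_IWUSR);
    if (id == -1)
        return -1;

    char *shared_memory = (char *)shmat(id, NULL, 0);   /* attach */
    if (shared_memory == (char *)-1)
        return -1;

    sprintf(shared_memory, "Writing to shared memory");
    int ok = strcmp(shared_memory, "Writing to shared memory") == 0;

    shmdt(shared_memory);           /* detach from this address space */
    shmctl(id, IPC_RMID, NULL);     /* mark the segment for removal */
    return ok ? 0 : -1;
}
```

In a real use of this mechanism, a second process would obtain the same `id` (e.g. via a shared key rather than `IPC_PRIVATE`) and attach to the segment to read what the first process wrote.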

Examples of IPC Systems – Windows XP
• Message-passing centric via the local procedure call (LPC) facility
  • Works only between processes on the same system
  • Uses ports (like mailboxes) to establish and maintain communication channels
• Communication works as follows:
  • The client opens a handle to the subsystem’s connection port object
  • The client sends a connection request
  • The server creates two private communication ports and returns the handle to one of them to the client
  • The client and server use the corresponding port handle to send messages or callbacks and to listen for replies

Local Procedure Calls in Windows XP

Signal

• A signal is an IPC mechanism used for signaling from a process A to the OS to enable the start of another process B.
• A signal is a one- or two-byte IPC from a process to the OS.
• Signals provide the shortest form of communication.
• The signal() function sends a one-bit output for a process, which unmasks a signal mask of a process or task (called the signal handler).
• The handler has code similar to that of an ISR and runs in a way similar to a highest-priority ISR.
• An ISR runs on a hardware interrupt provided that interrupt is not masked; a signal handler likewise runs on a signal provided that signal is not masked.
• When the IPC functions for signals are not provided by an OS, the OS employs semaphores for the same purpose.
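On a POSIX system, installing and triggering a handler can be sketched as follows; the handler runs asynchronously, much like the high-priority ISR described above. The names `on_sigusr1` and `signal_demo` are my own illustration.

```c
#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

/* The handler plays the role of the "signal handler" on the slide:
   it runs asynchronously when the signal is delivered. */
static void on_sigusr1(int signo) {
    (void)signo;
    got_signal = 1;
}

int signal_demo(void) {
    signal(SIGUSR1, on_sigusr1);  /* install the handler */
    raise(SIGUSR1);               /* send the signal to ourselves */
    return got_signal;            /* 1: the handler has run */
}
```

In a two-process setting, another process would deliver the signal with `kill(pid, SIGUSR1)` instead of `raise`.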

Signal function

• Task i sends signal s to initiate signal handler ISR j

PIPE
• Pipes are a byte-stream data-transfer IPC facility that connects processes; the byte stream written to one end of the pipe can be read from the other.
• Once created, pipes are referenced by file descriptor handles:

  #include <unistd.h>
  int pipe(int filedes[2]);

  • filedes[0] is open for reading (the read end)
  • filedes[1] is open for writing (the write end)
  • The output of filedes[1] is the input of filedes[0]
• Pipes are half-duplex
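A minimal single-process round trip shows the two descriptors in action: bytes written to `filedes[1]` come back out of `filedes[0]`. The function name `pipe_round_trip` is my own; normally the two ends would be used by different processes after a `fork()`.

```c
#include <string.h>
#include <unistd.h>

int pipe_round_trip(void) {
    int fd[2];
    if (pipe(fd) == -1)
        return -1;                  /* fd[0] = read end, fd[1] = write end */

    const char msg[] = "hello";
    if (write(fd[1], msg, sizeof msg) != (ssize_t)sizeof msg)
        return -1;                  /* bytes go into the kernel buffer... */

    char buf[sizeof msg];
    if (read(fd[0], buf, sizeof buf) != (ssize_t)sizeof msg)
        return -1;                  /* ...and come back out of the read end */

    close(fd[0]);
    close(fd[1]);
    return strcmp(buf, msg) == 0 ? 0 : -1;
}
```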

Pipes — intuition

• every read from a pipe copies data from kernel space to user space
• every write to a pipe copies data from user space to kernel space

Semaphores

• A semaphore is a protected variable whose value can be accessed and altered only by the operations P (wait) and V (signal), plus an initialization operation.
• wait() was called P (from the Dutch “proberen”, to test) and signal() was called V (from the Dutch “verhogen”, to increment).

• Types of semaphore:
1. Binary semaphore
  • Binary semaphores have two methods associated with them (lock, unlock).
  • Binary semaphores can take only two values (0/1). They are used to acquire locks: when a resource is available, the process in charge sets the semaphore to 1, else to 0.

Semaphores
2. Counting semaphore
  • A counting semaphore may have a value greater than one; it is typically used to allocate resources from a pool of identical resources.
3. Mutex semaphore
  • A mutual-exclusion (mutex) semaphore is a special binary semaphore that supports ownership, recursive access, and task-deletion safety.
  • Analogy: a key to a toilet. One person can have the key (occupy the toilet) at a time. When finished, the person gives (frees) the key to the next person in the queue.

Semaphores: Access
• wait: decrements the value of the semaphore variable by 1. If the new value is negative, the process executing wait is blocked (i.e., added to the semaphore's queue) until the value becomes non-negative again. Otherwise, the process continues execution, having claimed a unit of the resource.
• signal: increments the value of the semaphore variable by 1. If the pre-increment value was negative (meaning there are processes waiting for the resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.

Semaphores: Access

  wait(semaphore s) {
      while (s == 0)
          ;          /* busy-wait until s > 0 */
      s = s - 1;
  }

  signal(semaphore s) {
      s = s + 1;
  }
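The same P/V discipline is available directly through POSIX semaphores, which block instead of busy-waiting. A minimal sketch, assuming an unnamed semaphore and a function name (`sem_demo`) of my own choosing:

```c
#include <semaphore.h>

int sem_demo(void) {
    sem_t s;
    if (sem_init(&s, 0, 1) == -1)   /* binary semaphore, initial value 1 */
        return -1;

    sem_wait(&s);                   /* P: 1 -> 0, we now hold the "lock" */
    /* ... critical section ... */
    sem_post(&s);                   /* V: 0 -> 1 */

    int value;
    sem_getvalue(&s, &value);
    sem_destroy(&s);
    return value;                   /* back to 1 after a balanced wait/post */
}
```

With an initial value of N instead of 1, the same calls implement a counting semaphore guarding a pool of N identical resources.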

Sockets
• A socket is defined as an endpoint for communication
• Identified by the concatenation of an IP address and a port
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
• Communication takes place between a pair of sockets
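The pair-of-endpoints idea can be sketched without a network by using `socketpair`, which hands back two already-connected local sockets; this stands in for the IP:port pair described above, and the function name `socket_round_trip` is my own illustration.

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int socket_round_trip(void) {
    int sv[2];                      /* a pre-connected pair of local sockets */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;

    const char msg[] = "hello";
    if (send(sv[0], msg, sizeof msg, 0) != (ssize_t)sizeof msg)
        return -1;                  /* data leaves through one endpoint... */

    char buf[sizeof msg];
    if (recv(sv[1], buf, sizeof buf, 0) != (ssize_t)sizeof msg)
        return -1;                  /* ...and arrives at its peer */

    close(sv[0]);
    close(sv[1]);
    return strcmp(buf, msg) == 0 ? 0 : -1;
}
```

Internet sockets work the same way at the `send`/`recv` level, but the two endpoints are created with `socket`, `bind`, `listen`/`accept`, and `connect` instead of `socketpair`.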

Shared Memory
• Shared memory allows one or more processes to communicate via memory that appears in all of their virtual address spaces.

Shared Memory with Mutex

• In this design pattern, task #1 and task #2 access shared memory using a mutex for synchronization.
• Each task must first acquire the mutex before accessing the shared memory. The task blocks if the mutex is already locked, indicating that another task is accessing the shared memory. The task releases the mutex after it completes its operation on the shared memory.
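Using threads as the two "tasks", the pattern can be sketched with a pthread mutex protecting a shared counter; the names `worker` and `mutex_demo` and the iteration count are my own choices for illustration.

```c
#include <pthread.h>

static int shared_counter = 0;                              /* the shared memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;    /* protects it */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* block if the other task holds it */
        shared_counter++;             /* operate on the shared data */
        pthread_mutex_unlock(&lock);  /* release after the operation */
    }
    return NULL;
}

int mutex_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;   /* 200000 with the mutex; racy without it */
}
```

Removing the lock/unlock pair makes the increments race, and the final count typically falls short of 200000, which is exactly what the mutex prevents.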

Message Queues
• Message queues allow one or more processes to write messages that will be read by one or more reading processes.

Message Queues: Operation
• It is the kernel's job to assign a unique ID to a message queue and to create its queue control block (QCB) and task-waiting list. The kernel also takes developer-supplied parameters, such as the length of the queue and the maximum message length, to determine how much memory is required for the message queue. With this information, the kernel allocates memory for the message queue from either a pool of system memory or some private memory space.
• The message queue itself consists of a number of elements, each of which can hold a single message. The elements holding the first and last messages are called the head and tail, respectively.
• A message queue has two associated task-waiting lists. The receiving task-waiting list consists of tasks that wait on the queue when it is empty; the sending list consists of tasks that wait on the queue when it is full.
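On POSIX systems the same developer-supplied parameters (queue length, maximum message size) appear in `mq_open`. A minimal round trip, assuming a hypothetical queue name `/ipc_demo_queue` and function name `mq_round_trip` of my own choosing:

```c
#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

int mq_round_trip(void) {
    /* developer-supplied parameters: queue length and max message size */
    struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };

    mqd_t q = mq_open("/ipc_demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;

    if (mq_send(q, "ping", 5, 0) == -1)        /* enqueue at the tail */
        return -1;

    char buf[64];                               /* must be >= mq_msgsize */
    if (mq_receive(q, buf, sizeof buf, NULL) == -1)  /* dequeue from the head */
        return -1;

    mq_close(q);
    mq_unlink("/ipc_demo_queue");
    return strcmp(buf, "ping") == 0 ? 0 : -1;
}
```

With the queue full (`mq_maxmsg` messages pending), a further blocking `mq_send` would wait, mirroring the sending task-waiting list described above; a blocking `mq_receive` on an empty queue mirrors the receiving list.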

Deadlock

• A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.

Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously:
• Mutual exclusion: only one process at a time can use a resource.
• Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
• No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
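Breaking any one of the four conditions prevents deadlock. A common way to break circular wait, sketched here with my own names (`task`, `ordered_locking_demo`), is to impose a single global order in which all tasks acquire their locks:

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int completions = 0;

/* Both tasks take the locks in the same global order (a before b),
   so a cycle of waiting processes can never form. */
static void *task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    completions++;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int ordered_locking_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return completions;   /* 2: both tasks finished; no deadlock occurred */
}
```

If one task instead took `lock_b` before `lock_a`, each task could end up holding one lock while waiting for the other's, which is precisely the circular-wait condition above.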