Modified by Dr M A Berbar
Chapter 6: Process Synchronization
Background
The Critical-Section Problem
Synchronization Hardware
Semaphores
Classical Problems of Synchronization
Critical Regions
Monitors
Synchronization in Solaris 2 & Windows 2000
Background
Concurrent access to shared data may result in data inconsistency.
Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
The shared-memory solution to the bounded-buffer problem allows at most n – 1 items in the buffer at the same time. A solution where all n buffer slots are used is not simple.
Suppose that we modify the producer-consumer code by adding a variable counter, initially set to 0. It is incremented by the producer after it produces a new item and decremented by the consumer after it consumes an item.
Background: process synchronization
Concurrent access to shared data may result in data inconsistency.
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
The shared-memory solution to the bounded-buffer problem (Chapter 3) allows at most n – 1 of the n buffer locations to be filled at the same time. A solution where all n locations are used is not simple.
Problem: only n – 1 of the n buffer slots can ever be occupied.
Solution: add a shared variable counter that tracks the number of full slots, so all n slots can be used.
Bounded-Buffer – Producer Process from ch3
item nextProduced;
while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing; buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
[Figure: circular buffer with in and out pointers — empty buffer vs. full buffer; the producer adds at in, the consumer removes at out]
Bounded-Buffer
Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0, incremented each time a new item is added to the buffer and decremented each time one is removed.
Shared data:
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0; /* # of items in buffer */
Bounded-Buffer
Producer process
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE; // 0 <= in < BUFFER_SIZE
counter++;
}
Bounded-Buffer
Consumer process
item nextConsumed;
while (1) {
    while (counter == 0)
        ; /* do nothing; nothing in the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
Bounded-Buffer
Producer - Consumer
Producer process
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Consumer process
item nextConsumed;
while (1) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
Bounded Buffer
The statements counter++ and counter-- must be performed atomically.
An atomic operation is one that completes without interruption.
Race Condition
Bounded Buffer
The statement “counter++” may be implemented in machine language as:

register1 = counter
register1 = register1 + 1
counter = register1

The statement “counter--” may be implemented as:

register2 = counter
register2 = register2 – 1
counter = register2
Bounded Buffer
If both the producer and consumer attempt to update the buffer
concurrently, the assembly language statements may get
interleaved.
Interleaving depends upon how the producer and consumer
processes are scheduled.
Bounded Buffer
Assume counter is initially 5. One interleaving of statements is:

T0: producer: execute register1 = counter (register1 = 5)
T1: producer: execute register1 = register1 + 1 (register1 = 6)
T2: consumer: execute register2 = counter (register2 = 5)
T3: consumer: execute register2 = register2 – 1 (register2 = 4)
T4: producer: execute counter = register1 (counter = 6)
T5: consumer: execute counter = register2 (counter = 4)

The value of counter may end up either 4 or 6, where the correct result is 5.
Race Condition
Race condition: a situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends on which process finishes last.
To prevent race conditions, concurrent processes must be
synchronized.
The Critical-Section Problem
n processes all competing to use some shared data
Each process has a code segment, called critical
section, in which the shared data is accessed.
Problem – ensure that when one process is executing in
its critical section, no other process is allowed to execute
in its critical section.
Initial Attempts to Solve Problem
Only 2 processes, P0 and P1
General structure of process Pi (other process Pj)
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
Processes may share some common variables to synchronize their actions. The entry section is where a process requests permission to enter its critical section.
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed.
No assumption is made concerning the relative speed of the n processes.
Algorithm 1

Shared variables:
int turn; initially turn = 0
turn == i means Pi can enter its critical section

Process Pi:

do {
    while (turn != i)
        ; /* do nothing */
    critical section  /* turn == i */
    turn = j;  /* give permission to process Pj */
    /* this also prevents Pi from re-entering its CS even if Pj is not ready to enter its own */
    remainder section
} while (1);
Algorithm 1: satisfies mutual exclusion, but not progress
Satisfies mutual exclusion as this solution ensures that only one process at a time can be in its critical section.
Does not satisfy progress.
For example:
if turn == 0 and P1 is ready to enter its critical section again, P1 cannot do so, even though P0 may be in its remainder section.
The problem with algorithm 1 is that it does not retain sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section.
Algorithm 2

Shared variables:
boolean flag[2]; initially flag[0] = flag[1] = false
flag[i] == true means Pi is ready to enter its critical section

Process Pi:

do {
    flag[i] = true;
    while (flag[j])
        ; /* do nothing */
    critical section
    flag[i] = false;
    remainder section
} while (1);
Satisfies mutual exclusion, but not progress requirement.
Algorithm 2: satisfies mutual exclusion, but not the progress requirement.
For example:
T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true
Now P0 and P1 loop forever in their respective while statements.
This algorithm is crucially dependent on the exact timing of the two processes.
Peterson's Solution

Combines the shared variables of algorithms 1 and 2:
boolean flag[2]; initially flag[0] = flag[1] = false
int turn;

Process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* do nothing */
    critical section  /* the while condition is now false */
    flag[i] = false;
    remainder section
} while (1);
Meets all three requirements; solves the critical-section problem for two processes.
To prove property 1 (mutual exclusion): Pi enters its critical section only if either flag[j] == false or turn == i, i.e., only when the while condition is false. What about the scenario that broke algorithm 2?
T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true
The variable turn can be either 0 or 1, but not both, so at most one process gets past the while loop.
Property 2 (progress) is satisfied because P0 and P1 cannot both loop forever in their while statements, and a process that just left its critical section may re-enter it as long as the other process is not ready to enter its own.
Property 3 (bounded waiting) is satisfied because once Pi finishes its critical section and Pj has a pending request, Pj is granted entry before Pi can enter again.
Multiple-Process Solution: Bakery Algorithm
Before entering its critical section, process receives a number. Holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number, if i < j, then Pi is served first; else Pj is served first.
The numbering scheme always generates numbers in nondecreasing order of enumeration; e.g., 1,2,3,3,3,3,4,5...
Critical section for n processes
For Reading
Bakery Algorithm
Notation: < denotes lexicographical order on (ticket #, process id #):
(a,b) < (c,d) if a < c, or if a == c and b < d
max(a0, …, an-1) is a number k such that k >= ai for i = 0, …, n – 1
Shared data
boolean choosing[n];
int number[n];
Data structures are initialized to false and 0 respectively
For Reading
Bakery Algorithm
do {
    choosing[i] = true;  /* process i is ready to enter its critical section */
    /* then take a number; note: number[] is not sorted */
    number[i] = max(number[0], number[1], …, number[n – 1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ; /* process j is in the middle of taking a number */
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
            ; /* keep looping until Pj leaves its critical section */
    }
    /* arriving here means no process holds a number smaller than number[i] */
    critical section
    number[i] = 0;
    remainder section
} while (1);

After leaving its critical section, a process goes back around and checks again whether any process wants to enter its critical section with a number smaller than its own.
For Reading
What is missing?
“Real” solutions are based on stronger prerequisites than just
variable sharing. Besides, we don’t want to guess what is
“atomic” when programming. So, we want:
• Hardware support—special instructions.
• OS kernel provided synchronization primitives.
At the lowest level (hardware), there are two elementary
mechanisms:
– interrupt masking
– atomic read-modify-write
For Reading
A simple minded alternative
Let’s start by using hardware instructions to mask interrupts. If
we don’t let the CPU interrupt (i.e., take away the control from)
the current process, the solution for N processes would be as
simple as below:
Process i:
while (TRUE) {
    disableInterrupts();
    < critical section >
    enableInterrupts();
    ...
}
Unfortunately, there is only one system-wide critical section
active at a time. Why?
For Reading
Disadvantages of masking the interrupts
Masking the interrupts is not feasible in a multiprocessor environment.
Disabling interrupts on a multiprocessor can be time-consuming, as the message must be passed to all the processors.
This message passing delays entry into each critical section, and system efficiency decreases.
For Reading
Hardware Support

Many CPUs today provide hardware instructions to read, modify,
and store a word atomically. Most common instructions with this
capability are:
– TAS - test-and-set (Motorola 68K)
– CAS - compare-and-swap (IBM 370 and Motorola 68K)
– XCHG - exchange (x86)
The basic idea is to be able to read the contents of a variable (i.e., a memory location) and set it to something else, all in one execution cycle; hence the operation is not interruptible.
The use of these special instructions makes life easier!
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Synchronization Hardware

Test and modify the content of a word atomically:

// Test-and-Set
boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Returns the current value of the lock variable and sets it to true, atomically.
Solution using TestAndSet

Shared boolean variable lock, initialized to FALSE.

do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

One global lock variable is associated with each critical section (i.e., there can be many critical sections, each with its own lock).
Swap Instruction
Definition (executed atomically):

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution using Swap

Shared boolean variable lock, initialized to FALSE. Each process has a local boolean variable key.

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Mutual Exclusion with Swap
Shared data (initialized to false): boolean lock;

Process Pi:

do {
    key = true;
    while (key == true)
        Swap(&lock, &key);   /* atomic; keep looping until lock was false */
    critical section
    lock = false;
    remainder section
} while (1);
These algorithms do not satisfy the bounded-waiting requirement
Bounded-waiting mutual exclusion with TestAndSet

Shared data, both initialized to false:
boolean waiting[n];
boolean lock;

Recall that TestAndSet returns the current value of the global lock variable and sets it to true, atomically. The code, and the proof that mutual exclusion, progress, and bounded waiting are all met, follow.
Bounded-waiting Mutual Exclusion with TestAndSet()

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;

    // critical section

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;

    // remainder section
} while (TRUE);
Bounded-waiting mutual exclusion with TestAndSet

Pi can enter its critical section only if either waiting[i] == false or key == false, and key can become false only by executing TestAndSet. The first process to execute TestAndSet gets key == false (and lock is set to true), so all others must wait (mutual exclusion). The variable waiting[i] can become false only when another process leaves its critical section, and only one waiting[i] is set to false at a time, maintaining the mutual-exclusion requirement.
To prove the progress requirement is met, we note that the arguments presented for mutual exclusion also apply here, since a process exiting the critical section either sets lock to false, or sets waiting [j] to false. Both allow a process that is waiting to enter its critical section to proceed.
To prove the bounded-waiting requirement is met, we note that, when a process leaves its critical section, it scans the array waiting in the cyclic ordering (i + 1, i + 2,..., n - 1,0,..., i -1). It designates the first process in this ordering that is in the entry section (waiting [j] == true) as the next one to enter the critical section. Any process waiting to enter its critical section will thus do so within n -1 turns.
Unfortunately for hardware designers, implementing atomic TestAndSet instructions on multiprocessors is not a trivial task. Such implementations are discussed in books on computer architecture.
Semaphores
A synchronization tool that (in its blocking implementation, below) does not require busy waiting.
Semaphore S – an integer variable that can be accessed only via two indivisible (atomic) operations, wait() and signal(). Less complicated than the hardware-based solutions.

wait(S) {
    while (S <= 0)
        ; // do no-op
    S--;
}

signal(S) {
    S++;
}
[Figure: semaphore value transitions — wait(S) takes S from 1 to 0; signal(S) takes S from 0 back to 1]
Critical Section of n Processes

Mutual-exclusion implementation with semaphores.
Shared data:
semaphore mutex; //initially mutex = 1
Also known as mutex locks
Process Pi:
do {
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
} while (1);
Under the classical definition of semaphores with busy waiting, the semaphore value is never negative.
Semaphore Implementation
Must guarantee that no two processes can execute wait () and
signal () on the same semaphore at the same time
Thus, implementation becomes the critical section problem
where the wait and signal code are placed in the critical section.
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections
and therefore this is not a good solution.
The main disadvantage of previous algorithms
The main disadvantage of the mutual-exclusion solutions of previous
algorithms, and of the semaphore definition given here, is that they all
require busy waiting.
While a process is in its critical section, any other process that tries to
enter its critical section must loop continuously in the entry code.
This continual looping is clearly a problem in a real multiprogramming
system, where a single CPU is shared among many processes. Busy
waiting wastes CPU cycles that some other process might be able to
use productively.
This type of semaphore is also called a spinlock (because the process "spins" while waiting for the lock).
When are spinlocks useful?
Spinlocks are useful in multiprocessor systems.
The advantage of a spinlock is that no context switch is required
when a process must wait on a lock, and a context switch may
take considerable time.
Thus, when locks are expected to be held for short times,
spinlocks are useful.
To overcome the need for busy waiting
When a process executes the wait operation and finds that the semaphore value is not positive, it must wait. However, rather than busy waiting, the process can block itself…how ?
The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then, control is transferred to the CPU scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal operation at the end of its critical section.
The process is restarted by a wakeup operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue.
(The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.)
Busy-wait Problem
Definition : Processes waste CPU cycles while waiting to enter their critical sections
Solution:
1. Modify the wait operation into a block operation: the process blocks itself rather than busy-waiting.
2. Place the blocked process into a waiting queue associated with the semaphore.
3. Modify the signal operation into a wakeup operation.
4. wakeup changes the state of a blocked process from waiting to ready.
Semaphore Implementation with no Busy Waiting

With each semaphore there is an associated waiting queue. Each semaphore has two data items:
an integer value
a pointer to a list of processes (the waiting queue)
Two operations: block – place the process invoking the operation on the
appropriate waiting queue. (suspends the process that invokes it.)
wakeup – remove one of processes in the waiting queue and place it in the ready queue.
wakeup(P) resumes the execution of a blocked process P.
Implementation
Each semaphore has an integer value and a list of processes. When a
process must wait on a semaphore, it is added to the list of processes.
A signal operation removes one process from the list of waiting
processes and awakens that process.
Semaphore Implementation with no Busy waiting (Cont.)
Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Note: this implementation may have negative semaphore values. If the semaphore value is negative, its magnitude is the number of processes waiting on that semaphore's list.
Each semaphore contains an integer value and a pointer to a list of PCBs. One way to add and remove processes from the list, that ensures bounded waiting would be to use a FIFO queue, where the semaphore contains both head and tail pointers to the queue
The critical aspect of semaphores is that wait and signal operations are executed atomically.
Ensuring this atomicity can be done in either of two ways.
In a uniprocessor environment, we can simply inhibit interrupts while the wait and signal operations are executing; instructions from different processes cannot then be interleaved until interrupts are re-enabled and the scheduler regains control.
In a multiprocessor environment, inhibiting interrupts does not work. Instructions from different processes (running on different processors) may be interleaved in some arbitrary way. If the hardware does not provide any special instructions, we can employ any of the correct software solutions for the critical-section problem, where the critical sections consist of the wait and signal procedures
Semaphore as a General Synchronization Tool
Execute B in Pj only after A executed in Pi
Use a semaphore flag initialized to 0. Code:

Pi:            Pj:
A              wait(flag);
signal(flag);  B
We have removed busy waiting from the entry to the critical sections of application programs. Furthermore, we have limited busy waiting to only the critical sections of the wait and signal operations, and these sections are short (if properly coded, they should be no more than about 10 instructions).
Thus busy waiting occurs rarely, and then only for a short time. Application programs, by contrast, may have long critical sections (minutes or even hours), where busy waiting would be unacceptable.
Deadlock and Starvation

Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, both set to the value 1:

P0:           P1:
wait(S);      wait(Q);
wait(Q);      wait(S);
  ...           ...
signal(S);    signal(Q);
signal(Q);    signal(S);
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal operations can never be executed, P0 and P1 are deadlocked.
Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
Another problem related to deadlocks is indefinite blocking or starvation, a situation where processes wait indefinitely within the semaphore.
Priority Inversion - Scheduling problem when lower-priority process holds a lock needed by higher-priority process
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Bounded-Buffer Problem

Shared data:
The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1.
The empty and full semaphores count the number of empty and full buffer slots.
Initially:
full = 0, empty = n, mutex = 1
Producer process:

do {
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
    ...
} while (1);
Initially: full = 0, empty = n, mutex = 1
Tracing the semaphore operations (using the blocking implementation of wait and signal):

Producer:
wait(empty): empty--. If empty >= 0 after the decrement, an empty slot was available; if empty < 0, the buffer is full and the producer joins the waiting queue of the empty semaphore.
wait(mutex): mutex--. If mutex == 0, no contention; if mutex < 0, the buffer is being used by the consumer, so the producer joins the waiting queue of the mutex semaphore.
signal(full): increments full because an item has been added.

Consumer:
wait(full): full--. If full >= 0 after the decrement, there was at least one item; otherwise the consumer waits because the buffer is empty.
wait(mutex): mutex--. If mutex == 0, no contention; if mutex < 0, the buffer is being used by the producer, so the consumer joins the waiting queue of the mutex semaphore.
Readers-Writers Problem

The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. All readers waiting when a writer finishes are given priority over the next writer.
The second readers-writers problem requires that, if a writer is waiting to access the object, no new readers may start reading.
A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve.
Readers-Writers Problem
Shared data:
semaphore mutex, wrt;
int readCount;

Initially:
mutex = 1, wrt = 1, readCount = 0
Readers-Writers Problem

Reader process:

wait(mutex);
readCount++;
if (readCount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readCount--;
if (readCount == 0)
    signal(wrt);
signal(mutex);

Writer process:

wait(wrt);
...
writing is performed
...
signal(wrt);
Initially mutex = 1, wrt = 1, readCount = 0.

The mutex semaphore ensures mutual exclusion when the variable readCount is updated.
readCount keeps track of how many processes are currently reading the object: readCount == 1 when the first reader tries to enter the critical section, and readCount == 0 when the last reader exits it.
Reader, wait(mutex): mutex--. If mutex == 0, no contention; if mutex < 0, another reader is updating readCount, so the process joins the waiting queue of the mutex semaphore.
First reader, wait(wrt): wrt--. If wrt == 0, no writer is working; if wrt < 0, the object is being used by a writer, so the reader joins the waiting queue of the wrt semaphore. If a writer is in the critical section and n readers are waiting, then one reader is queued on wrt and n – 1 readers are queued on mutex.
Writer, wait(wrt): wrt--. If wrt == 0, no contention; if wrt < 0, the object is being used by another writer or by readers, so the writer joins the waiting queue of the wrt semaphore.
signal(wrt) executed by the last reader releases a waiting writer; signal(wrt) executed by a writer releases a waiting writer or the first waiting reader.
The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first reader that enters, and the last reader that exits, the critical section. It is not used by readers that enter or exit while other readers are in their critical sections.
Dining-Philosophers Problem

Shared data: semaphore chopstick[5];
Initially all values are 1.

• A simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.
• One simple solution is to represent each chopstick by a semaphore.
• A philosopher grabs a chopstick by executing a wait operation on that semaphore, and releases a chopstick by executing a signal operation on the appropriate semaphore.
Dining-Philosophers Problem

Shared data: bowl of rice (data set); semaphore chopstick[5], all initialized to 1.

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);

wait(chopstick[i]) decrements and tests the semaphore: is another philosopher already using chopstick[i]?
Problems with Semaphores
Incorrect use of semaphore operations:
signal(mutex) … critical section … wait(mutex) — mutual exclusion is violated
wait(mutex) … critical section … wait(mutex) — deadlock
Omitting wait(mutex) or signal(mutex) (or both)