Concurrency and synchronisation patterns


Concurrency and synchronisation patterns

based on Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2, by Douglas Schmidt, Michael Stal, Hans Rohnert and Frank Buschmann

Design Pattern

• A design pattern is a general reusable solution to a commonly occurring problem in software design.

Agenda

• Synchronisation patterns
  – Scoped Locking
  – Strategized Locking
  – Thread-Safe Interface

• Concurrency patterns
  – Active Object
  – Leader/Followers

Scoped Locking

The Scoped Locking C++ idiom ensures that a lock is acquired when control enters a scope and released automatically when control leaves the scope, regardless of the return path from the scope.
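In modern C++ the standard library provides this idiom directly via std::lock_guard. A minimal sketch (the names add_entry and shared_table are illustrative, not from the pattern's original code):

```cpp
#include <mutex>
#include <stdexcept>
#include <vector>

std::mutex table_lock;          // protects shared_table
std::vector<int> shared_table;

void add_entry (int value) {
  // The lock is acquired here and released on every exit path --
  // normal return or exception -- which is exactly Scoped Locking.
  std::lock_guard<std::mutex> guard (table_lock);
  if (value < 0)
    throw std::invalid_argument ("negative value");
  shared_table.push_back (value);
}
```

Because release happens in the guard's destructor, the mutex is unlocked even when the exception is thrown, so a later call can still acquire it.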

Scoped Locking - Implementation

class Thread_Mutex_Guard {
public:
  Thread_Mutex_Guard (Thread_Mutex &lock)
    : lock_ (&lock), owner_ (false) {
    lock_->acquire ();
    owner_ = true;
  }
  ~Thread_Mutex_Guard () {
    if (owner_) lock_->release ();
  }
private:
  Thread_Mutex *lock_;
  bool owner_;
  // Disallow copying and assignment.
  Thread_Mutex_Guard (const Thread_Mutex_Guard &);
  void operator= (const Thread_Mutex_Guard &);
};

Scoped Locking – Explicit Accessors

class Thread_Mutex_Guard {
public:
  Thread_Mutex_Guard (Thread_Mutex &lock)
    : lock_ (&lock), owner_ (false) {
    acquire ();
  }
  ~Thread_Mutex_Guard () {
    release ();
  }
  void acquire () {
    lock_->acquire ();
    owner_ = true;
  }
  void release () {
    if (owner_) {
      owner_ = false;
      lock_->release ();
    }
  }
private:
  Thread_Mutex *lock_;
  bool owner_;
};

Scoped Locking – Example

class Test {
public:
  void test_function_1 () {
    Thread_Mutex_Guard guard (lock_);
    // do something
  }
  void test_function_2 () {
    Thread_Mutex_Guard guard (lock_);
    // do something
  }
private:
  Thread_Mutex lock_;
};

Scoped Locking - Consequences

• Benefits
  – Increased robustness.

• Liabilities
  – Potential for deadlock when used recursively.
  – Limitations with language-specific semantics, e.g. the guard's destructor is not run when the thread is cancelled, so the lock is never released:

    Thread_Mutex_Guard guard (lock_);
    Table_Entry *entry = lookup_or_create (path);
    if (entry == 0)
      // guard's destructor does not run -- the lock stays held
      pthread_cancel (pthread_self ());

  – Excessive compiler warnings.
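The recursive-deadlock liability arises when one guarded method calls another guarded method on the same object. A hypothetical sketch of the safe variant: std::recursive_mutex tolerates the nested acquisition that would self-deadlock with a plain std::mutex (the Table class and its methods are illustrative):

```cpp
#include <mutex>

class Table {
public:
  int insert (int value) {
    std::lock_guard<std::recursive_mutex> guard (lock_);
    // With a plain std::mutex this nested call would self-deadlock,
    // because lookup() reacquires the lock this thread already holds.
    if (lookup (value)) return 0;   // duplicate: reject
    last_ = value;
    return 1;
  }
  bool lookup (int value) {
    std::lock_guard<std::recursive_mutex> guard (lock_);
    return last_ == value;
  }
private:
  std::recursive_mutex lock_;
  int last_ = -1;
};
```

The Thread-Safe Interface pattern below is the usual way to avoid needing a recursive lock at all.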

Strategized Locking

Similar to Scoped Locking, but a templated or polymorphic lock object is passed to the guard constructor.

class Lock {
public:
  virtual void acquire () = 0;
  virtual void release () = 0;
};

Strategized Locking - Example

class Lock {
public:
  virtual void acquire () = 0;
  virtual void release () = 0;
};

class Thread_Mutex_Lock : public Lock {
public:
  virtual void acquire () { lock_.acquire (); }
  virtual void release () { lock_.release (); }
private:
  Thread_Mutex lock_;
};
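A guard written against the polymorphic Lock base class can then be strategized at run time. A minimal self-contained sketch, assuming std::mutex as the underlying lock; Null_Lock and critical_section are hypothetical names added for illustration:

```cpp
#include <mutex>

class Lock {
public:
  virtual ~Lock () {}
  virtual void acquire () = 0;
  virtual void release () = 0;
};

class Thread_Mutex_Lock : public Lock {
public:
  virtual void acquire () { lock_.lock (); ++acquires; }
  virtual void release () { lock_.unlock (); }
  int acquires = 0;   // instrumentation, for demonstration only
private:
  std::mutex lock_;
};

class Null_Lock : public Lock {   // no-op strategy for single-threaded use
public:
  virtual void acquire () {}
  virtual void release () {}
};

class Guard {   // Scoped Locking over any Lock strategy
public:
  Guard (Lock &lock) : lock_ (lock) { lock_.acquire (); }
  ~Guard () { lock_.release (); }
private:
  Lock &lock_;
  Guard (const Guard &);            // disallow copying
  void operator= (const Guard &);   // and assignment
};

int critical_section (Lock &lock) {
  Guard guard (lock);   // the caller chose the strategy
  return 42;            // stand-in for the protected work
}
```

The same function body runs under real locking or no locking depending solely on which strategy object the caller passes in.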


Strategized Locking - Example

template <typename T>
class guard_t {
public:
  guard_t (T &_protector) : protector (_protector), locked (false) {
    acquire ();
  }
  virtual ~guard_t () {
    if (locked) protector.release ();
  }
  void acquire () {
    protector.acquire ();
    locked = true;
  }
  void release () {
    if (locked) {
      protector.release ();
      locked = false;
    }
  }
private:
  bool locked;
  T &protector;
  guard_t (const guard_t &orig);   // disallow copying
};

Strategized Locking – Example

class Test {
public:
  void test_function_1 () {
    guard_t<Thread_Mutex> guard (lock_);
    // do something
  }
  void test_function_2 () {
    guard_t<Thread_Mutex> guard (lock_);
    // do something
  }
private:
  Thread_Mutex lock_;
};

Strategized Locking - Example

Alternative instantiations of the same component, each selecting a locking strategy at compile time (only one would be chosen in a given configuration):

typedef File_Cache<Null_Mutex> Content_Cache;      // single-threaded
typedef File_Cache<Thread_Mutex> Content_Cache;    // mutual exclusion
typedef File_Cache<RW_Lock> Content_Cache;         // readers/writer locking
typedef File_Cache<Semaphore_Lock> Content_Cache;  // semaphore-based locking
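The idea behind these typedefs can be sketched with standard C++ alone; File_Cache below is a deliberately simplified stand-in for the book's component, and Null_Mutex is a no-op strategy for single-threaded builds:

```cpp
#include <map>
#include <mutex>
#include <string>

struct Null_Mutex {               // no-op strategy: single-threaded builds
  void lock () {}
  void unlock () {}
};

template <class LOCK>
class File_Cache {
public:
  void insert (const std::string &path) {
    std::lock_guard<LOCK> guard (lock_);   // strategy fixed at compile time
    cache_[path] = true;
  }
  bool lookup (const std::string &path) {
    std::lock_guard<LOCK> guard (lock_);
    return cache_.count (path) != 0;
  }
private:
  LOCK lock_;
  std::map<std::string, bool> cache_;
};

// No run-time dispatch: each instantiation inlines its strategy.
typedef File_Cache<Null_Mutex> Single_Threaded_Cache;
typedef File_Cache<std::mutex> Thread_Safe_Cache;
```

With Null_Mutex the compiler can optimize the locking away entirely, which is the "decreased maintenance effort" and "improved reuse" benefit listed below.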

Strategized Locking - Consequences

• Benefits– Enhanced flexibility and customization.– Decreased maintenance effort for components.– Improved reuse.

• Liabilities– Obtrusive locking.– Over-engineering.

Thread-Safe Interface

The Thread-Safe Interface design pattern minimizes locking overhead and ensures that intra-component method calls do not incur 'self-deadlock' by trying to reacquire a lock that is held by the component already.

Thread-Safe Interface – Example

template <class LOCK>
class File_Cache {
public:
  // Interface methods acquire the lock once, then forward to
  // implementation methods that assume the lock is already held.
  const void *lookup (const string &path) {
    Guard<LOCK> guard (lock_);
    return lookup_i (path);
  }
  void insert (const string &path) {
    Guard<LOCK> guard (lock_);
    insert_i (path);
  }
private:
  LOCK lock_;
  // Implementation methods never acquire the lock, so they can
  // call each other freely without self-deadlock.
  const void *lookup_i (const string &path) {
    const void *file_pointer = check_cache_i (path);
    if (file_pointer == 0) {
      insert_i (path);
      file_pointer = check_cache_i (path);
    }
    return file_pointer;
  }
  const void *check_cache_i (const string &) { /* ... */ return 0; }
  void insert_i (const string &) { /* ... */ }
};

Consequences

• Benefits– Increased robustness.– Enhanced performance.– Simplification of software.

• Liabilities– Additional indirection and extra methods.– Potential deadlock.– Potential for misuse.– Potential overhead.

Active Object

The Active Object design pattern decouples method execution from method invocation to enhance concurrency and simplify synchronized access to objects that reside in their own threads of control.

Active Object

• Methods invoked on an object concurrently should not block the entire process,

• Synchronized access to shared objects should be simple,

• Applications should be designed to transparently leverage the parallelism available on a hardware/software platform
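These requirements can be sketched in C++11 by collapsing the pattern's proxy, scheduler, activation queue and servant roles into one class; ActiveCounter below is a hypothetical example, not the book's code. Callers enqueue method requests and immediately receive futures, while a single worker thread executes the requests in the object's own thread of control:

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

class ActiveCounter {
public:
  ActiveCounter () : done_ (false), worker_ (&ActiveCounter::run, this) {}
  ~ActiveCounter () {
    { std::lock_guard<std::mutex> g (m_); done_ = true; }
    cv_.notify_one ();
    worker_.join ();
  }
  // Invocation: enqueue a method request and return a future.
  // The caller is never blocked by the method's execution.
  std::future<int> increment () {
    auto task = std::make_shared<std::packaged_task<int ()>> (
        [this] { return ++count_; });   // runs only on worker_
    std::future<int> f = task->get_future ();
    { std::lock_guard<std::mutex> g (m_);
      queue_.push ([task] { (*task) (); }); }
    cv_.notify_one ();
    return f;
  }
private:
  // Execution: the scheduler loop in the object's own thread.
  void run () {
    for (;;) {
      std::function<void ()> job;
      { std::unique_lock<std::mutex> g (m_);
        cv_.wait (g, [this] { return done_ || !queue_.empty (); });
        if (done_ && queue_.empty ()) return;
        job = queue_.front (); queue_.pop (); }
      job ();   // execute outside the lock
    }
  }
  int count_ = 0;   // touched only by worker_, so it needs no lock
  bool done_;
  std::queue<std::function<void ()>> queue_;
  std::mutex m_;
  std::condition_variable cv_;
  std::thread worker_;   // declared last: starts after other members
};
```

Because a single scheduler thread drains the queue, requests execute in FIFO order here; a real scheduler may reorder them, which is the "execution order can differ from invocation order" point below.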

Active Object

[structure and collaboration diagrams from the original slides are not reproduced here]

Consequences

• Benefits– Enhance application concurrency and simplify synchronization complexity,

– Transparently leverage available parallelism,

– Method execution order can differ from method invocation order

• Liabilities– Performance overhead.

– Complicated debugging.

Leader/Followers

The Leader/Followers design pattern provides a concurrency model where multiple threads can efficiently demultiplex events and dispatch event handlers that process I/O handles shared by the threads.

Leader/Followers

• Efficient demultiplexing of I/O handles and threads,

• Minimize concurrency-related overhead,

• Prevent race conditions
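A highly simplified sketch of the thread discipline, assuming a condition variable in place of a real I/O demultiplexer such as select(); the LFPool class and its counters are illustrative. One thread at a time (the leader) holds the lock and takes the next event; releasing the lock promotes a follower to leader while the old leader processes its event concurrently:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class LFPool {
public:
  LFPool (int nthreads, std::atomic<int> &processed)
      : processed_ (processed) {
    for (int i = 0; i < nthreads; ++i)
      threads_.emplace_back (&LFPool::work, this);
  }
  ~LFPool () {   // drain remaining events, then stop all threads
    { std::lock_guard<std::mutex> g (m_); shutdown_ = true; }
    cv_.notify_all ();
    for (std::thread &t : threads_) t.join ();
  }
  void enqueue (int event) {
    { std::lock_guard<std::mutex> g (m_); events_.push (event); }
    cv_.notify_one ();
  }
private:
  void work () {
    for (;;) {
      int event;
      { std::unique_lock<std::mutex> g (m_);
        // Followers block here; the one thread holding the lock
        // is the leader and dequeues the next event itself --
        // events are never handed between threads.
        cv_.wait (g, [this] { return shutdown_ || !events_.empty (); });
        if (shutdown_ && events_.empty ()) return;
        event = events_.front (); events_.pop ();
      } // unlocking promotes the next follower to leader
      ++processed_;   // "process" the event outside the lock
      (void) event;
    }
  }
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<int> events_;
  bool shutdown_ = false;
  std::vector<std::thread> threads_;
  std::atomic<int> &processed_;
};
```

Note that no event data is passed from one thread to another: the thread that detects an event is the thread that processes it, which is the source of the cache-affinity and low-locking benefits listed below.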

Leader/Followers

[structure and dynamics diagrams from the original slides are not reproduced here]

Consequences

• Benefits– Performance enhancements.

• It enhances CPU cache affinity and eliminates unbounded allocation and data-buffer sharing between threads,

• It minimizes locking overhead by not exchanging data between threads,

• It can minimize priority inversion because no extra queuing is introduced in the server,

• It does not require a context switch to handle each event

– Programming simplicity.

• Liabilities– Implementation complexity.

– Lack of flexibility.

– Network I/O bottlenecks.