Study of Hurricane and Tornado Operating Systems
By Shubhanan Bakre
Agenda
• A brief introduction
• A brief idea about the underlying hardware
• Architecture of Hurricane and Tornado
• Scheduling
• Processes in Hurricane and Tornado
• Memory Management
• File System Management
• Input/Output Management
Introduction
• Hurricane and Tornado are microkernel-based operating systems.
• Hurricane is built for the Hector multiprocessor.
• Tornado is built for the NUMAchine multiprocessor.
• Both operating systems were built by the Parallel Systems Group at the University of Toronto.
The Hector Multiprocessor
The NUMAchine
• Consists of 64 processors
• Cache coherence hardware is present
• Processors are connected in a two-level hierarchy
Architecture of Hurricane and Tornado
• The operating system is divided into clusters.
• A cluster is made up of three layers.
• A cluster can span more than one processor.
• All operating system services are replicated to each cluster.
• Clustering provides modularity to the operating system.
• Clustering improves the scalability of the operating system.
Structure of a Cluster
Hierarchical Clustering
• Hurricane and Tornado use a single level of clustering
• It provides the user with an integrated view of the whole system
• Achieves tight coupling within the cluster
• Maintains loose coupling among the clusters
• It aims at achieving locality for data and code
• Improves concurrency
Hierarchical Clustering
• A cluster size of one mimics a distributed system
• A cluster size equal to the whole system size behaves like a shared-memory system
• Changing the cluster size affects system behavior
• Contention puts an upper bound on the size of a cluster
• The need for consistency puts a lower bound on the cluster size
Coupling vs. Cost
Scheduling
• Two-level scheduling is used
• Within the cluster, the dispatcher is responsible for scheduling
• The second level takes care of load balancing
• It also decides the placement policy
• It makes decisions on migration and replication
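The two levels above can be sketched roughly as follows. This is only an illustrative model, not Hurricane's actual code; the names (`Cluster`, `dispatch`, `balance`) are invented for the example.

```python
# Sketch of two-level scheduling: per-cluster dispatchers serve local
# run queues; a second level rebalances load across clusters.
from collections import deque

class Cluster:
    def __init__(self, name):
        self.name = name
        self.run_queue = deque()

    def dispatch(self):
        """First level: pick the next task within this cluster."""
        return self.run_queue.popleft() if self.run_queue else None

def balance(clusters):
    """Second level: migrate one task from the busiest to the idlest cluster."""
    busiest = max(clusters, key=lambda c: len(c.run_queue))
    idlest = min(clusters, key=lambda c: len(c.run_queue))
    if len(busiest.run_queue) - len(idlest.run_queue) > 1:
        idlest.run_queue.append(busiest.run_queue.pop())

a, b = Cluster("A"), Cluster("B")
a.run_queue.extend(["t1", "t2", "t3"])
balance([a, b])          # second level migrates t3 from A to B
print(a.dispatch())      # first level runs t1 locally on A
```

The point of the split is that the common case (dispatch) touches only cluster-local state, while cross-cluster decisions happen rarely, at the second level.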
Processes
• Management of the shared memory.
• Use of locking mechanisms to provide mutual exclusion.
• Processes can migrate to other clusters.
• Lock contention is reduced due to clustering.
Processes
• Uses a hybrid lock, which provides:
– Low latency where a coarse-grained lock is used.
– High concurrency where a fine-grained lock is used.
• Uses a deadlock-avoidance strategy for handling deadlocks.
• Inter-process communication takes place:
– Within the cluster using protected procedure calls.
– Between the clusters using message passing.
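The coarse/fine trade-off behind the hybrid lock can be sketched as below. This is a simplified illustration, not Hurricane's actual locking code: the contention counter, threshold, and class names are all invented for the example.

```python
# Sketch of a hybrid lock: one coarse lock covers the whole table for
# low-latency access; per-bucket fine-grained locks take over once
# observed contention crosses a threshold.
import threading

class HybridTable:
    THRESHOLD = 3  # invented contention level for switching lock granularity

    def __init__(self, nbuckets=4):
        self.coarse = threading.Lock()
        self.fine = [threading.Lock() for _ in range(nbuckets)]
        self.contention = 0
        self.buckets = [{} for _ in range(nbuckets)]

    def _lock_for(self, key):
        # Low contention: one coarse lock (cheap to take, serializes everything).
        # High contention: one lock per bucket (more concurrency, more overhead).
        if self.contention < self.THRESHOLD:
            return self.coarse
        return self.fine[hash(key) % len(self.fine)]

    def put(self, key, value):
        lock = self._lock_for(key)
        if not lock.acquire(blocking=False):
            self.contention += 1   # someone else held it: record contention
            lock.acquire()
        try:
            self.buckets[hash(key) % len(self.buckets)][key] = value
        finally:
            lock.release()

t = HybridTable()
t.put("x", 1)   # uncontended, so the coarse lock is used
```

Clustering helps here because each cluster has its own instance of such structures, so fewer processors ever contend on the same lock.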
Processes: Protected Procedure Calls (PPC)
• The aim of PPC is to:
– Avoid accessing shared regions.
– Avoid acquiring locks.
• The server acts as a passive object.
• The client moves from one address space to another.
• A worker thread within the server takes care of the client request.
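The idea can be sketched as follows. This is only an analogy: in Hurricane the client's thread of control actually crosses into the server's address space, which Python cannot express; here a worker thread standing in for the client runs the passive server's handler directly. All names are invented.

```python
# Sketch of a protected procedure call (PPC): the server is a passive
# object with no threads of its own; the request is serviced by a worker
# thread acting on the client's behalf, with no shared queue or lock.
import threading

class PassiveServer:
    """Holds state and handlers, but owns no threads."""
    def __init__(self):
        self.state = {"count": 0}

    def handle(self, request):
        self.state["count"] += 1
        return f"done:{request}"

def ppc_call(server, request, result):
    # The "client" logically moves into the server: a worker thread runs
    # the server's handler directly on its behalf.
    worker = threading.Thread(target=lambda: result.append(server.handle(request)))
    worker.start()
    worker.join()

server = PassiveServer()
result = []
ppc_call(server, "open", result)
print(result[0])   # done:open
```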
Memory Management
• Supports traditional page-based memory.
• Every process is provided a virtual address space called a Hurricane region.
• Takes care of replication and page migration.
• Protocols used for cache coherence are:
– Update protocol
– Validate protocol
• Protocols used for replication are:
– Degree of replication
– Replicate on write
Memory Management
• A Shared region is a shared data object accessed by a task.
• Functions are provided to access these shared regions, including:
– ReadAccess
– ReadDone
– WriteAccess
– WriteDone
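The bracketing discipline these four calls impose can be sketched as below. The function names come from the slide; everything else (the reader count, the write flag) is an invented stand-in for the bookkeeping the memory manager would do at these hooks.

```python
# Sketch of ReadAccess/ReadDone/WriteAccess/WriteDone bracketing: every
# access to a shared region is wrapped by a begin/end pair, giving the
# memory manager a hook for coherence and replication actions.
class SharedRegion:
    def __init__(self, data):
        self.data = data
        self.readers = 0
        self.writing = False

    def ReadAccess(self):
        assert not self.writing
        self.readers += 1
        return self.data

    def ReadDone(self):
        self.readers -= 1

    def WriteAccess(self):
        assert self.readers == 0 and not self.writing
        self.writing = True
        return self.data

    def WriteDone(self):
        # coherence actions (e.g. updating or invalidating replicas)
        # would hook in here
        self.writing = False

region = SharedRegion({"x": 0})
buf = region.WriteAccess()
buf["x"] = 42
region.WriteDone()
val = region.ReadAccess()["x"]
region.ReadDone()
print(val)   # 42
```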
Coherence Protocols: Update Protocol
• Consistency is maintained at WriteDone time.
• The cache is invalidated on both ReadDone and WriteDone.
• This approach is simple but very inefficient.
Coherence Protocols: Validate Protocol
• Consistency is maintained at the next “Access”.
• Checks whether the copy in the cache is valid or not.
• Keeps a status vector for storing this information.
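The difference between the two coherence protocols can be sketched on a single replicated page. This is an illustrative model only; the class and method names are invented, and `valid` plays the role of the status vector.

```python
# Sketch of the two coherence policies on a replicated page: the update
# protocol pushes new data to every replica eagerly at WriteDone time,
# while the validate protocol only marks replicas stale in a status
# vector and refreshes them lazily at the next access.
class ReplicatedPage:
    def __init__(self, nreplicas, value=0):
        self.master = value
        self.copies = [value] * nreplicas
        self.valid = [True] * nreplicas   # the status vector

    def write_update(self, value):
        # Update protocol: every copy is refreshed immediately.
        self.master = value
        self.copies = [value] * len(self.copies)

    def write_validate(self, value):
        # Validate protocol: just record that the copies are stale.
        self.master = value
        self.valid = [False] * len(self.valid)

    def read(self, replica):
        # Consistency is enforced at the next access.
        if not self.valid[replica]:
            self.copies[replica] = self.master
            self.valid[replica] = True
        return self.copies[replica]

page = ReplicatedPage(nreplicas=3)
page.write_validate(7)
print(page.read(1))   # 7 — replica 1 is refreshed on access
```

Replicas that are never read again are never refreshed, which is where the validate protocol saves work over the update protocol.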
Replication Protocols
• Degree of Replication
– Checks whether the distance from the nearest replica is within a threshold value.
– If yes, no new replica is created; the request is mapped to that copy.
– If no, a new copy is created.
– This controls the degree of replication.
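The decision rule can be sketched as follows. The threshold value, the distance function, and all names here are invented placeholders for whatever the hardware topology actually provides.

```python
# Sketch of the degree-of-replication policy: a new replica is created
# only when the requesting node is farther than a threshold from the
# nearest existing replica; otherwise the request maps to that replica.
DISTANCE_THRESHOLD = 2   # invented value

def distance(a, b):
    # stand-in for the hardware distance between two nodes
    return abs(a - b)

def handle_fault(node, replica_nodes):
    """Return the node whose replica the faulting node should map."""
    nearest = min(replica_nodes, key=lambda r: distance(node, r))
    if distance(node, nearest) <= DISTANCE_THRESHOLD:
        return nearest             # close enough: reuse the existing copy
    replica_nodes.append(node)     # too far: create a new local replica
    return node

replicas = [0]
print(handle_fault(1, replicas))   # 0 — within threshold, reuse
print(handle_fault(5, replicas))   # 5 — too far, new replica created
```

Raising the threshold lowers the degree of replication (and coherence cost); lowering it improves locality at the price of more copies to keep consistent.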
Replication Protocols
• Replicate on Write
– Replicates data on a write request.
– Can be efficient if the number of write requests is greater than the number of read requests.
– Can turn out to be inefficient due to the number of coherence operations required.
Hurricane File System
• Goals of the Hurricane File System (HFS):
– Flexibility: should support a large number of file structures and policies.
– Customizability: should allow the application to specify the policy and the file structure to be used.
– Efficiency: flexibility should be achieved with little CPU and I/O overhead.
Hurricane File System
• Uses object-oriented techniques.
• A storage object forms the building block of a file.
• A storage object is an encapsulated object having member functions and data.
• The block storage object is the most fundamental type of storage object, in that it stores the actual file data.
Hurricane File System
• Categories of Storage Objects:
– Transitory Storage Objects
– Persistent Storage Objects
• Transitory Storage Objects are created at run time.
• Persistent Storage Objects, once created, remain fixed and are stored to disk.
• Persistent Storage Objects are stored in the same way as file data.
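The object taxonomy on this slide can be sketched as a small class hierarchy. The class names and methods below are invented for illustration; HFS is written in C++, and its real storage objects carry far more machinery.

```python
# Sketch of the storage-object hierarchy: transitory objects exist only
# at run time, while persistent objects (like the block storage object
# that holds actual file data) are stored to disk like file data itself.
class StorageObject:
    persistent = False

class TransitoryStorageObject(StorageObject):
    """Created at run time; never written to disk."""
    persistent = False

class PersistentStorageObject(StorageObject):
    """Fixed once created; stored to disk like file data."""
    persistent = True

class BlockStorageObject(PersistentStorageObject):
    """The most fundamental storage object: stores the actual file data."""
    def __init__(self):
        self.blocks = {}

    def write_block(self, n, data):
        self.blocks[n] = data

    def read_block(self, n):
        return self.blocks[n]

bso = BlockStorageObject()
bso.write_block(0, b"hello")
print(bso.read_block(0))   # b'hello'
```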
Layers of the HFS
• Physical Server Layer
– It directs requests for file data to disk.
– It is responsible for policies related to disk block placement, load balancing, locality management, and cylinder clustering.
– All Physical Server Layer objects are persistent.
– Example of a Physical Server Layer Object (PSO): the read/write PSO.
Layers of HFS
• Physical Server Layer Objects (PSOs) are further divided into:
– Basic PSO classes
– Composite PSO classes
– Read/write PSO classes
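One way to picture the basic/composite split is below: a composite object holds no data itself but distributes a file's blocks across several basic objects. The round-robin striping policy here is only one illustrative choice, and all names are invented.

```python
# Sketch of basic vs. composite PSOs: a basic PSO stores blocks, while a
# composite PSO forwards each block to one of its member objects
# (round-robin striping, purely as an illustration).
class BasicPSO:
    def __init__(self):
        self.blocks = {}

    def write(self, n, data):
        self.blocks[n] = data

    def read(self, n):
        return self.blocks[n]

class CompositePSO:
    def __init__(self, members):
        self.members = members

    def _member_for(self, n):
        return self.members[n % len(self.members)]

    def write(self, n, data):
        self._member_for(n).write(n, data)

    def read(self, n):
        return self._member_for(n).read(n)

stripe = CompositePSO([BasicPSO(), BasicPSO()])
for n in range(4):
    stripe.write(n, f"block{n}")
print(stripe.read(3))   # block3, stored on the second member
```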
Layers of the HFS
• Logical Server Layer
– Provides functionality that can live outside the physical server layer but should not be in the application-level library.
– All Logical Server Layer Objects (LSOs) are persistent.
– Consists of various types of classes, such as naming classes, access-specific classes, locking classes, and open-authentication classes.
– These objects can be used in a hierarchical fashion.
Layers of HFS
Input/Output
• Alloc Stream Facility (ASF) provides the I/O facility in Hurricane.
• It is implemented at the application level.
• Provides portability to applications.
• Provides a better interface to the programming language.
• Improves performance by:
– Reducing the amount of data copying.
– Reducing the number of system calls.
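The copy-reduction idea can be sketched as below. The `salloc` name follows the allocation-oriented interface described in the ASF literature, but this Python code is only an analogy for what ASF does with mapped buffers, and the class is invented.

```python
# Sketch of the copy-avoiding idea behind ASF: instead of a read() that
# copies data into a caller-supplied buffer, an salloc-style call hands
# back a view into the stream's own buffer, avoiding the copy.
class AllocStream:
    def __init__(self, data):
        self.buf = memoryview(bytes(data))
        self.pos = 0

    def read_copying(self, n):
        # conventional interface: allocates and copies
        chunk = bytes(self.buf[self.pos:self.pos + n])
        self.pos += n
        return chunk

    def salloc(self, n):
        # ASF-style interface: return a zero-copy view into the buffer
        view = self.buf[self.pos:self.pos + n]
        self.pos += n
        return view

s = AllocStream(b"abcdef")
view = s.salloc(3)
print(bytes(view))   # b'abc' — no intermediate copy was made
```

Fewer copies also means fewer crossings into the I/O system, which is where the system-call savings come from.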
Alloc Stream Facility
• Consists of three layers:
– The interface layer: implements the interface modules.
– The backplane: contains all the code common to the other two layers.
– The stream layer: interacts with the operating system and manages buffering.
Conclusion
• Hierarchical clustering cannot be applied at a finer level in Hurricane.
• Modularity in Tornado can be improved due to object orientation.
• Due to their different system architectures, the two systems are distinct.
Questions