


(2½ Hours) [Total Marks: 75]

    N. B.: (1) All questions are compulsory.

    (2) Make suitable assumptions wherever necessary and state the assumptions made.

    (3) Answers to the same question must be written together.

    (4) Numbers to the right indicate marks.

    (5) Draw neat labeled diagrams wherever necessary.

    (6) Use of Non-programmable calculators is allowed.

    1. Attempt any three of the following: 15

a. Define Operating System. Explain the role of OS as an extended machine.

    b. Write a short note on fifth generation Operating System.

c. Explain the microkernel approach of Operating System design.

    d. List and explain any five system calls used in file management.

    e. Explain process states and possible transitions among these states using diagram.

    f. List the three categories and goals of scheduling algorithms.

    2. Attempt any three of the following: 15

a. Explain the concept of running multiple programs without memory abstraction.

    b. Write a note on swapping.

    c. Explain page table and Structure of a Page Table Entry using suitable diagram.

d. Write a short note on Single-Level & Hierarchical Directory Systems.

    e. Define file. Explain any four operations associated with file.

f. Explain the contiguous allocation method for storing files on disk blocks.

    3. Attempt any three of the following: 15

    a. Write a note on device controller.

b. Explain RAID in detail with its different levels (any four).

    c. Write a short note on Touch Screen.

    d. What are Preemptable and Non-preemptable Resources? Explain.

    e. Define Deadlock. List the four conditions that must hold for there to be a deadlock.

    f. Explain recovery from deadlock through preemption and rollback.

    4. Attempt any three of the following: 15

a. Explain type-1 and type-2 hypervisor using suitable diagram.

b. Write a note on clouds.

c. What are the requirements of virtualization?

d. Write a note on I/O virtualization.

e. Explain using suitable diagram multicomputer hardware interconnection technology.

f. Write any five comparisons between multiprocessor and distributed system.

    5. Attempt any three of the following: 15

    a. Explain using suitable diagram the kernel structure of Linux operating system.

    b. Explain the booting of Linux operating system.

c. List and explain the design goals of Android operating system.

d. Write a note on hardware abstraction layer in Windows operating system structure.

e. Explain using suitable diagram NTFS master file table and its attributes.

f. Briefly explain Windows power management.

    https://abdullahsurati.github.io/bscit


-------------------------------------- Solution Set --------------------------------------


    1. Attempt any three of the following: 15

a. Define Operating System. Explain the role of OS as an extended machine.

Def.: ------------ any definition to the best of the examiner's knowledge. (1 mark)

Role of the OS as an extended machine: The architecture (instruction set, memory organization, I/O, and bus structure) of most computers at the machine-language level is primitive and awkward to program, especially for input/output. Consider the SATA (Serial ATA) hard disks used on most computers: no sane programmer would want to deal with such a disk at the hardware level. Instead, a piece of software, called a disk driver, deals with the hardware and provides an interface to read and write disk blocks without getting into the details. Operating systems contain many drivers for controlling I/O devices.

All operating systems provide yet another layer of abstraction for using disks: files. Using this abstraction, programs can create, write, and read files, without having to deal with the messy details of how the hardware actually works.

Abstraction is the key to managing all this complexity. Good abstractions turn a nearly impossible task into two manageable ones. The first is defining and implementing the abstractions. The second is using these abstractions to solve the problem at hand.

It should be noted that the operating system's real customers are the application programs (via the application programmers, of course). They are the ones who deal directly with the operating system and its abstractions. In contrast, end users deal with the abstractions provided by the user interface, either a command-line shell or a graphical interface. While the abstractions at the user interface may be similar to the ones provided by the operating system, this is not always the case. To make this point clearer, consider the normal Windows desktop and the line-oriented command prompt. Both are programs running on the Windows operating system and use the abstractions Windows provides, but they offer very different user interfaces. Similarly, a Linux user running Gnome or KDE sees a very different interface than a Linux user working directly on top of the underlying X Window System, but the underlying operating system abstractions are the same in both cases.

b. Write a short note on fifth generation Operating System.

The Fifth Generation (1990–Present): The first true handheld phone appeared in the 1970s. Nokia released the N9000, which literally combined two mostly separate devices: a phone and a PDA (Personal Digital Assistant). In 1997, Ericsson coined the term smartphone for its GS88 ''Penelope.''

Most smartphones in the first decade after their inception were running Symbian OS. It was the operating system of choice for popular brands like Samsung, Sony Ericsson, Motorola, and especially Nokia. Operating systems like RIM's Blackberry OS (introduced for smartphones in 2002) and Apple's iOS (released for the first iPhone in 2007) started eating into Symbian's market share. In 2011, Nokia ditched Symbian and announced it would focus on Windows Phone as its primary platform. For some time, Apple and RIM were the toast of the town (although not nearly as dominant as Symbian had been), but it did not take very long for Android, a Linux-based operating system released by Google in 2008, to overtake all its competitors.

For phone manufacturers, Android had the advantage that it was open source and available under a permissive license. As a result, they could tinker with it and adapt it to their own hardware with ease. Also, it has a huge community of developers writing apps, mostly in the familiar Java programming language. Even so, the past years have shown that the dominance may not last, and Android's competitors are eager to claw back some of its market share.

c. Explain the microkernel approach of Operating System design.

The basic idea behind the microkernel design is to achieve high reliability by splitting the operating system up into small, well-defined modules, only one of which, the microkernel, runs in kernel mode; the rest run as relatively powerless ordinary user processes. In particular, by running each device driver and file system as a separate user process, a bug in one of these can crash that component, but cannot crash the entire system. Thus a bug in the audio driver will cause the sound to be garbled or stop, but will not crash the computer.

A few of the better-known microkernels include Integrity, K42, L4, PikeOS, QNX, Symbian, and MINIX 3.

The MINIX 3 microkernel is only about 12,000 lines of C and some 1400 lines of assembler for very low-level functions such as catching interrupts and switching processes. The C code manages and schedules processes, handles interprocess communication (by passing messages between processes), and offers a set of about 40 kernel calls to allow the rest of the operating system to do its work. These calls perform functions like hooking handlers to interrupts, moving data between address spaces, and installing memory maps for new processes. The process structure of MINIX 3 is shown in Fig. 1-26, with the kernel call handlers labeled Sys.

Outside the kernel, the system is structured as three layers of processes all running in user mode. The lowest layer contains the device drivers. Since they run in user mode, they do not have physical access to the I/O port space and cannot issue I/O commands directly. Instead, to program an I/O device, the driver builds a structure telling which values to write to which I/O ports and makes a kernel call telling the kernel to do the write. This approach means that the kernel can check to see that the driver is writing (or reading) from I/O it is authorized to use. Consequently (and unlike a monolithic design), a buggy audio driver cannot accidentally write on the disk.

d. List and explain any five system calls used in file management.

Many system calls relate to the file system. Any five (e.g. open, close, read, write, lseek), 1 mark each with explanation.
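As an illustration (not part of the official answer), five common file-management system calls (open, write, lseek, read, close) can be exercised through Python's os module, whose functions are thin wrappers around the underlying POSIX calls; the filename demo.txt is arbitrary:

```python
import os

# Each line below corresponds to one file-management system call.
fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR, 0o644)  # open: create/open, returns a descriptor
os.write(fd, b"hello, world")                            # write: store bytes at the current offset
os.lseek(fd, 0, os.SEEK_SET)                             # lseek: move the file offset back to the start
data = os.read(fd, 5)                                    # read: fetch the next 5 bytes
os.close(fd)                                             # close: release the descriptor
os.remove("demo.txt")                                    # unlink/remove: delete the file
print(data)  # b'hello'
```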


e. Explain process states and possible transitions among these states using diagram.

In Fig. 2-2 we see a state diagram showing the three states a process may be in:

1. Running (actually using the CPU at that instant).
2. Ready (runnable; temporarily stopped to let another process run).
3. Blocked (unable to run until some external event happens).

Four transitions are possible among these three states. Transition 1 occurs when the operating system discovers that a process cannot continue right now. Transitions 2 and 3 are caused by the process scheduler, a part of the operating system, without the process even knowing about them. Transition 2 occurs when the scheduler decides that the running process has run long enough, and it is time to let another process have some CPU time. Transition 3 occurs when all the other processes have had their fair share and it is time for the first process to get the CPU to run again. Transition 4 occurs when the external event for which a process was waiting (such as the arrival of some input) happens.
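The four legal transitions can be sketched as a small lookup (an illustration only; the helper can_move is invented for this sketch):

```python
# The four legal transitions among the three process states.
LEGAL = {
    ("Running", "Blocked"),  # 1: the process blocks waiting for an event
    ("Running", "Ready"),    # 2: the scheduler preempts the process
    ("Ready", "Running"),    # 3: the scheduler dispatches the process
    ("Blocked", "Ready"),    # 4: the awaited external event occurs
}

def can_move(src, dst):
    """Return True if the state diagram allows this transition."""
    return (src, dst) in LEGAL

print(can_move("Running", "Ready"))    # True (transition 2)
print(can_move("Blocked", "Running"))  # False: a blocked process must become Ready first
```

Note that ("Blocked", "Running") is deliberately absent: a blocked process cannot be dispatched directly, which is exactly why there are four transitions, not six.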

    f. List the three categories and goals of scheduling algorithms.


2. Attempt any three of the following: 15

a. Explain the concept of running multiple programs without memory abstraction.

The simplest memory abstraction is to have no abstraction at all. Every program simply saw the physical memory. When a program executed an instruction like

MOV REGISTER1,1000

the computer just moved the contents of physical memory location 1000 to REGISTER1. Under these conditions, it was not possible to have two running programs in memory at the same time. If the first program wrote a new value to, say, location 2000, this would erase whatever value the second program was storing there. Nothing would work and both programs would crash almost immediately.

When the system is organized in this way, generally only one process at a time can be running. As soon as the user types a command, the operating system copies the requested program from disk to memory and executes it. When the process finishes, the operating system displays a prompt character and waits for a new user command. When the operating system receives the command, it loads a new program into memory, overwriting the first one.

However, even with no memory abstraction, it is possible to run multiple programs at the same time. What the operating system has to do is save the entire contents of memory to a disk file, then bring in and run the next program. As long as there is only one program at a time in memory, there are no conflicts.

With the addition of some special hardware, it is possible to run multiple programs concurrently. On the IBM 360, for example, memory was divided into 2-KB blocks and each was assigned a 4-bit protection key held in special registers inside the CPU. A machine with a 1-MB memory needed only 512 of these 4-bit registers for a total of 256 bytes of key storage. The PSW (Program Status Word) also contained a 4-bit key.

The core problem here is that the two programs both reference absolute physical memory. That is not what we want at all. What we want is that each program can reference a private set of addresses local to it.
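The clash described above can be simulated in a few lines (purely illustrative; the memory size and stored values are made up):

```python
# Two "programs" sharing raw physical memory with no abstraction:
# B's store to absolute location 2000 silently destroys A's value.
memory = [0] * 4096              # toy physical memory, one cell per location

def run_program_a():
    memory[2000] = "A's value"   # A uses absolute address 2000

def run_program_b():
    memory[2000] = "B's value"   # B uses the same absolute address

run_program_a()
run_program_b()
print(memory[2000])  # B's value -- A's data is gone
```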

b. Write a note on swapping.

If the physical memory of the computer is large enough to hold all the processes, the schemes described so far will more or less do. But in practice, the total amount of RAM needed by all the processes is often much more than can fit in memory. On a typical Windows, OS X, or Linux system, something like 50–100 processes or more may be started up as soon as the computer is booted.

Two general approaches to dealing with memory overload have been developed over the years. The simplest strategy, called swapping, consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk. Idle processes are mostly stored on disk, so they do not take up any memory when they are not running (although some of them wake up periodically to do their work, then go to sleep again). The other strategy, called virtual memory, allows programs to run even when they are only partially in main memory.

The operation of a swapping system is illustrated in Fig. 3-4. Initially, only process A is in memory. Then processes B and C are created or swapped in from disk. In Fig. 3-4(d) A is swapped out to disk. Then D comes in and B goes out. Finally A comes in again. Since A is now at a different location, addresses contained in it must be relocated, either by software when it is swapped in or (more likely) by hardware during program execution.

When swapping creates multiple holes in memory, it is possible to combine them all into one big one by moving all the processes downward as far as possible. This technique is known as memory compaction. It is usually not done because it requires a lot of CPU time. For example, on a 16-GB machine that can copy 8 bytes in 8 nsec, it would take about 16 sec to compact all of memory.

If processes are created with a fixed size that never changes, then the allocation is simple: the operating system allocates exactly what is needed, no more and no less.
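The compaction estimate above can be checked by direct arithmetic (assuming decimal gigabytes, as the quoted figure implies):

```python
# Memory compaction cost: copy 16 GB, 8 bytes at a time, 8 nsec per copy.
mem_bytes = 16 * 10**9        # 16-GB machine (decimal gigabytes)
copies = mem_bytes // 8       # 2,000,000,000 eight-byte copy operations
ns_total = copies * 8         # each copy takes 8 nanoseconds
seconds = ns_total / 10**9    # convert nanoseconds to seconds
print(seconds)  # 16.0 -- "about 16 sec", as stated
```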

c. Explain page table and Structure of a Page Table Entry using suitable diagram.

Page Tables: (3 marks)

In a simple implementation, the mapping of virtual addresses onto physical addresses can be summarized as follows: the virtual address is split into a virtual page number (high-order bits) and an offset (low-order bits). The virtual page number is used as an index into the page table to find the entry for that virtual page. From the page table entry, the page frame number (if any) is found. The page frame number is attached to the high-order end of the offset, replacing the virtual page number, to form a physical address that can be sent to the memory.

Thus, the purpose of the page table is to map virtual pages onto page frames. Mathematically speaking, the page table is a function, with the virtual page number as argument and the physical frame number as result. Using the result of this function, the virtual page field in a virtual address can be replaced by a page frame field, thus forming a physical memory address.

Structure of a Page Table Entry: (2 marks)

The size of a page table entry is commonly 32 bits. The most important field is the Page frame number. After all, the goal of the page mapping is to output this value. Next to it we have the Present/absent bit. If this bit is 1, the entry is valid and can be used. If it is 0, the virtual page to which the entry belongs is not currently in memory. Accessing a page table entry with this bit set to 0 causes a page fault.

The Protection bits tell what kinds of access are permitted. In the simplest form, this field contains 1 bit, with 0 for read/write and 1 for read only. The Modified and Referenced bits keep track of page usage. When a page is written to, the hardware automatically sets the Modified bit (this bit is sometimes called the dirty bit). The Referenced bit is set whenever a page is referenced, either for reading or for writing. The last bit allows caching to be disabled for the page.
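The translation described above can be sketched as follows (a toy model; the 4-KB page size, the table contents, and the function name are assumptions chosen for illustration):

```python
PAGE_SIZE = 4096  # assume 4-KB pages, so the offset is the low 12 bits

# Toy page table: virtual page number -> page frame number.
page_table = {0: 2, 1: 5, 3: 7}    # virtual page 2 is absent from memory

def translate(vaddr):
    vpn = vaddr // PAGE_SIZE       # high-order bits: virtual page number
    offset = vaddr % PAGE_SIZE     # low-order bits: copied through unchanged
    if vpn not in page_table:      # the Present/absent bit would be 0 here
        raise RuntimeError("page fault on virtual page %d" % vpn)
    frame = page_table[vpn]        # page frame number from the entry
    return frame * PAGE_SIZE + offset

print(translate(4097))  # virtual page 1, offset 1 -> frame 5: 5*4096 + 1 = 20481
```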

d. Write a short note on Single-Level & Hierarchical Directory Systems.

Single-Level Directory Systems: (2 marks)

The simplest form of directory system is having one directory containing all the files. Sometimes it is called the root directory, but since it is the only one, the name does not matter much. On early personal computers, this system was common, in part because there was only one user. The advantages of this scheme are its simplicity and the ability to locate files quickly: there is only one place to look, after all. It is sometimes still used on simple embedded devices such as digital cameras and some portable music players.

Hierarchical Directory Systems: (3 marks)

A hierarchy groups related files together. With this approach, there can be as many directories as are needed to group the files in natural ways. Furthermore, if multiple users share a common file server, as is the case on many company networks, each user can have a private root directory for his or her own hierarchy. This approach is shown in Fig. 4-7. Here, the directories A, B, and C contained in the root directory each belong to a different user, two of whom have created subdirectories for projects they are working on. The ability for users to create an arbitrary number of subdirectories provides a powerful structuring tool for users to organize their work. For this reason, nearly all modern file systems are organized in this manner.

e. Define file. Explain any four operations associated with file.

Def.: ------------ any definition to the best of the examiner's knowledge. (1 mark)

Any four operations with explanation (1 mark each): 1. Create. 2. Delete. 3. Open. 4. Close. 5. Read. 6. Write. 7. Append. 8. Seek. 9. Get attributes. 10. Set attributes. 11. Rename.

f. Explain the contiguous allocation method for storing files on disk blocks.

The simplest allocation scheme is to store each file as a contiguous run of disk blocks. Thus on a disk with 1-KB blocks, a 50-KB file would be allocated 50 consecutive blocks. With 2-KB blocks, it would be allocated 25 consecutive blocks. We see an example of contiguous storage allocation in Fig. 4-10(a). Here the first 40 disk blocks are shown, starting with block 0 on the left. Initially, the disk was empty. Then a file A, of length four blocks, was written to disk starting at the beginning (block 0). After that a six-block file, B, was written starting right after the end of file A.

Note that each file begins at the start of a new block, so that if file A was really 3½ blocks, some space is wasted at the end of the last block. In the figure, a total of seven files are shown, each one starting at the block following the end of the previous one. Shading is used just to make it easier to tell the files apart. It has no actual significance in terms of storage.

Contiguous disk-space allocation has two significant advantages. First, it is simple to implement because keeping track of where a file's blocks are is reduced to remembering two numbers: the disk address of the first block and the number of blocks in the file. Given the number of the first block, the number of any other block can be found by a simple addition. Second, the read performance is excellent because the entire file can be read from the disk in a single operation. Only one seek is needed (to the first block). After that, no more seeks or rotational delays are needed, so data come in at the full bandwidth of the disk. Thus contiguous allocation is simple to implement and has high performance.

Unfortunately, contiguous allocation also has a very serious drawback: over the course of time, the disk becomes fragmented.
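The "simple addition" can be shown concretely (the file names and sizes below are illustrative, loosely following the layout of Fig. 4-10(a)):

```python
# Under contiguous allocation a file is fully described by two numbers:
# (first block, length in blocks). Block n of the file is one addition away.
files = {"A": (0, 4), "B": (4, 6)}   # file A: blocks 0-3, file B: blocks 4-9

def disk_block(name, n):
    """Disk address of the n-th block of a file (0-based)."""
    first, count = files[name]
    if n >= count:
        raise IndexError("block past end of file")
    return first + n

print(disk_block("B", 2))  # 6: file B starts at block 4, so its block 2 is 4 + 2
```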

    3. Attempt any three of the following: 15

a. Write a note on device controller.

I/O units often consist of a mechanical component and an electronic component. It is possible to separate the two portions to provide a more modular and general design. The electronic component is called the device controller or adapter. The controller card usually has a connector on it, into which a cable leading to the device itself can be plugged. Many controllers can handle two, four, or even eight identical devices. If the interface between the controller and device is a standard interface, either an official ANSI, IEEE, or ISO standard or a de facto one, then companies can make controllers or devices that fit that interface. Many companies, for example, make disk drives that match the SATA, SCSI, USB, Thunderbolt, or FireWire (IEEE 1394) interfaces.

(Any one example, disk or LCD controller:)

The interface between the controller and the device is often a very low-level one. A disk, for example, might be formatted with 2,000,000 sectors of 512 bytes per track. What actually comes off the drive, however, is a serial bit stream, starting with a preamble, then the 4096 bits in a sector, and finally a checksum, or ECC (Error-Correcting Code). The preamble is written when the disk is formatted and contains the cylinder and sector number, the sector size, and similar data, as well as synchronization information. The controller's job is to convert the serial bit stream into a block of bytes and perform any error correction necessary. The block of bytes is typically first assembled, bit by bit, in a buffer inside the controller. After its checksum has been verified and the block has been declared to be error free, it can then be copied to main memory.

The controller for an LCD display monitor also works as a bit serial device at an equally low level. It reads bytes containing the characters to be displayed from memory and generates the signals to modify the polarization of the backlight for the corresponding pixels in order to write them on screen. If it were not for the display controller, the operating system programmer would have to explicitly program the electric fields of all pixels. With the controller, the operating system initializes the controller with a few parameters, such as the number of characters or pixels per line and number of lines per screen, and lets the controller take care of actually driving the electric fields.

b. Explain RAID in detail with its different levels (any four).

RAID: Redundant Array of Independent Disks. The fundamental idea behind a RAID is to install a box full of disks next to the computer, typically a large server, replace the disk controller card with a RAID controller, copy the data over to the RAID, and then continue normal operation. The data are distributed over the drives, to allow parallel operation.

RAID level 0: It consists of viewing the virtual single disk simulated by the RAID as being divided up into strips of k sectors each, with sectors 0 to k−1 being strip 0, sectors k to 2k−1 strip 1, and so on. For k = 1, each strip is a sector; for k = 2 a strip is two sectors, etc. The RAID level 0 organization writes consecutive strips over the drives in round-robin fashion.

RAID level 1: It duplicates all the disks, so there are four primary disks and four backup disks. On a write, every strip is written twice. On a read, either copy can be used, distributing the load over more drives.

RAID level 2 works on a word basis, possibly even a byte basis.

RAID level 3 is a simplified version of RAID level 2.

RAID levels 4 and 5 work with strips again, not individual words with parity, and do not require synchronized drives.

RAID level 6 is similar to RAID level 5, except that an additional parity block is used. In other words, the data are striped across the disks with two parity blocks instead of one.
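The parity idea behind RAID levels 4 to 6 can be sketched with XOR (the strip values below are invented for illustration):

```python
# RAID 4/5-style parity: the parity strip is the XOR of the data strips,
# so any single lost strip can be rebuilt by XOR-ing the survivors.
strips = [0b1010, 0b0110, 0b1100]   # data strips on three drives

parity = 0
for s in strips:
    parity ^= s                      # parity strip, stored on a fourth drive

# Pretend the drive holding strip 1 fails; rebuild its strip from the rest.
rebuilt = parity ^ strips[0] ^ strips[2]
print(rebuilt == strips[1])  # True: the lost data is recovered
```

RAID 6's second, independently computed parity block is what lets it survive two simultaneous drive failures instead of one.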

c. Write a short note on Touch Screen.

More and more, the screen is used as an input device also. Especially on smartphones, tablets and other ultra-portable devices, it is convenient to tap and swipe away at the screen with your finger (or a stylus). The user experience is different and more intuitive than with a mouse-like device, since the user interacts directly with the objects on the screen. Research has shown that even orangutans and other primates, like little children, are capable of operating touch-based devices.

A touch device is not necessarily a screen. Touch devices fall into two categories: opaque and transparent. A typical opaque touch device is the touchpad on a notebook computer. An example of a transparent device is the touch screen on a smartphone or tablet. In this section, however, we limit ourselves to touch screens. Like many things that have come into fashion in the computer industry, touch screens are not exactly new. As early as 1965, E.A. Johnson of the British Royal Radar Establishment described a (capacitive) touch display that, while crude, served as precursor of the displays we find today. Most modern touch screens are either resistive or capacitive.

Resistive screens have a flexible plastic surface on top. The plastic in itself is nothing too special, except that it is more scratch resistant than your garden-variety plastic.

Capacitive screens have two hard surfaces, typically glass, each coated with ITO (Indium Tin Oxide). A typical configuration is to have ITO added to each surface in parallel lines, where the lines in the top layer are perpendicular to those in the bottom layer. For instance, the top layer may be coated in thin lines in a vertical direction, while the bottom layer has a similarly striped pattern in the horizontal direction. The two charged surfaces, separated by air, form a grid of really small capacitors. Voltages are applied alternately to the horizontal and vertical lines, while the voltage values, which are affected by the capacitance of each intersection, are read out on the other ones. When you put your finger onto the screen, you change the local capacitance. By very accurately measuring the minuscule voltage changes everywhere, it is possible to discover the location of the finger on the screen. This operation is repeated many times per second, with the coordinates touched fed to the device driver as a stream of (x, y) pairs.

d. What are Preemptable and Non-preemptable Resources? Explain.

Resources come in two types: preemptable and nonpreemptable. A preemptable resource is one that can be taken away from the process owning it with no ill effects. Memory is an example of a preemptable resource. Consider, for example, a system with 1 GB of user memory, one printer, and two 1-GB processes that each want to print something. Process A requests and gets the printer, then starts to compute the values to print. Before it has finished the computation, it exceeds its time quantum and is swapped out to disk.

A nonpreemptable resource, in contrast, is one that cannot be taken away from its current owner without potentially causing failure. If a process has begun to burn a Blu-ray, suddenly taking the Blu-ray recorder away from it and giving it to another process will result in a garbled Blu-ray. Blu-ray recorders are not preemptable at an arbitrary moment.

Whether a resource is preemptible depends on the context. On a standard PC, memory is preemptible because pages can always be swapped out to disk to recover it. However, on a smartphone that does not support swapping or paging, deadlocks cannot be avoided by just swapping out a memory hog. In general, deadlocks involve nonpreemptable resources. Potential deadlocks that involve preemptable resources can usually be resolved by reallocating resources from one process to another.

    e. Define Deadlock. List the four conditions that must hold for there to be a deadlock. Deadlock can be defined formally as follows: (1 mark)

    A set of processes is deadlocked if each process in the set is waiting for an event that only another

    process in the set can cause.

    Coffman et al. (1971) showed that four conditions must hold for there to be a (resource) deadlock:

    1. Mutual exclusion condition. Each resource is either currently assigned to exactly one process or is

    available.

    2. Hold-and-wait condition. Processes currently holding resources that were granted earlier can

    request new resources.

    3. No-preemption condition. Resources previously granted cannot be forcibly taken away from a

    process. They must be explicitly released by the process holding them.

    4. Circular wait condition. There must be a circular list of two or more processes, each of which is

    waiting for a resource held by the next member of the chain.
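The circular-wait condition lends itself to a small sketch. Below is a hypothetical Python check over a wait-for graph; the dict representation (process name mapped to the process it waits on) and the function name are assumptions for illustration, not from the source.

```python
# Hypothetical sketch: detecting the circular-wait condition in a
# wait-for graph {process: process it is waiting on}.

def has_circular_wait(wait_for):
    """Follow each chain of waits; revisiting a process means a cycle."""
    for start in wait_for:
        seen, p = set(), start
        while p in wait_for:          # p is waiting on someone
            if p in seen:
                return True           # the chain closed on itself
            seen.add(p)
            p = wait_for[p]
    return False

# A waits on B and B waits on A: deadlock. C waiting on A adds no cycle.
print(has_circular_wait({"A": "B", "B": "A", "C": "A"}))  # True
print(has_circular_wait({"A": "B", "B": "C"}))            # False
```

Note that a cycle alone is only the fourth condition; all four conditions must hold simultaneously for a deadlock to exist.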

f. Explain recovery from deadlock through preemption and rollback.

Recovery through Preemption (2 marks or 3 marks according to the contents)

    In some cases it may be possible to temporarily take a resource away from its current owner and give

    it to another process. In many cases, manual intervention may be required, especially in batch-

    processing operating systems running on mainframes.

    For example, to take a laser printer away from its owner, the operator can collect all the sheets

    already printed and put them in a pile. Then the process can be suspended (marked as not runnable).

    At this point the printer can be assigned to another process. When that process finishes, the pile of

    printed sheets can be put back in the printer’s output tray and the original process restarted.

    The ability to take a resource away from a process, have another process use it, and then give it back

    without the process noticing it is highly dependent on the nature of the resource. Recovering this

    way is frequently difficult or impossible. Choosing the process to suspend depends largely on which

    ones have resources that can easily be taken back.

Recovery through Rollback (3 marks or 2 marks according to the contents)

    If the system designers and machine operators know that deadlocks are likely, they can arrange to

    have processes checkpointed periodically. Checkpointing a process means that its state is written to

    a file so that it can be restarted later. The checkpoint contains not only the memory image, but also

    the resource state, in other words, which resources are currently assigned to the process. To be most

    effective, new checkpoints should not overwrite old ones but should be written to new files, so as the

    process executes, a whole sequence accumulates. When a deadlock is detected, it is easy to see

    which resources are needed. To do the recovery, a process that owns a needed resource is rolled back

    to a point in time before it acquired that resource by starting at one of its earlier checkpoints.

    All the work done since the checkpoint is lost (e.g., output printed since the checkpoint must be

    discarded, since it will be printed again). In effect, the process is reset to an earlier moment when it

    did not have the resource, which is now assigned to one of the deadlocked processes. If the restarted

    process tries to acquire the resource again, it will have to wait until it becomes available.
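The checkpoint-and-rollback scheme above can be illustrated with a short sketch. This is a hypothetical Python model (the class, method names, and resource names are invented): each checkpoint snapshots the resource state, new checkpoints never overwrite old ones, and recovery rolls back to the newest checkpoint taken before the contested resource was acquired.

```python
import copy

# Hypothetical sketch of periodic checkpointing and rollback.

class Process:
    def __init__(self, name):
        self.name, self.resources, self.checkpoints = name, set(), []

    def checkpoint(self):
        # Append, never overwrite: a whole sequence of checkpoints
        # accumulates as the process executes.
        self.checkpoints.append(copy.deepcopy(self.resources))

    def rollback_before(self, resource):
        # Roll back to the latest checkpoint that predates the grant
        # of the needed resource; work done since then is lost.
        for snap in reversed(self.checkpoints):
            if resource not in snap:
                self.resources = set(snap)
                return True
        return False

p = Process("A")
p.checkpoint()                 # holds nothing yet
p.resources.add("printer")
p.checkpoint()                 # holds the printer
p.resources.add("scanner")
p.rollback_before("printer")   # reset to the pre-printer state
print(p.resources)             # set()
```

After the rollback, the freed resource can be assigned to one of the deadlocked processes, and the rolled-back process must wait to reacquire it, exactly as the text describes.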

    4. Attempt any three of the following: 15

    a Explain type-1 and type-2 hypervisor using suitable diagram.

TYPE 1 AND TYPE 2 HYPERVISORS

Goldberg (1972) distinguished between two approaches to virtualization. One kind of hypervisor,

dubbed a type 1 hypervisor, is technically like an operating system, since it is the only program

    running in the most privileged mode. Its job is to support multiple copies of the actual hardware,

    called virtual machines, similar to the processes a normal operating system runs.

    In contrast, a type 2 hypervisor, is a different kind of animal. It is a program that relies on, say,

    Windows or Linux to allocate and schedule resources, very much like a regular process. Of course,

    the type 2 hypervisor still pretends to be a full computer with a CPU and various devices.

    Both types of hypervisor must execute the machine’s instruction set in a safe manner.

For instance, an operating system running on top of the hypervisor may change and even mess up its

    own page tables, but not those of others.

The operating system running on top of the hypervisor in both cases is called the guest operating

    system. For a type 2 hypervisor, the operating system running on the hardware is called the host

operating system. The first type 2 hypervisor on the x86 market was VMware Workstation.

    Type 2 hypervisors, sometimes referred to as hosted hypervisors, depend for much of their

    functionality on a host operating system such as Windows, Linux, or OS X. When it starts for the

    first time, it acts like a newly booted computer and expects to find a DVD, USB drive, or CD-ROM

    containing an operating system in the drive. This time, however, the drive could be a virtual device.

    For instance, it is possible to store the image as an ISO file on the hard drive of the host and have the

    hypervisor pretend it is reading from a proper DVD drive. It then installs the operating system to its

    virtual disk (again really just a Windows, Linux, or OS X file) by running the installation program

    found on the DVD. Once the guest operating system is installed on the virtual disk, it can be booted

    and run.

    b Write a note on clouds.

    Virtualization technology played a crucial role in the dizzying rise of cloud computing. There are

    many clouds. Some clouds are public and available to anyone willing to pay for the use of resources,

    others are private to an organization. Likewise, different clouds offer different things. Some give

    their users access to physical hardware, but most virtualize their environments. Some offer the bare

    machines, virtual or not, and nothing more, but others offer software that is ready to use and can be

    combined in interesting ways, or platforms that make it easy for their users to develop new services.

    Cloud providers typically offer different categories of resources, such as ‘‘big machines’’ versus

    ‘‘little machines,’’ etc.

The National Institute of Standards and Technology lists five essential characteristics:

    1. On-demand self-service. Users should be able to provision resources automatically, without

requiring human interaction.

2. Broad network access. All these resources should be available over

    the network via standard mechanisms so that heterogeneous devices can make use of them.

    3. Resource pooling. The computing resource owned by the provider should be pooled to serve

    multiple users and with the ability to assign and reassign resources dynamically. The users generally

    do not even know the exact location of ‘‘their’’ resources or even which country they are located in.

    4. Rapid elasticity. It should be possible to acquire and release resources elastically, perhaps even

    automatically, to scale immediately with the users’ demands.

    5. Measured service. The cloud provider meters the resources used in a way that matches the type

    of service agreed upon.

Some clouds offer direct access to a virtual machine, which the user can use in any way he sees fit.

Thus, the same cloud may run different operating systems, possibly on the same hardware. In cloud

    terms, this is known as IAAS (Infrastructure As A Service), as opposed to PAAS (Platform As A

    Service, which delivers an environment that includes things such as a specific OS, database, Web

server, and so on), SAAS (Software As A Service, which offers access to specific software, such as

    Microsoft Office 365, or Google Apps), and many other types of as-a-service.

c What are the requirements of virtualization?

It is important that virtual machines act just like the real McCoy. In particular, it must be possible to

    boot them like real machines and install arbitrary operating systems on them, just as can be done on

    the real hardware. It is the task of the hypervisor to provide this illusion and to do it efficiently.

    Indeed, hypervisors should score well in three dimensions:

    1. Safety: the hypervisor should have full control of the virtualized resources.

    2. Fidelity: the behavior of a program on a virtual machine should be identical to that of the same

    program running on bare hardware.

    3. Efficiency: much of the code in the virtual machine should run without intervention by the

    hypervisor.

The problem that early x86 CPUs did not meet these requirements was finally solved when Intel and AMD introduced virtualization in their CPUs

    starting in 2005 (Uhlig, 2005). On the Intel CPUs it is called VT (Virtualization Technology); on

the AMD CPUs it is called SVM (Secure Virtual Machine). We will use the term VT in a generic

    sense below. Both were inspired by the IBM VM/370 work, but they are slightly different. The basic

    idea is to create containers in which virtual machines can be run. When a guest operating system is

    started up in a container, it continues to run there until it causes an exception and traps to the

    hypervisor, for example, by executing an I/O instruction. The set of operations that trap is controlled

    by a hardware bitmap set by the hypervisor. With these extensions the classical trap-and-emulate

    virtual machine approach becomes possible.
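The trap-and-emulate idea can be shown with a small sketch. This hypothetical Python model (operation names and the trap set are invented stand-ins for the hardware bitmap) runs ordinary guest instructions directly and hands only the sensitive ones to a hypervisor callback.

```python
# Hypothetical sketch of trap-and-emulate: a set chosen by the hypervisor
# (standing in for the hardware bitmap) decides which guest operations
# trap; everything else runs without hypervisor intervention.

TRAPPED = {"io", "set_page_table"}   # sensitive operations in this sketch

def run_guest(instructions, emulate):
    trapped = 0
    for op in instructions:
        if op in TRAPPED:
            emulate(op)              # exception: hypervisor emulates it
            trapped += 1
        # ordinary instructions would execute natively in the container
    return trapped

log = []
n = run_guest(["add", "io", "mov", "set_page_table"], log.append)
print(n, log)    # 2 ['io', 'set_page_table']
```

This mirrors the efficiency requirement above: most guest code runs untouched, and the hypervisor intervenes only on the operations it has marked as trapping.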

    d Write a note on I/O virtualization.

    I/O virtualization: The guest operating system will typically start out probing the hardware to find

    out what kinds of I/O devices are attached. These probes will trap to the hypervisor. What should the

    hypervisor do? One approach is for it to report back that the disks, printers, and so on are the ones

    that the hardware actually has. The guest will then load device drivers for these devices and try to

    use them. When the device drivers try to do actual I/O, they will read and write the device’s

    hardware device registers. These instructions are sensitive and will trap to the hypervisor, which

    could then copy the needed values to and from the hardware registers, as needed.

    Each guest OS could think it owns an entire disk partition, and there may be many more virtual

    machines (hundreds) than there are actual disk partitions. The usual solution is for the hypervisor to

    create a file or region on the actual disk for each virtual machine’s physical disk. Since the guest

    OS is trying to control a disk that the real hardware has (and which the hypervisor understands), it

    can convert the block number being accessed into an offset into the file or disk region being used for

    storage and do the I/O.

    It is also possible for the disk that the guest is using to be different from the real one. the hypervisor

    could advertise to the guest OS that it has a plain old IDE disk and let the guest OS install an IDE

    disk driver. When this driver issues IDE disk commands, the hypervisor converts them into

    commands to drive the new disk. This strategy can be used to upgrade the hardware without

    changing the software. Companies wanted to buy new and faster hardware but did not want to

    change their software. Virtual machine technology made this possible.
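The block-number-to-offset conversion mentioned above is easy to sketch. In this hypothetical Python model (block size, function names, and the bytearray backing store are illustrative choices), each VM's "physical disk" is just a region on the host, and guest block I/O becomes an offset computation into it.

```python
# Hypothetical sketch: mapping a guest's disk blocks onto a per-VM
# backing region on the host (here a bytearray standing in for a file).

BLOCK_SIZE = 512

def write_block(backing, block_no, data):
    # Guest block number -> byte offset into the backing store.
    off = block_no * BLOCK_SIZE
    backing[off:off + BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")

def read_block(backing, block_no):
    off = block_no * BLOCK_SIZE
    return bytes(backing[off:off + BLOCK_SIZE])

disk = bytearray(BLOCK_SIZE * 8)       # an 8-block virtual disk
write_block(disk, 3, b"guest data")
print(read_block(disk, 3)[:10])        # b'guest data'
```

In a real hypervisor the backing store would be a file or disk region per virtual machine, which is what allows many more guests than physical partitions.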

    Another interesting trend related to I/O is that the hypervisor can take the role of a virtual switch. In

this case, each virtual machine has a MAC address and the hypervisor switches frames from one

virtual machine to another, just like an Ethernet switch would do. Virtual switches have several

    advantages. For instance, it is very easy to reconfigure them. Also, it is possible to augment the

    switch with additional functionality, for instance for additional security.
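The virtual-switch role can be sketched briefly. This hypothetical Python model (class and method names, and the MAC strings, are invented) forwards a frame only to the port whose MAC address matches, as an Ethernet switch would.

```python
# Hypothetical sketch: the hypervisor as a virtual Ethernet switch,
# delivering frames between VMs by destination MAC address.

class VirtualSwitch:
    def __init__(self):
        self.ports = {}            # MAC address -> per-VM receive queue

    def plug(self, mac):
        self.ports[mac] = []
        return self.ports[mac]

    def send(self, dst_mac, frame):
        # Deliver only to the matching port; unknown MACs are dropped here.
        if dst_mac in self.ports:
            self.ports[dst_mac].append(frame)

sw = VirtualSwitch()
rx_a = sw.plug("aa:aa")
rx_b = sw.plug("bb:bb")
sw.send("bb:bb", "hello from A")
print(rx_b)   # ['hello from A']
```

Because the switch is software, reconfiguring it or adding features such as filtering for security is just a code change, which is the advantage the text points out.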

    e Explain using suitable diagram multicomputer hardware interconnection technology.

    f Write any five comparisons between multiprocessor and distributed system.

(1 mark each)

    5. Attempt any three of the following: 15

    a. Explain using suitable diagram the kernel structure of Linux operating system.

    The kernel sits directly on the hardware and enables interactions with I/O devices and the memory

management unit and controls CPU access to them. At the lowest level, as shown in Fig. 10-3, it

    contains interrupt handlers, which are the primary way for interacting with devices, and the low-

    level dispatching mechanism. This dispatching occurs when an interrupt happens. The low-level

    code here stops the running process, saves its state in the kernel process structures, and starts the

    appropriate driver. Process dispatching also happens when the kernel completes some operations and

    it is time to start up a user process again. The dispatching code is in assembler and is quite distinct

    from scheduling. Next, we divide the various kernel subsystems into three main components.

    The I/O component in Fig. 10-3 contains all kernel pieces responsible for interacting with devices

    and performing network and storage I/O operations. At the highest level, the I/O operations are all

integrated under a VFS (Virtual File System) layer. That is, at the top level, performing a read

operation on a file looks the same regardless of the underlying file system or device.

    At the lowest level, all I/O operations pass through some device driver. All Linux drivers are

    classified as either character-device drivers or block-device drivers, the main difference being that

    seeks and random accesses are allowed on block devices and not on character devices.

Above the device-driver level, the kernel code is different for each device type.

    Character devices may be used in two different ways. Some programs, such as

    visual editors like vi and emacs, want every keystroke as it is hit. Raw terminal

    (tty) I/O makes this possible. Other software, such as the shell, is line oriented, allowing

    users to edit the whole line before hitting ENTER to send it to the program.

In this case the character stream from the terminal device is passed through a so-called

    line discipline, and appropriate formatting is applied.

    On top of the disk drivers is the I/O scheduler, which is responsible for ordering

and issuing disk-operation requests in a way that tries to reduce wasteful disk

    head movement or to meet some other system policy.
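One classic policy for such an I/O scheduler is the elevator algorithm, which can be sketched as follows. This is a hypothetical Python illustration (the function name and request numbers are invented), not the actual Linux scheduler: requests at or beyond the current head position are served in ascending order, then the head sweeps back through the remainder.

```python
# Hypothetical sketch of elevator-style I/O scheduling: serve pending
# block requests in one sweep direction, then reverse, to cut down on
# wasteful disk-head movement.

def elevator_order(head, requests):
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind

# Head at block 50: sweep up through 52, 80, 95, then back down to 20, 10.
print(elevator_order(50, [10, 95, 52, 20, 80]))  # [52, 80, 95, 20, 10]
```

Real Linux I/O schedulers add per-process queues, deadlines, and merging on top of this basic ordering idea.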

b. Explain the booting of Linux operating system.

When the computer starts, the BIOS performs Power-

    On-Self-Test (POST) and initial device discovery and initialization, since the

    OS’ boot process may rely on access to disks, screens, keyboards, and so on. Next,

    the first sector of the boot disk, the MBR (Master Boot Record), is read into a

    fixed memory location and executed. This sector contains a small (512-byte) program

    that loads a standalone program called boot from the boot device, such as a

    SATA or SCSI disk. The boot program first copies itself to a fixed high-memory

address to free up low memory for the operating system.

    Once moved, boot reads the root directory of the boot device. To do this, it

    must understand the file system and directory format, which is the case with some

bootloaders such as GRUB (GRand Unified Bootloader). Other popular bootloaders,

such as LILO (LInux LOader), do not rely on any specific file system.

    Then boot reads in the operating system kernel and jumps to it. At this point,

    it has finished its job and the kernel is running.

    The kernel start-up code is written in assembly language and is highly machine

    dependent. Typical work includes setting up the kernel stack, identifying the CPU

    type, calculating the amount of RAM present, disabling interrupts, enabling the

    MMU, and finally calling the C-language main procedure to start the main part of

    the operating system.

    Next the kernel data structures are allocated. Most are of fixed size, but a few,

    such as the page cache and certain page table structures, depend on the amount of

    RAM available.

    Once all the hardware has been configured, the next thing to do is to carefully

    handcraft process 0, set up its stack, and run it. Process 0 continues initialization,

    doing things like programming the real-time clock, mounting the root file system,

    and creating init (process 1) and the page daemon (process 2).

    Init checks its flags to see if it is supposed to come up single user or multiuser.

    Then it reads /etc/ttys,

    which lists the terminals and some of their properties. For each enabled terminal, it

    forks off a copy of itself, which does some housekeeping and then executes a program

    called getty.

    Getty sets the line speed and other properties for each line (some of which may

    be modems, for example), and then displays

login: on the screen. The login program then asks for a password.

If it is correct, login replaces itself with the user's shell, which then

    waits for the first command.

c. List and explain the design goals of android operating system.

Design Goals (any five, 1 mark each). A number of key design goals for the Android platform evolved during its development:

1. Provide a complete open-source platform for mobile devices.

2. Strongly support proprietary third-party applications with a robust and stable API.

3. Allow all third-party applications to compete on a level playing field.

4. Provide an application security model.

5. Support typical mobile user interaction.

6. Manage application processes for users, simplifying the user experience.

7. Encourage applications to interoperate and collaborate in rich and secure ways.

8. Create a full general-purpose operating system.

d. Write a note on hardware abstraction layer in windows operating system structure.

The Hardware Abstraction Layer:

    One goal of Windows is to make the system portable across hardware platforms.

    Ideally, to bring up an operating system on a new type of computer system

    it should be possible to just recompile the operating system on the new platform.

    Unfortunately, it is not this simple. While many of the components in some layers

    of the operating system can be largely portable (because they mostly deal with internal

    data structures and abstractions that support the programming model), other

    layers must deal with device registers, interrupts, DMA, and other hardware features

    that differ significantly from machine to machine.

    Most of the source code for the NTOS kernel is written in C rather than assembly

    language (only 2% is assembly on x86, and less than 1% on x64). However, all

    this C code cannot just be scooped up from an x86 system, plopped down on, say,

    an ARM system, recompiled, and rebooted owing to the many hardware differences

    between processor architectures that have nothing to do with the different instruction

    sets and which cannot be hidden by the compiler. Languages like C make

    it difficult to abstract away some hardware data structures and parameters, such as

    the format of page-table entries and the physical memory page sizes and word

    length, without severe performance penalties. All of these, as well as a slew of

    hardware-specific optimizations, would have to be manually ported even though

    they are not written in assembly code.

    Hardware details about how memory is organized on large servers, or what

hardware synchronization primitives are available, can also have a big impact on

    higher levels of the system. For example, NT’s virtual memory manager and the

    kernel layer are aware of hardware details related to cache and memory locality.

    Throughout the system NT uses compare&swap synchronization primitives, and it

    would be difficult to port to a system that does not have them. Finally, there are

    many dependencies in the system on the ordering of bytes within words. On all the

    systems NT has ever been ported to, the hardware was set to little-endian mode.

    Besides these larger issues of portability, there are also minor ones even between

    different parentboards from different manufacturers. Differences in CPU

    versions affect how synchronization primitives like spin-locks are implemented.

    There are several families of support chips that create differences in how hardware

    interrupts are prioritized, how I/O device registers are accessed, management of

    DMA transfers, control of the timers and real-time clock, multiprocessor synchronization,

    working with firmware facilities such as ACPI (Advanced Configuration

    and Power Interface), and so on. Microsoft made a serious attempt to hide these

    types of machine dependencies in a thin layer at the bottom called the HAL, as

    mentioned earlier. The job of the HAL is to present the rest of the operating system

    with abstract hardware that hides the specific details of processor version, support

    chipset, and other configuration variations. These HAL abstractions are presented

    in the form of machine-independent services (procedure calls and macros)

    that NTOS and the drivers can use.

    e. Explain using suitable diagram NTFS master file table and its attribute.

    Windows supports several file systems, the most important of which are FAT-16, FAT-32, and

    NTFS (NT File System).

Each NTFS volume (e.g., disk partition) contains files, directories, bitmaps, and other data

    structures.

    The principal data structure in each volume is the MFT (Master File Table), which is a linear

    sequence of fixed-size 1-KB records. Each MFT record describes one file or one directory. It

    contains the file’s attributes, such as its name and timestamps, and the list of disk addresses where its

    blocks are located. If a file is extremely large, it is sometimes necessary to use two or more MFT

    records to contain the list of all the blocks, in which case the first MFT record, called the base

    record, points to the additional MFT records.

    The MFT is itself a file and as such can be placed anywhere within the volume,

    thus eliminating the problem with defective sectors in the first track. Furthermore,

the file can grow as needed, up to a maximum size of 2^48 records.

    The MFT is shown in Fig. 11-39. Each MFT record consists of a sequence of

    (attribute header, value) pairs. Each attribute begins with a header telling which

    attribute this is and how long the value is. Some attribute values are variable

    length, such as the file name and the data. If the attribute value is short enough to

    fit in the MFT record, it is placed there. If it is too long, it is placed elsewhere on

    the disk and a pointer to it is placed in the MFT record. This makes NTFS very efficient

    for small files, that is, those that can fit within the MFT record itself.
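The resident-versus-nonresident attribute decision can be sketched with a toy model. This is a hypothetical Python illustration (the threshold, attribute names, and the dict standing in for the disk are all invented, and the real on-disk format is far more involved): short values are stored inside the MFT record, long ones are spilled out and only a pointer is kept.

```python
# Hypothetical sketch of an MFT record as (attribute header, value)
# pairs. Short values stay "resident" in the record; long ones live
# elsewhere with only a pointer in the record, which is what makes
# NTFS efficient for small files.

RESIDENT_LIMIT = 700        # illustrative threshold, not the NTFS value
external_store = {}         # stand-in for blocks elsewhere on the disk

def make_record(name, data):
    attrs = [("FILE_NAME", name)]
    if len(data) <= RESIDENT_LIMIT:
        attrs.append(("DATA", data))             # resident value
    else:
        external_store[name] = data              # spill to "disk"
        attrs.append(("DATA_POINTER", name))     # pointer kept in record
    return attrs

small = make_record("note.txt", b"hello")
big = make_record("movie.bin", b"x" * 5000)
print(small[1][0], big[1][0])   # DATA DATA_POINTER
```

Reading a small file then touches only its MFT record, while a large file costs an extra lookup through the pointer, matching the behavior described above.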

    The first 16 MFT records are reserved for NTFS metadata files, as illustrated

    in Fig. 11-39. Each record describes a normal file that has attributes and data

    blocks, just like any other file. Each of these files has a name that begins with a

    dollar sign to indicate that it is a metadata file. The first record describes the MFT

    file itself. In particular, it tells where the blocks of the MFT file are located so that

    the system can find the MFT file. Clearly, Windows needs a way to find the first

    block of the MFT file in order to find the rest of the file-system information. The

    way it finds the first block of the MFT file is to look in the boot block, where its

    address is installed when the volume is formatted with the file system.

f. Briefly explain windows power management.

The power manager rides herd on power usage throughout the system. Historically,

    management of power consumption consisted of shutting off the monitor

    display and stopping the disk drives from spinning.

    Newer power-management facilities include reducing the power consumption

    of components when the system is not in use by switching individual devices to

    standby states, or even powering them off completely using soft power switches.

Windows supports a special shutdown mode called hibernation, which copies

    all of physical memory to disk and then reduces power consumption to a small

    trickle (notebooks can run weeks in a hibernated state) with little battery drain.

    An alternative to hibernation is standby mode where the power manager reduces

    the entire system to the lowest power state possible, using just enough power

to refresh the dynamic RAM. Because memory does not need to be copied to

    disk, this is somewhat faster than hibernation on some systems.

    Despite the availability of hibernation and standby, many users are still in the

    habit of shutting down their PC when they finish working. Windows uses hibernation

    to perform a pseudo shutdown and startup, called HiberBoot, that is much faster

    than normal shutdown and startup. When the user tells the system to shutdown,

    HiberBoot logs the user off and then hibernates the system at the point they would

    normally login again. Later, when the user turns the system on again, HiberBoot

    will resume the system at the login point.

A newer, lower-power state is CS (Connected Standby). CS is possible on systems with special networking

    hardware which is able to listen for traffic on a small set of connections using

    much less power than if the CPU were running.

    Many applications today are implemented with both local code and services in

    the cloud. Windows provides WNS (Windows Notification Service) which allows

    third-party services to push notifications to a Windows device in CS without requiring

    the CS network hardware to specifically listen for packets from the third

    party’s servers. WNS notifications can signal time-critical events, such as the arrival

    of a text message or a VoIP call. When a WNS packet arrives, the processor

    will have to be turned on to process it, but the ability of the CS network hardware

    to discriminate between traffic from different connections means the processor

    does not have to awaken for every random packet that arrives at the network interface.

    _____________________________
