Memory Management F2007/Unit6/1
UNIT 6
OBJECTIVES
General Objective: To understand the basic memory management of an operating system
Specific Objectives: At the end of the unit you should be able to:
define memory management
list the objectives of memory management in an operating system
explain the concept of virtual memory and its implementation
explain the relocation policy in memory management
6.0 Introduction
Effective memory management is vital in a multiprogramming system. If
only a few processes are in memory, then for much of the time all of the
processes will be waiting for I/O and the processor will be idle.
Thus, memory needs to be allocated efficiently to pack as many processes
into memory as possible.
6.1 Objectives
While surveying the various mechanisms and policies associated with
memory management, it is well to keep in mind the requirements that
memory management is intended to satisfy. There are five such requirements:
Relocation
Protection
Sharing
Logical organization
Physical organization
INPUT
6.1.1 Relocation
In a multiprogramming system, the available main memory is
generally shared among a number of processes. Typically, it is not possible
for the programmer to know in advance which other programs will be
resident in main memory at the time of execution of his program. In addition,
we would like to be able to swap active processes in and out of main
memory to maximize processor usage by providing a large pool of ready
processes to execute. Once a program has been swapped out to disk, it would
be quite limiting to declare that when it is next swapped back in it must be
placed in the same main memory region as before.
Thus, we cannot know ahead of time where a program will be
placed, and we must allow it to be moved about in main memory as a
result of swapping. This fact raises some technical concerns related to
addressing, as illustrated in figure 6.1, which depicts a process image.
For simplicity, let us assume that the process image occupies a contiguous
region of main memory. Clearly the operating system will need to know
the location of the process control information, the execution stack, as
well as the entry point to begin the execution of the program for this
process. As the operating system manages memory and is responsible for
bringing this process into main memory, these addresses are easy to come
by. In addition, however, the processor must deal with memory references
within the program. Branch instructions must contain an addressed to
reference the instruction to be executed next. Data-reference instructions
must contain the address of the byte or word of data referenced. Some
how, the processor hardware and operating system software must be able
Memory Management F2007/Unit6/4
to translate the memory references found in the code of programs into
actual physical memory addresses that reflected the current location of the
program in main memory.
Figure 6.1: Addressing requirements for a process
(Source: Stallings, William (1995) Operating Systems)
6.1.2 Protection
Each process should be protected against unwanted interference by
other processes, whether accidental or intentional. Thus, programs in
other processes should not be able to reference memory locations in a
process, for reading and writing purposes without permission. In one
sense, satisfaction of the relocation requirement increases under difficulty
of satisfying the protection requirement. Because the location of a
program in main memory is unknown, it is possible to check absolute
addresses at compile time to assure protection. Furthermore, most
programming languages allow the dynamic calculation of addresses at run
time, for example by computing an array subscript or a pointer into a data
structure. Hence, all memory references generated by a process must be
checked at run time to ensure that they refer only to the memory space
allocated to that process. Fortunately, as we shall see, mechanisms that
support relocation also provide the basis for satisfying the protection
requirement. The process image layout in figure 6.1 illustrates the
protection requirement. Normally, a user process cannot access any
portion of the operating system, either program or data. Similarly, a program
in one process cannot branch to an instruction in another process. And,
without special arrangement, a program in one process cannot access the
data area of another process. The processor must be able to abort such
instructions at the point of execution.
Note that, in terms of our example, the memory protection requirement
must be satisfied by the processor (hardware) rather than the operating
system (software). This is because the operating system cannot anticipate
all the memory references that a program will make. Even if such
anticipation were possible, it would be prohibitively time consuming to
screen each program in advance for possible memory-reference violations.
Thus, it is possible to assess the permissibility of a memory
reference (data access or branch) only at the time of execution of the
instruction making the reference. To accomplish this, the processor must
have that capability.
6.1.3 Sharing
Any protection mechanisms that are implemented must have the
flexibility to allow several processes to access the same portion of main
memory. For example, if a number of processes are executing the same
program, it is advantageous to allow each process to access the same copy
of the program rather than have its own separate copy. Processes that are
cooperating on some task may need to share access to the same data
structure. The memory management system must therefore allow controlled
access to shared areas of memory without compromising essential
protection. Again, we shall see that the mechanisms used to support
relocation form the basis for sharing capabilities.
6.1.4 Logical Organization
Almost invariably, main memory in a computer system is organized as
a linear, or one-dimensional, address space that consists of a sequence of
bytes or words. Secondary memory, at its physical level, is similarly
organized. Although this organization closely mirrors the actual machine
hardware, it does not correspond to the way in which programs are
typically constructed. Most programs are organized into modules, some of
which are unmodifiable (read only, execute only) and some of which contain
data that may be modified. If the operating system and computer hardware
can effectively deal with user programs and data in the form of modules
of some sort, then a number of advantages can be identified as follows:
1. Modules can be written and compiled independently, with all
references from one module to another resolved by the system at run time.
2. With modest additional overhead, different degrees of protection
(read only, execute only) can be given to different modules.
3. It is possible to introduce mechanisms by which modules can be shared
among processes. The advantage of providing sharing on a module
level is that this corresponds to the user's way of viewing the problem,
and hence it is easy for the user to specify the sharing that is desired.
The tool that most readily satisfies these requirements is segmentation,
which is one of the memory management techniques explored in this
chapter.
6.1.5 Physical Organization
Computer memory is organized into at least two levels: main
memory and secondary memory. Main memory provides fast access at
relatively high cost. In addition, main memory is volatile; that is, it does
not provide permanent storage. Secondary memory is slower and cheaper
than main memory, and it is usually not volatile. Thus, secondary
memory’s large capacity can be provided to allow long term storage of
programs and data, while a smaller main memory holds programs and
data currently in use.
In this two level scheme, the organization of the flow of information
between main and secondary memory is a major system concern. The
responsibility for this flow could be assigned to the individual
programmer, but this is impractical and undesirable for two reasons:
1. The main memory available for a program plus its data may be
insufficient. In that case, the programmer must engage in a practice
known as overlaying, in which the program and data are organized in
such a way that various modules can be assigned the same region of
memory, with a main program responsible for switching the modules
in and out as needed. Even with the aid of compiler tools, overlay
programming wastes programmer time.
2. In a multiprogramming environment, the programmer does not know
at the time of coding how much space will be available or where that
space will be.
It is clear then, that the task of moving information between the two levels
of memory should be a system responsibility. This task is the essence of
memory management.
ACTIVITY 6A
TEST YOUR UNDERSTANDING BEFORE YOU CONTINUE TO THE
NEXT INPUT...!
6.1 Give five objectives of memory management.
6.2 Give three advantages of logical organization.
6.3 Into how many levels is computer memory organized?
FEEDBACK TO ACTIVITY 6A
6.1
Relocation
Protection
Sharing
Logical organization
Physical organization
6.2
1. Modules can be written and compiled independently, with all references
from one module to another resolved by the system at run time.
2. With modest additional overhead, different degrees of protection (read
only, execute only) can be given to different modules.
3. It is possible to introduce mechanisms by which modules can be shared
among processes. The advantage of providing sharing on a module level
is that this corresponds to the user's way of viewing the problem, and
hence it is easy for the user to specify the sharing that is desired.
6.3 Computer memory is organized into two levels:
i. main memory
ii. secondary memory
6.2 Virtual memory concept
INPUT
Many years ago people were first confronted with programs that were too
big to fit in the available memory. The solution usually adopted was to split the
program into pieces, called overlays. Overlay 0 would start running first. When
it was done, it would call another overlay. Some overlay systems were highly
complex, allowing multiple overlays in memory at once. The overlays were kept
on the disk and swapped in and out of memory by the operating system.
Although the actual work of swapping overlays in and out was done by the
system, the work of splitting the program into pieces had to be done by the
programmer. Splitting up large programs into small, modular pieces was time
consuming and boring. It did not take long before someone thought of a way to
turn the whole job over to the computer.
The method that was devised (Fotheringham, 1961) has come to be known
as virtual memory. The basic idea behind virtual memory is that the combined
size of the program, data and stack may exceed the amount of physical memory
available for it. The operating system keeps those parts of the program currently
in use in main memory, and the rest on the disk. For example, a 1M program can
run on a 256K machine by carefully choosing which 256K to keep in memory at
each instant, with pieces of the program being swapped between disk and
memory as needed.
Virtual memory can also work in a multiprogramming system. For example,
eight 1M programs can each be allocated a 256K partition in a 2M memory,
with each program operating as though it had its own private 256K machine.
In fact, virtual memory and multiprogramming fit together very well. While a
program is waiting for part of itself to be swapped in, it is waiting for I/O and
cannot run, so the CPU can be given to another process.
6.3 Virtual memory implementation
Virtual memory can be implemented using the paging and segmentation
techniques described below:
6.3.1 Paging technique
The main problem of contiguous allocation is external
fragmentation. This is overcome in the present scheme. Here a process is
allocated physical memory wherever it is available, and this scheme
is called paging.
In the basic method, physical memory is broken into fixed-size blocks
called frames. Logical memory is also broken into blocks of the same size,
called pages.
Every address generated by the CPU is divided into two parts: a page
number (p) and a page offset (d). The page number p is used as an index
into a page table. The page table contains the base address of each page
lying in physical memory. The base address read from the page table is
combined with the page offset (d) to generate the physical memory address.
The page size generally varies from 512 bytes to 8192 bytes,
depending upon the hardware design. If the size of the logical address space is
2^m and the page size is 2^n addressing units (bytes or words), then the high-
order (m - n) bits of a logical address designate the page number and the n
low-order bits designate the page offset. Thus the page number p occupies
(m - n) bits and the offset d occupies n bits.
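The split-and-combine translation just described can be sketched in a few lines of Python. The page size and page table contents here are hypothetical, chosen only for illustration:

```python
PAGE_SIZE = 4096  # 2^12-byte pages, so the offset d occupies n = 12 bits

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into page number p and offset d, then
    combine the frame's base address with d to form the physical address."""
    p = logical_address // PAGE_SIZE   # high-order (m - n) bits
    d = logical_address % PAGE_SIZE    # low-order n bits
    frame = page_table[p]              # page-table lookup
    return frame * PAGE_SIZE + d

# Page 1, offset 0xA2B: the page lives in frame 2, so the physical
# address is 2 * 4096 + 0xA2B.
print(hex(translate(0x1A2B)))  # -> 0x2a2b
```

With a power-of-two page size, the division and remainder reduce to simply slicing the high-order and low-order bits of the address, which is why hardware can do this translation on every reference.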
The advantage of the paging scheme is that there is no external
fragmentation; however, there is some internal fragmentation. This is because
the last page allocated may not fall exactly on the boundary of the process's
memory requirement. In the worst case, nearly a whole frame is wasted
per process, so n processes may waste nearly n frames of memory.
An important aspect of the paging scheme is the separation between the
user's view of memory and the actual physical memory. The program is
scattered throughout physical memory, and the logical addresses are
transparently translated to physical addresses. Another scheme,
segmentation, is discussed next.
6.3.2 Segmentation techniques
The program and its associated data are divided into a number of
segments. It is not required that all segments of all programs be of the same
length, although there is a maximum segment length. As with paging, a logical
address using segmentation consists of two parts: in this case, a segment number
and an offset.
Because of the use of unequal-size segments, segmentation is similar to dynamic
partitioning. In the absence of an overlay scheme or the use of virtual memory, it
would be required that all of a program's segments be loaded into memory for
execution. The difference, compared with dynamic partitioning, is that with
segmentation a program may occupy more than one partition, and these partitions
need not be contiguous. Segmentation eliminates internal fragmentation but, like
dynamic partitioning, it suffers from external fragmentation. However, because a
process is broken up into a number of smaller pieces, the external fragmentation
should be less.
Whereas paging is invisible to the programmer, segmentation is usually visible
and is provided as a convenience for organizing programs and data. Typically, the
programmer or the compiler assigns programs and data to different segments. For
purposes of modular programming, the program or data may be further broken
down into multiple segments. The principal inconvenience of this service is that
the programmer must be aware of the maximum size limitation on segments.
Another consequence of unequal-size segments is that there is no simple
relationship between logical addresses and physical addresses. Analogous to
paging, a simple segmentation scheme would make use of a segment table for
each process and a list of free blocks in main memory. Each segment table entry
would have to give the starting address in main memory of the corresponding
segment. The entry should also provide the length of the segment, to ensure that
invalid addresses are not used.
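A segment-table lookup of this kind can be sketched as follows. The table contents are hypothetical; the length check mirrors the length field just described:

```python
# Hypothetical segment table: segment number -> (base address, length).
segment_table = {0: (0, 1200), 1: (5000, 400), 2: (8000, 2000)}

def seg_translate(segment, offset):
    """Translate a (segment, offset) logical address to a physical address,
    using the entry's length field to reject invalid addresses."""
    base, length = segment_table[segment]
    if not 0 <= offset < length:
        raise MemoryError("offset lies outside the segment")
    return base + offset

print(seg_translate(1, 100))  # base 5000 + offset 100 -> 5100
```

Unlike paging, the base here is an arbitrary starting address rather than a frame number times a fixed block size, which is why segment bases and lengths must both be stored explicitly.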
6.4 Relocation policy
Before we consider ways of dealing with the shortcomings of partitioning,
we must clear up one loose end, which relates to the placement of processes in
memory. When the fixed-partition scheme is used, we can expect that a process
will always be assigned to the same partition. That is, the partition that is selected
when a new process is loaded will always be used to swap that process back
into memory after it has been swapped out. When the process is first loaded, all
relative memory references in the code are replaced by absolute main memory
addresses, determined by the base address of the loaded process.
In the case of equal-size partitions, and in the case of a single process
queue for unequal-size partitions, a process may occupy different partitions
during the course of its life. When a process image is first created, it is loaded
into some partition in main memory. Later, the process may be swapped out;
when it is subsequently swapped back in, it may be assigned to a partition
different from the previous one. The same is true for dynamic partitioning.
Now, consider that a process in memory includes instructions plus data.
The instructions will contain memory references of the following two types:
Addresses of data items, used in load and store instructions and
some arithmetic and logical instructions.
Addresses of instructions, used for branch and call
instructions.
But now we see that these addresses are not fixed. They change each
time a process is swapped in or moved. To solve this problem, a
distinction is made among several types of addresses. A logical address is
a reference to a memory location independent of the current assignment of
data to memory; a translation must be made to a physical address before
the memory access can be achieved. A relative address is a particular
example of a logical address, in which the address is expressed as a location
relative to some known point, usually the beginning of the program. A
physical address, or absolute address, is an actual location in main
memory.
Programs that employ relative addresses in memory are loaded using
dynamic run-time loading. This means that all the memory references in
the loaded process are relative to the origin of the program. Thus, a means
is needed in hardware of translating relative addresses to physical main
memory addresses at the time of execution of the instruction that contains the
reference.
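The hardware translation described here can be sketched with a base register, plus a bounds check that also serves the protection requirement of section 6.1.2. The register values below are hypothetical:

```python
BASE = 0x4000    # where the process image happens to be loaded this time
BOUND = 0x1000   # size of the process image

def to_physical(relative_address):
    """Add the base register to a relative address at execution time,
    aborting any reference that falls outside the process image."""
    if not 0 <= relative_address < BOUND:
        raise MemoryError("reference outside the process image")
    return BASE + relative_address

print(hex(to_physical(0x10)))  # -> 0x4010
```

When the process is swapped out and later loaded at a different address, only BASE changes; the relative addresses embedded in the code stay the same, which is exactly what makes relocation by swapping possible.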
6.4.1 Non-segmentation system (best fit, first fit, next fit)
Because memory compaction is time consuming, it behooves the
operating system designer to be clever in deciding how to assign processes
to memory (how to plug the holes). When it is time to load or swap a
process into main memory, and if there is more than one free block of
memory of sufficient size, then the operating system must decide which
free block to allocate.
Three placement algorithms that can be considered are best fit, first
fit and next fit. All are limited to choosing among free blocks of main
memory that are equal to or larger than the process to be brought in. Best
fit chooses the block that is closest in size to the request. First fit begins to
scan memory from the beginning and chooses the first available block that
is large enough. Next fit begins to scan memory from the location of the
last placement and chooses the next available block that is large enough.
Figure 6.2(a) shows an example memory configuration after a
number of placement and swapping-out operations. The last block that was
used was a 22KB block, from which a 14KB partition was created. Figure
6.2(b) shows the difference between the best-fit, first-fit and next-fit placement
algorithms in satisfying a 16KB allocation request. Best fit will search the
entire list of available blocks and make use of the 18KB block, leaving a
2KB fragment. First fit results in a 6KB fragment, and next fit results in
a 20KB fragment.
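The three placement algorithms can be sketched as scans over a list of free blocks. The block sizes below are hypothetical, chosen so that the leftover fragments match the 2KB, 6KB and 20KB figures quoted above:

```python
def first_fit(free, size):
    """Scan from the beginning; take the first block that is large enough."""
    for i, block in enumerate(free):
        if block >= size:
            return i
    return None

def best_fit(free, size):
    """Search the whole list; take the block closest in size to the request."""
    best = None
    for i, block in enumerate(free):
        if block >= size and (best is None or block < free[best]):
            best = i
    return best

def next_fit(free, size, last):
    """Scan from just after the last placement, wrapping around once."""
    n = len(free)
    for k in range(1, n + 1):
        i = (last + k) % n
        if free[i] >= size:
            return i
    return None

free = [8, 12, 22, 18, 6, 14, 36]   # free block sizes in KB (hypothetical)
req = 16
for name, i in [("first fit", first_fit(free, req)),
                ("best fit", best_fit(free, req)),
                ("next fit", next_fit(free, req, last=3))]:
    print(f"{name}: uses {free[i]}K block, leaves a {free[i] - req}K fragment")
```

Run on this list, first fit takes the 22K block (6K left over), best fit takes the 18K block (2K left over), and next fit, resuming after the last placement, reaches the 36K block (20K left over).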
Which of these approaches is best will depend on the exact sequence
of process swappings that occurs and the sizes of those processes. The first
fit algorithm is not only the simplest but usually the best and fastest as well.
The next fit algorithm tends to produce slightly worse results than
first fit. The next fit algorithm will more frequently lead to an allocation
from a free block at the end of memory. The result is that the largest block
of free memory, which usually appears at the end of the memory space, is
quickly broken up into small fragments. Thus, compaction may be required
more frequently with next fit. On the other hand, the first fit algorithm
may litter the front end with small free partitions that need to be searched
over on each subsequent first-fit pass. The best fit algorithm, despite its
name, is usually the worst performer. Because this algorithm looks for the
smallest block that will satisfy the requirement, it guarantees that the
fragment left behind is as small as possible. Although each memory
request always wastes the smallest amount of memory, the result is that main
memory is quickly littered by blocks too small to satisfy requests for
memory allocation. Thus, memory compaction must be done more
frequently than with the other algorithms.
Figure 6.2: Memory configuration before and after allocation of a 16KB block.
Part (a) shows a mix of allocated and free blocks (8K, 12K, 22K, 18K, 8K, 6K,
14K, 36K) with a 14K partition just allocated; part (b) shows the block chosen
and the fragment left by each placement algorithm.
(Source: Stallings, William (1995) Operating Systems)
6.4.2 Segmentation System (LRU, LFU, FIFO)
As pointed out earlier, in the paging scheme the user's view of memory
is not the same as the actual physical memory. The user views memory as
a collection of segments of variable size, with no necessary ordering
among the segments.
Consider the simple situation when you are writing a program. You
write a main program with a set of subroutines, functions, etc. You may
use stacks, arrays and tables, referred to by name, and you do not care where they are
stored. Elements in a segment are identified by their offset from the beginning
of the segment, like the first statement of the program or the fifth instruction of
the square root function.
The memory management scheme using segmentation supports this
user view of memory. The logical address space is a collection of segments;
each segment has a name and a length. Addresses specify both the
segment name and the offset within the segment. The user specifies each
address by a segment name and an offset; segments can also be numbered
and referred to by a segment number.
Similar to the page table, the segment table can be kept in fast registers,
so that it can be referred to quickly. However, if it is kept in memory, then
the mapping requires two memory references for each logical address,
thus slowing down the computer.
To improve speed, a set of associative registers is used to hold the most
recently used segment table entries, which reduces the mapping time by 10 - 15%.
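The associative-register idea can be sketched as a small cache of segment-table entries in front of the in-memory table. The capacity and the least-recently-used eviction policy below are assumptions for illustration:

```python
from collections import OrderedDict

class SegmentTableCache:
    """A tiny associative cache holding the most recently used
    segment-table entries; a hit avoids the extra memory reference
    that an in-memory table lookup would cost."""

    def __init__(self, capacity=8):
        self.entries = OrderedDict()   # segment number -> (base, length)
        self.capacity = capacity

    def lookup(self, segment, table):
        """Return the entry and the number of memory references used:
        1 on a cache hit, 2 on a miss (table lookup + data access)."""
        if segment in self.entries:
            self.entries.move_to_end(segment)   # refresh recency on a hit
            return self.entries[segment], 1
        entry = table[segment]                  # miss: go to the in-memory table
        self.entries[segment] = entry
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict the least recently used
        return entry, 2
```

Repeated references to the same few segments then cost one memory reference instead of two, which is the source of the speedup claimed above.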
An advantage of segmentation is that one can associate protection with a
segment; for instance, an instruction segment can be made read-only.
Another advantage is the sharing of code and data: programs such as editors
can be shared, and only one copy is needed. Segmentation may cause
external fragmentation, causing a process to wait until a large enough hole is
available.
ACTIVITY 6B
6.4 Fill in the blanks with the suitable answers given below
a. Many years ago, people were confronted with programs that were too big to
fit in the available memory. The solutions were called
__________________.
b. The method that was devised (Fotheringham, 1961) is known as
______________________.
c. The basic idea behind virtual memory is that the combined
______________ may exceed the amount of physical memory available.
size of the program, data and stack
virtual memory overlays
6.5 Try to guess the virtual memory implementation techniques below:
a.
g T
b.
N H
FEEDBACK TO ACTIVITY 6B
6.4 a Overlay
b. virtual memory
c. Size of program, data and stack
6.5
a.
P A G I N G   T E C H N I Q U E S
b.
S E G M E N T A T I O N T E C H N I Q U E S
SELF- ASSESSMENT 1
You are approaching success. Try all the questions in this self-assessment
section and check your answers with those given in the Feedback on Self-
Assessment 1 on the next page. If you face any problems, discuss them with your
lecturer. Good luck!!!
Question 6-1
a. Discuss logical organization as one of the objectives of memory
management in an operating system.
b. What is the importance of relocation and protection in memory
management?
SELF ASSESSMENT 2
Question 6-2
a. Describe the implementation of virtual memory techniques.
b. Explain the non-segmentation and segmentation systems in memory
management.
FEEDBACK TO SELF-ASSESSMENT 1
Question 6-1
Please refer to the input given and discuss with your lecturer.
FEEDBACK TO SELF-ASSESSMENT 2
Question 6-2
Please refer to the input given and discuss with your lecturer.