OS Fall’02
Background
- A compiler creates executable code
- Executable code must be brought into main memory to run
- The operating system maps the executable code onto main memory
- The "mapping" depends on the hardware
- The hardware accesses memory locations in the course of execution
Memory management
- Principal operation: bringing programs into memory for execution by the processor
- Requirements and issues:
  - Relocation
  - Protection
  - Sharing
  - Logical and physical organization
Relocation
- Compiler-generated addresses are relative
- A process can be loaded at different memory locations, swapped in/out
- The actual physical mapping is not known at compile time
- Simple if processes are allocated contiguous memory ranges
- Complicated if paging is used
Protection
- Processes must be unable to reference addresses of other processes or of the operating system
- Cannot be enforced at compile time (because of relocation)
- Each memory access must be checked for validity
- Enforced by hardware
Sharing
- Protection must be flexible enough to allow controlled sharing of the same portion of main memory
- E.g., all running instances of a text editor can share the same code
Logical organization
- Main memory is organized as a linear, one-dimensional address space
- Programs are typically organized into modules, which can be written/compiled independently, have different degrees of protection, and be shared
- A mechanism for supporting the logical structure of user programs is desirable
Physical organization
- Two-level storage hierarchy:
  - Main memory: fast, expensive, scarce, volatile
  - Secondary storage: slow, cheap, abundant, persistent
- Memory management involves a bi-directional flow of information between the two levels
Memory management techniques
- Partitioning
  - fixed or dynamic
  - now obsolete, but useful for demonstrating basic principles
- Paging and segmentation
- Paging and segmentation with virtual memory
Fixed partitioning
- Main memory is divided into a number of static partitions
  - either all partitions of the same size, or a collection of different sizes
- A process can be loaded into a partition of equal or greater size
- A process cannot be scattered across multiple partitions
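With unequal-size partitions, a common policy is to load a process into the smallest free partition that can hold it. A minimal sketch in Python (the partition sizes below are hypothetical):

```python
# Sketch: place a process into the smallest free partition of equal or
# greater size (unequal-size fixed partitioning). Sizes are hypothetical.

partitions = [[2, None], [4, None], [6, None], [8, None], [12, None]]  # [size in MB, occupant]

def place(process, size):
    free = [i for i, (psize, occupant) in enumerate(partitions)
            if occupant is None and psize >= size]
    if not free:
        return None                 # process must wait, or another is swapped out
    best = min(free, key=lambda i: partitions[i][0])
    partitions[best][1] = process
    return best

place("P1", 5)   # lands in the 6 MB partition: 1 MB of internal fragmentation
```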
Fixed partitioning
[Figure: two fixed-partitioning layouts of the same memory, each with the operating system in the first region: one with equal-size partitions (8 MB each) and one with unequal-size partitions ranging from 2 MB to 12 MB]
- Internal fragmentation: wasted space is internal to an allocated region
- Executing big programs requires explicit memory management by the programmer (overlays)
Replacement algorithms
- If all partitions are occupied, some process must be swapped out
- Select a blocked process occupying the smallest partition that will hold the incoming process
Dynamic partitioning
- Partitions are created dynamically
- Each process is loaded into a partition of exactly the same size as that process
Dynamic partitioning example
[Figure: a 1 MB memory, with 128 KB for the operating system and 896 KB initially free. Process 1 (320 KB), Process 2 (224 KB), and Process 3 (288 KB) are loaded in turn, leaving a 64 KB hole. Process 2 is then swapped out and Process 4 (128 KB) is loaded into the resulting 224 KB hole, leaving 96 KB free. Finally, Process 1 is swapped out and Process 2 is swapped back into part of its 320 KB space, leaving another 96 KB hole; the memory ends up with several small, scattered holes]
External fragmentation
- The memory external to the allocated partitions becomes fragmented into many small holes
- Can be reduced by compaction, which wastes processor time
Placement algorithms
- Free regions are organized into a linked list ordered by memory address
  - simplifies coalescing of free regions
  - increases insertion time
- First-fit: allocate the first region large enough to hold the process
  - optimization: use a roving pointer (next-fit)
- Best-fit: allocate the smallest region that is large enough to hold the process
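A sketch of first-fit and best-fit selection over an address-ordered free list (the region addresses and sizes below are hypothetical):

```python
# Sketch: first-fit vs. best-fit over a free list ordered by address.
# Each region is (start, size) in KB; the values are hypothetical.

free_list = [(100, 8), (300, 22), (500, 18), (900, 36)]

def first_fit(size):
    # the first region large enough to hold the request
    for i, (start, rsize) in enumerate(free_list):
        if rsize >= size:
            return i
    return None

def best_fit(size):
    # the smallest region that is large enough to hold the request
    fits = [i for i, (start, rsize) in enumerate(free_list) if rsize >= size]
    return min(fits, key=lambda i: free_list[i][1]) if fits else None

first_fit(16)   # index 1: the 22 KB region at address 300
best_fit(16)    # index 2: the 18 KB region at 500, leaving only 2 KB
```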
An example
[Figure: allocating a 16 KB block from a memory of free and allocated regions; the last allocated block is 14 KB. First-fit splits the first sufficiently large free block (22 KB, leaving 6 KB); best-fit splits the smallest sufficient block (18 KB, leaving 2 KB); next-fit, resuming after the last allocated block, splits the final 36 KB free block (leaving 20 KB)]
Placement algorithms: discussion
- First-fit: efficient and simple; tends to create more fragmentation near the beginning of the list (addressed by next-fit)
- Best-fit: less efficient (it might search the whole list); tends to leave behind small, unusable fragments
Buddy systems
- Approximate the best-fit principle
- Efficient search: first (and best) fit have linear complexity
- Simplified coalescing
- Several variants exist
- The binary buddy system allocates memory in blocks whose sizes are powers of 2
Binary buddy system
- Memory blocks are available in sizes 2^K, where L <= K <= U:
  - 2^L = smallest size of block allocated
  - 2^U = largest size of block allocated (generally, the entire available memory)
- Initially, all the available space is treated as a single block of size 2^U
Creating buddies
- A request to allocate a block of size s:
  - if 2^(U-1) < s <= 2^U, allocate the entire block
  - otherwise, split the block into two equal buddies of size 2^(U-1)
  - if 2^(U-2) < s <= 2^(U-1), allocate one of the buddies to the request; otherwise split further
  - proceed until the smallest sufficient block is allocated
Maintaining buddies
- Existing non-allocated buddies (holes) are kept on several lists:
  - holes of size 2^i are kept on the i-th list
  - a hole is removed from the (i+1)-th list by splitting it into two buddies and putting them onto the i-th list
  - whenever two adjacent buddies on the i-th list are freed, they are coalesced and moved to the (i+1)-th list
Finding a free buddy

get_hole(i):
    if (i == U+1) then failure;
    if (list i is empty) {
        get_hole(i+1);
        split the hole into two buddies;
        put the buddies on list i;
    }
    take the first hole on list i;
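The recursive search can be sketched in Python; the free-list layout and offset bookkeeping are illustrative assumptions, and coalescing of freed buddies is omitted:

```python
# Minimal binary buddy allocator sketch. Free list i holds the start
# offsets of free blocks of size 2**i; get_hole() mirrors the recursive
# pseudocode above. Coalescing on free is omitted for brevity.

class BuddyAllocator:
    def __init__(self, lower, upper):
        self.L, self.U = lower, upper              # smallest/largest block orders
        self.free = {i: [] for i in range(lower, upper + 1)}
        self.free[upper].append(0)                 # initially one block of size 2**U

    def get_hole(self, i):
        if i > self.U:
            raise MemoryError("failure: no block large enough")
        if not self.free[i]:
            hole = self.get_hole(i + 1)            # get a larger hole...
            self.free[i] += [hole, hole + 2 ** i]  # ...and split it into two buddies
        return self.free[i].pop(0)                 # take the first hole on list i

    def allocate(self, size):
        i = self.L                                 # round up to the smallest
        while 2 ** i < size:                       # sufficient power of two
            i += 1
        return self.get_hole(i)

alloc = BuddyAllocator(lower=4, upper=10)   # 16 B minimum, 1 KB total (hypothetical)
a = alloc.allocate(100)                     # rounded up to a 128 B block at offset 0
b = alloc.allocate(100)                     # its buddy at offset 128
```

After the two allocations, one 256 B and one 512 B hole remain on the corresponding lists, exactly as repeated splitting of the initial 1 KB block predicts.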
Remarks
- The buddy system is a reasonable compromise between fixed and dynamic partitioning
- Internal fragmentation is a problem: the expected waste is about 28%, which is high
- The buddy system is no longer used by operating systems for main-memory management
- It is used for memory management by user-level libraries (malloc)
Relocation with partitioning
[Figure: hardware relocation with base and bounds registers. A relative address from the process image (text, data, stack) in main memory is added to the base register to form the absolute address; a comparator checks it against the bounds register and, on violation, raises an interrupt to the operating system]
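The translation and check in this scheme amount to one addition and one comparison per access; a sketch (the register values are hypothetical):

```python
# Sketch of base/bounds relocation: translate a relative address into an
# absolute one and trap out-of-range accesses, as the comparator would.

BASE_REGISTER = 0x40000     # hypothetical partition start
BOUNDS_REGISTER = 0x10000   # hypothetical partition size

def translate(relative_addr):
    if not 0 <= relative_addr < BOUNDS_REGISTER:
        raise MemoryError("protection violation: interrupt to operating system")
    return BASE_REGISTER + relative_addr   # absolute address
```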
Paging
- The most efficient and flexible method of memory allocation
- Process memory is divided into fixed-size chunks called pages
- Pages are mapped onto frames in main memory
- Internal fragmentation is possible with paging, but negligible (only the last page of a process may be partially filled)
Paging support
- Process pages can be scattered all over main memory
- A per-process page table maintains the mapping of process pages onto frames
- Relocation becomes complicated: hardware support is needed to translate the relative addresses within a program into memory addresses
Paging example
[Figure: a main memory of 15 frames (0-14). Process A's four pages occupy frames 0-3; Process C's four pages occupy frames 7-10; Process D's five pages occupy frames 4-6, 11, and 12. Process B has been swapped out, so its three page-table entries are empty. Frames 13 and 14 are on the free frame list]
Address translation
- Page (frame) size is a power of 2
- With page size 2^r, a logical address of l+r bits is interpreted as a tuple (l, r):
  - l = page number, r = offset within the page
- The page number is used as an index into the page table
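A sketch of the split and lookup in Python, assuming a 4 KB page size (r = 12) and a hypothetical page table:

```python
# Sketch of paged address translation: split a logical address into
# (page number, offset) and replace the page number by a frame number.
# The page table contents are hypothetical.

R = 12                            # page size = 2**12 = 4096 bytes
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical_addr):
    page = logical_addr >> R                 # high-order l bits
    offset = logical_addr & ((1 << R) - 1)   # low-order r bits
    frame = page_table[page]                 # page table lookup
    return (frame << R) | offset             # offset is unchanged
```

For example, logical address 0x1ABC falls in page 1 at offset 0xABC; with page 1 in frame 2 it translates to physical address 0x2ABC.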
Hardware support
[Figure: paging hardware. The page number from the logical address is added to the page table pointer register to index the page table; the frame number from the matching entry is combined with the unchanged offset to form the physical address in main memory]
Segmentation
- A program can be divided into segments
- Segments are of variable size
- Segments reflect the logical (modular) structure of a program: text segment, data segment, stack segment, ...
- Similar to dynamic partitioning, except that segments of the same process can be scattered
- Subject to external fragmentation
Address translation
- The maximum segment size is always a power of 2
- A per-process segment table maps segment numbers to their base addresses in memory
- With a maximum segment size of 2^r, a logical address of l+r bits is interpreted as a pair (l, r):
  - l = segment number, r = offset within the segment
- l is used as an index into the segment table
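The same split applies as for paging, but the table entry now yields a base that is added to the offset after a length check; a sketch with a hypothetical segment table and r = 16:

```python
# Sketch of segmented address translation: the segment number indexes a
# table of (length, base) entries; the offset is bounds-checked and then
# added to the base. The table contents are hypothetical.

R = 16                                    # maximum segment size = 2**16
segment_table = {0: (0x3000, 0x10000),    # segment -> (length, base)
                 1: (0x0800, 0x40000)}

def translate(logical_addr):
    seg = logical_addr >> R
    offset = logical_addr & ((1 << R) - 1)
    length, base = segment_table[seg]
    if offset >= length:                  # length check: interrupt to OS
        raise MemoryError("segment length violation")
    return base + offset
```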
Hardware support
[Figure: segmentation hardware. The segment number from the virtual address is added to the segment table pointer register to index the segment table; each entry holds a length and a base, the offset d is checked against the length, and the physical address in main memory is computed as base + d]
Remarks
- The price of paging/segmentation is sophisticated hardware, but the advantages far exceed the cost
- Paging decouples address translation from memory allocation
- Not all logical addresses need to be mapped into physical memory at every given moment: virtual memory
- Paging and segmentation are often combined to benefit from both worlds