Chapter 10 Memory Management

Chapter 10 Memory Management. 10.1 Introduction Process must be loaded into memory before being executed. Input queue – collection of processes on the


Chapter 10: Memory Management


10.1 Introduction

• Process must be loaded into memory before being executed.

• Input queue – collection of processes on the disk that are waiting to be brought into memory for execution.

• The OS manages memory by allocating and de-allocating memory to processes.


Memory Management

We will discuss:
• Contiguous memory allocation using partitioning
• Noncontiguous memory allocation using paging and segmentation
• Virtual memory


10.2 Process Address Space

• The symbolic addresses are the addresses used in a source program. The variable names, symbolic constants and instruction labels are the basic elements of the symbolic address space.

• The compiler converts a symbolic address into a relative address.

• The physical address consists of the final address generated when the program is loaded and ready to execute in physical memory; the loader generates these addresses.


Process Address Space

• A logical address is a reference to some location in the body of a process.

• The process address space is the set of logical addresses that a process references in its code.

• When memory is allocated to the process, its set of logical addresses will be bound to physical addresses.


Mapping of Logical and Physical Addresses


10.2.1 Binding

• Binding: the association of instructions and data to memory addresses.

• Binding can occur at any of the following steps:
– compile time
– load time
– execution time


Program Phases and Addresses


Managing the Address Space

• The compiler or assembler generates the program as a relocatable object module.
• The linker combines several modules into a load module.
• During memory allocation, the loader places the load module in the allocated block of memory.
• The loader binds the logical addresses to physical addresses.
• The general address translation procedure is called address binding.
• If the mapping of logical addresses to physical addresses is carried out before execution time, it is known as static binding.
• Dynamic binding: the mapping from logical addresses to physical addresses is delayed until the process starts to execute.


10.2.2 Static and Dynamic Loading

Static loading: the absolute program is loaded into memory before execution.

Dynamic loading:
• Routines or modules to be used by a program are not loaded until called.
• All routines are stored on disk in relocatable form.
• Better memory-space utilization; unused routines are never loaded.
• Useful when large amounts of code are needed to handle infrequently occurring cases.
• No special support from the operating system is required; dynamic loading can be implemented through program design.


10.2.3 Static and Dynamic Linking

Static linking: the linker combines all other modules needed by a program into a single absolute load module before execution of the program.

Dynamic linking: the building of the absolute form of a program is delayed until execution time.

• Example: dynamic-link libraries (DLLs).
• A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
• The stub replaces itself with the address of the routine and executes the routine.
• The operating system is needed to check whether the routine is in the process's memory address space.


Logical vs. Physical Address Space

• Logical address – generated by the compiler/assembler; also referred to as virtual address.
• Physical address – address seen by the memory unit.
• Logical address space is the set of all logical addresses of a program.
• Physical address space is the set of addresses used to store the program in memory.
• The logical address space is bound to a separate physical address space.


Memory-Management Unit (MMU)

• Hardware device that maps virtual addresses to physical addresses.
• In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
• The user program deals with logical addresses; it never directly references the real physical addresses.
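A minimal sketch of this relocation-register translation follows; the register and limit values are made-up numbers for illustration, not from any real system:

```python
# Sketch of an MMU with a relocation (base) register and a limit register.
# Values are illustrative only.
RELOCATION_REGISTER = 14000   # physical start of the process's partition
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_addr):
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_addr < LIMIT_REGISTER:
        raise MemoryError("addressing error: trap to the OS")
    # The hardware adds the relocation register on every memory reference.
    return logical_addr + RELOCATION_REGISTER
```

The user program only ever sees logical addresses 0 to 2999; the addition happens in hardware on every reference.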


10.3 Contiguous Memory Allocation

• Main memory is divided into several partitions.
• A partition is a contiguous block of memory that can be allocated to an individual process.
• The degree of multiprogramming is determined by the number of partitions in memory.


Contiguous Memory Allocation (2)

• As mentioned before, when a process completes and terminates, memory is de-allocated

• This type of memory management was used in the early OSs with multiprogramming


Multiple Partitions

• Fixed partitions (static) – the number and sizes of the partitions do not change

• Variable partitions (dynamic) – partitions are created dynamically according to:
– available memory
– the memory requirements of processes


10.3.1 Fixed Partitions

• Memory is divided into fixed-size partitions; these are not normally all of the same size.
• The number and the size of the partitions are fixed.
• One partition is allocated to each active process in the multiprogramming set.
• There is one special partition, the system partition, in which the memory-resident portion of the operating system is always stored.


Fixed Partitions


Fragmentation in Fixed Partitions

Two fragmentation problems arise:

• Internal fragmentation – a partition is only partially used.

• A partition is available, but it is not large enough for any waiting process.


Memory Allocation Problem

• An important problem in fixed partitioning is finding a fit between the partition sizes and the actual memory requirements of processes.

• The goal is to minimize fragmentation


10.3.2 Dynamic Partition

• The partitions are created dynamically (as needed).
• The OS maintains a table of allocated partitions that indicates which parts of memory (location and size) are available and which have been allocated.


Holes in Memory

• Hole – a contiguous block of available memory; holes of various sizes are scattered throughout memory.
• When a process requests memory, it is allocated memory from a hole large enough to accommodate it.
• The operating system maintains data about:
– allocated partitions
– available memory blocks (holes)


Allocation with Dynamic Partitions

• At any given time, there is a list of available blocks of memory of various sizes (holes) and a queue of processes requesting memory.

• Memory is allocated contiguously to processes until there is no available block of memory large enough.


Dynamic Memory Allocation

The memory manager can:
• wait until a large enough block of memory is available, or
• skip down the queue to find a process with smaller memory requirements.


Holes and Allocation

• When a process is to be loaded, the OS searches for a hole large enough for this process and allocates the necessary space.

• When a process terminates, the OS frees its block of memory.

• In general, there is, at any time, a set of holes of various sizes scattered throughout memory.

• If a new hole is adjacent to other holes, they will be merged to form one larger hole.
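The merging step can be sketched as follows; the `(start, size)` hole representation is an illustrative structure, not a particular OS's:

```python
# Sketch of freeing a block and coalescing it with adjacent holes.
# Holes are (start, size) pairs; the list is kept sorted by start address.
def free_block(holes, start, size):
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, sz in holes[1:]:
        prev_start, prev_size = merged[-1]
        if prev_start + prev_size == s:          # adjacent: merge into one hole
            merged[-1] = (prev_start, prev_size + sz)
        else:
            merged.append((s, sz))
    return merged
```

Freeing the block at 100 of size 200 between holes (0, 100) and (300, 50) collapses all three into the single hole (0, 350).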


Memory Allocation to P7


De-allocating Memory to P5


Advantages of Dynamic Partitions

• Memory utilization is generally better for variable-partition schemes.

• There is little or no internal fragmentation. (Internal fragmentation: more memory is sometimes allocated than is needed, because memory can only be provided to programs in chunks divisible by, say, 4, 8, or 16 bytes; a program that requests 23 bytes will actually get a chunk of 24.)

• There can be external fragmentation. External fragmentation arises when free memory is separated into small blocks that are interspersed with allocated memory.
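The 23-byte / 24-byte example above is just ceiling division; a tiny sketch:

```python
# Internal fragmentation from rounding a request up to the allocation
# granularity (the 23 -> 24 byte example from the text).
def rounded_allocation(request, granularity):
    chunks = -(-request // granularity)          # ceiling division
    allocated = chunks * granularity
    return allocated, allocated - request        # (bytes given, bytes wasted)
```

With a granularity of 8 bytes, a 23-byte request gets 24 bytes and wastes 1.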


Compaction: External Fragmentation

• External fragmentation is a serious problem.
• The goal of compaction is to shuffle the memory contents to place all free memory together in one large block.
• This is only possible if relocation is dynamic (binding is done at execution time), using base and limit registers.
• Can be quite expensive (overhead).
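The shuffle itself can be sketched as below; blocks are hypothetical `(name, start, size)` triples, and "moving" a block just means changing its base, which is exactly why dynamic relocation is required:

```python
# Sketch of compaction: slide every allocated block down to low memory so
# that all free memory ends up as one large hole at the top.
def compact(blocks, memory_size):
    next_free = 0
    moved = []
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        moved.append((name, next_free, size))    # relocate: new base address
        next_free += size
    hole = (next_free, memory_size - next_free)  # one large free block
    return moved, hole
```

The overhead in a real system is copying every live byte, not just updating the table as here.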


Memory After Compaction


Memory Management with Bit Maps

• Memory is divided up into allocation units; the size of a unit may be as small as a few words or as large as several kilobytes.

• Part of memory with 5 processes, 3 holes– tick marks show allocation units– shaded regions are free


Trade-off:

The smaller the allocation unit, the larger the bitmap.

If the allocation unit is chosen large, the bitmap will be smaller, but memory may be wasted in the last unit of the process if the process size is not an exact multiple of the allocation unit.

Main problem:

When it has been decided to bring a k-unit process into memory, the memory manager must search the bitmap to find a run of k consecutive 0 bits in the map. Searching a bitmap for a run of a given length is a slow operation.
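The slow search is a plain linear scan, as this sketch shows (0 = free unit, 1 = allocated):

```python
# Searching an allocation bitmap for a run of k consecutive free units.
# This linear scan is what makes bitmap allocation slow for large memories.
def find_run(bitmap, k):
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i                # a new run of zeros begins here
            run_len += 1
            if run_len == k:
                return run_start             # first unit of the k-unit run
        else:
            run_len = 0                      # run broken by an allocated unit
    return -1                                # no run of k free units exists
```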


Algorithms to allocate memory for a newly created process

Assume that the memory manager knows how much memory to allocate.

First fit: the memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken up into two pieces, one for the process and one for the unused memory. It is a fast algorithm because it searches as little as possible.

Next fit: works the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole. The next time it is called, it starts searching the list from the place where it left off last time. Simulations (Bays, 1977) show that it gives slightly worse performance than first fit.

Best fit: searches the entire list and takes the smallest hole that is adequate. It is slower than first fit.
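First fit and best fit can be sketched over a list of `(start, size)` holes; splitting the chosen hole and updating the list is omitted for brevity:

```python
# First fit: return the start of the first hole big enough, or None.
def first_fit(holes, request):
    for start, size in holes:
        if size >= request:
            return start
    return None

# Best fit: scan the whole list and return the start of the smallest
# hole that is still big enough, or None.
def best_fit(holes, request):
    candidates = [(size, start) for start, size in holes if size >= request]
    return min(candidates)[1] if candidates else None
```

For holes (0, 50), (100, 200), (400, 120) and a 100-unit request, first fit picks the hole at 100 (first adequate one), while best fit picks the hole at 400 (smallest adequate one).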


Worst fit: to get around the problem of breaking nearly exact matches into a process and a tiny hole, it always takes the largest available hole, so that the hole broken off will be big enough to be useful. Simulation has shown that worst fit is not a very good idea either.

Quick fit: maintains separate lists for some of the more common sizes requested – e.g., a table with n entries, in which the first entry is a pointer to the head of a list of 4-KB holes, the second entry is a pointer to a list of 8-KB holes, the third entry a pointer to 12-KB holes, and so on.

Finding a hole of the required size is fast, but quick fit has the same disadvantage as all schemes that sort by hole size: when a process terminates or is swapped out, finding its neighbors to see if a merge is possible is expensive.


Memory Management with Linked Lists

Linked list of allocated and free memory segments

• The segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward.


10.3.3 Swapping

• A process can be swapped temporarily out of memory to secondary storage, and then loaded into memory again to resume execution.

• Secondary storage – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images.


Swapping

• Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed.

• Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.

• Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft Windows.


10.4 Non-contiguous Memory Allocation

Used in modern operating systems:
• Paging
• Segmentation


Pages

• A page is a unit of logical memory of a program.
• A frame is a unit of physical memory.
• All pages are of the same size.
• All frames are of the same size.
• A frame is the same size as a page.


Paging

• Physical memory is divided into fixed-size blocks called frames (the size is a power of 2).
• Logical memory is divided into blocks of the same size called pages.
• A page of a program is stored in a frame, independently of other pages.
• A logical address on a page is converted to a physical address on the corresponding frame.


Paging (2)

• The OS keeps track of all free (available) frames and allocated frames in the page table.
• To run a program of size n pages, the OS needs n free frames to load the program.
• The OS sets up a page table for every process.
• The page table is used for converting logical addresses to physical addresses.
• There is a small amount of internal fragmentation.


Memory Allocation with Paging

• The frames allocated to the pages of a process need not be contiguous; in general, the system can allocate any empty frame to a page of a particular process.
• There is no external fragmentation.
• There is potentially a small amount of internal fragmentation, which would occur on the last page of a process.


Logical vs Physical Memory

• Logical memory corresponds to the user’s view of memory

• Physical memory is the actual contents and addresses of memory

• The user’s view of memory is mapped onto physical memory

• This mapping is done on every reference to logical memory


Logical Address

• Any address referenced in a process is defined by:
– the page that the address belongs to, and
– the relative address within that page.

• A logical address of a process consists of a page number and an offset.


Logical Address (2)

The address generated by the compiler/assembler is divided into:

• Page number (p) – used as an index into a page table (which contains the base address of each page in physical memory).
• Page offset (d) – the relative address in the page.

This pair of numbers is converted to the physical memory address that is sent to the memory unit.
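When the page size is a power of two, extracting (p, d) is a shift and a mask; the 4-KB page size in this sketch is an assumption for illustration:

```python
# Splitting a logical address into page number (p) and offset (d),
# assuming 4-KB pages (2**12 bytes).
PAGE_SIZE = 4096
OFFSET_BITS = 12

def split(logical_addr):
    p = logical_addr >> OFFSET_BITS        # page number: high-order bits
    d = logical_addr & (PAGE_SIZE - 1)     # offset: low-order 12 bits
    return p, d
```

For example, logical address 8196 splits into page 2, offset 4.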


Example of a Logical Address


Memory Reference

What is a memory reference?


Physical Address

• When the system allocates a frame to a page, it translates this logical address into a physical address that consists of a frame number and the offset.

• For this, the system needs to know the correspondence of a page of a process to a frame in physical memory; it uses a page table for this purpose.


Example of a Physical Address


10.4.1.2 Address Translation


Page Table Example


Example of how the mapping works:

• Virtual addresses: 16-bit (0 – 64 KB).
• Physical memory: 32 KB (8 page frames of 4 KB each).
• A user program can be up to 64 KB, but it cannot be loaded into memory entirely and run.
• The virtual address space is divided into units called pages; the corresponding units in physical memory are called page frames.
• Pages and page frames are always the same size – 4 KB here (512 B to 64 KB in real systems).
• There are 16 virtual pages and 8 page frames.
• e.g., MOV REG, 0 is transformed by the MMU into MOV REG, 8192.


e.g., MOV REG, 8192 is transformed by the MMU into MOV REG, 24576.

In the actual hardware, a Present/absent bit keeps track of which pages are physically present in memory.


Page fault: a fault that occurs when a process attempts to access a nonresident page, in which case the OS can load the page from disk.

e.g., MOV REG, 32780 (the 12th byte within virtual page 8):

(1) The MMU notices that the page is unmapped and causes the CPU to trap to the OS.

(2) The OS picks a little-used page frame and writes its contents back to the disk.

(3) It then fetches the page just referenced into the page frame just freed.

(4) It changes the map and restarts the trapped instruction.
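Steps (1)–(4) can be sketched as below; the "little-used" victim choice is simplified to FIFO, and the page table is a plain dict mapping page to frame (or `None` when the page is on disk) – all illustrative choices, not a real OS's structures:

```python
from collections import deque

# Sketch of page-fault handling: evict a victim frame if needed,
# fetch the faulting page into it, and fix the page table.
def handle_fault(page_table, resident, page, free_frames):
    if free_frames:                        # a frame is free: no eviction
        frame = free_frames.pop()
    else:                                  # (2) pick a victim, write it back
        victim = resident.popleft()        # FIFO stands in for "little-used"
        frame = page_table[victim]
        page_table[victim] = None          # victim now lives only on disk
    page_table[page] = frame               # (3)+(4) fetch page, update map
    resident.append(page)
    return frame                           # then restart the instruction
```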


Page Tables

Internal operation of the MMU with 16 4-KB pages.

Page table: Table that stores entries that map page numbers to page frames. A page table contains an entry for each of a process’s virtual pages.

e.g., with a 16-bit address:
• High-order 4 bits: virtual page number.
• Low-order 12 bits: offset.
• 8196 is transformed into 24580 by the MMU.
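This translation can be checked with a short sketch. The page table below is a made-up fragment, chosen so that page 0 maps to frame 2 and page 2 maps to frame 6, consistent with the MOV examples on the earlier slides; the other entries are arbitrary:

```python
# Sketch of the 16-bit MMU translation: high 4 bits select the virtual
# page, low 12 bits are the offset within the 4-KB page.
PAGE_TABLE = {0: 2, 1: 1, 2: 6, 3: 0}      # virtual page -> page frame

def mmu(virtual_addr):
    page = virtual_addr >> 12              # high-order 4 bits
    offset = virtual_addr & 0xFFF          # low-order 12 bits
    return (PAGE_TABLE[page] << 12) | offset
```

Address 8196 is page 2, offset 4; page 2 maps to frame 6, so the physical address is 6 × 4096 + 4 = 24580.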


Implementation of Page Table

• A page table is kept in main memory.
• The page-table base register (PTBR) points to the page table.
• The page-table length register (PTLR) indicates the size of the page table.
• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.


Improving Memory Access

• The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative registers or a translation look-aside buffer (TLB).
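A TLB can be sketched as a small cache in front of the page table; the dict-based cache and the hit/miss counters here are illustrative simplifications (real TLBs are fixed-size associative hardware with an eviction policy):

```python
# Sketch of a TLB lookup: a hit avoids the extra memory access that the
# page-table lookup would cost.
def lookup(page, tlb, page_table, stats):
    if page in tlb:
        stats["hits"] += 1
        return tlb[page]                   # no page-table memory access
    stats["misses"] += 1                   # extra memory access happens here
    frame = page_table[page]
    tlb[page] = frame                      # cache the translation for next time
    return frame
```

The second reference to the same page hits in the TLB and skips the page-table access entirely.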


Memory Protection

• Every memory reference causes a page table lookup to get the appropriate frame number.
• Memory protection is implemented by associating a protection bit with each frame.
• A valid–invalid bit is attached to each entry in the page table:
– “valid” indicates that the associated page is in the process's logical address space, and is thus a legal page.
– “invalid” indicates that the page is not in the process's logical address space.
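The check can be sketched by carrying the bit in each page-table entry; the `(frame, valid)` tuple representation and the 4-KB page size are illustrative assumptions:

```python
# Sketch of the valid-invalid protection check: each page-table entry is
# (frame, valid); a reference through an invalid entry traps instead of
# translating.
PAGE_SIZE = 4096

def translate(page_table, logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame, valid = page_table[page]
    if not valid:
        raise MemoryError("trap: invalid page reference")
    return frame * PAGE_SIZE + offset
```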


Valid/Invalid Bit
