Page 1: Chapter 8, Main Memory

1

Chapter 8, Main Memory

Page 2: Chapter 8, Main Memory

2

8.1 Background

• When a machine language program executes, it may cause memory address reads or writes

• From the point of view of memory, it is of no interest what the program is doing

• All that is of concern is how the program/operating system/machine manage access to the memory

Page 3: Chapter 8, Main Memory

3

Address binding

• The O/S manages an input queue in secondary storage of jobs that have been submitted but not yet scheduled

• The long term scheduler takes jobs from the input queue, triggers memory allocation, and puts jobs into physical memory

• PCB’s representing the jobs go into the scheduling system’s ready queue

Page 4: Chapter 8, Main Memory

4

• The term memory address binding refers to the system for determining how memory references in programs are related to the actual physical memory addresses where the program resides

• In short, this aspect of system operation stretches from the contents of high level language programs to the hardware the system is running on

Page 5: Chapter 8, Main Memory

5

Variables and memory

• 1. In high level language programs, memory addresses are symbolic.

• Variable names make no reference to an address space in the program

• But in the compiled, loaded code, the variable name is associated with a memory address that doesn’t change during the course of a program run

• This memory location is where the value of the variable is stored

Page 6: Chapter 8, Main Memory

6

Relative memory addresses

• 2. When a high level language program is compiled, typically the compiler generates relative addresses.

• This means that the loaded code contains address references into the data and code space starting with the value 0

• Instructions which have variable operands, for example, refer to the variables in terms of offsets into the allocated memory space beginning at 0

Page 7: Chapter 8, Main Memory

7

Loader/linkers

• 3. An operating system includes a loader/linker.

• This is part of the long term scheduler functionality.

• When the program is placed in memory, assuming (as is likely) that its base load address is not 0, the relative addresses it contains don’t agree with the physical addresses it occupies

Page 8: Chapter 8, Main Memory

8

Resolving relative addresses to absolute addresses

• A simple approach to solving this problem is to have the loader/linker convert the relative addresses of a program to absolute addresses at load time.

• Absolute addresses are the actual physical addresses where the program resides

Page 9: Chapter 8, Main Memory

9

• Note the underlying assumptions of this scenario

• 1. Programs can be loaded into arbitrary memory locations

• 2. Once loaded, the locations of programs in memory don’t change

Page 10: Chapter 8, Main Memory

10

Compile time address binding

• There are several different approaches to binding memory access in programs to actual locations

• 1. Binding can be done at compile time

• If it’s known in advance where in memory a program will be loaded, the compiler can generate absolute code

Page 11: Chapter 8, Main Memory

11

Load time address binding

• 2. Binding can be done at load time

• This was the simple approach described earlier

• The compiler generates relocatable code

• The loader converts the relative addresses to actual addresses at the time the program is placed into memory.

Page 12: Chapter 8, Main Memory

12

Run time address binding

• 3. Binding can be done at execution time

• This is the most flexible approach

• Relocatable code (containing relative addresses) is actually loaded

• At run time, the system converts each relative memory address reference to a real, absolute address

Page 13: Chapter 8, Main Memory

13

• Implementing such a system removes the restriction that a program is always in the same address space

• This kind of system supports advanced memory management systems like paging and virtual memory, the topics of Chapters 8 and 9

Page 14: Chapter 8, Main Memory

14

• In simple terms, you see that run time, or dynamic, address binding supports medium term scheduling

• A job can be offloaded and reloaded without needing either to reload it to the same address or go through the address binding process again

Page 15: Chapter 8, Main Memory

15

• The diagram on the following overhead shows the various steps involved in getting a user written piece of high level code into a system and running

Page 16: Chapter 8, Main Memory

16

Page 17: Chapter 8, Main Memory

17

Logical vs. physical address space

• The address generated by a program running on the CPU is a logical address

• The address that actually gets manipulated in the memory management unit of the CPU—that ends up in the memory management unit memory address register—is a physical address

• Under compile time or load time binding, the logical and physical addresses are the same

Page 18: Chapter 8, Main Memory

18

• Under run time/execution time binding, the logical and physical addresses differ

• Logical addresses can be called virtual addresses.

• The book uses the terms interchangeably

• However, for the time being, it’s better to refer to logical addresses, so you don’t confuse this concept with the broader concept of virtual memory, the topic of Chapter 9

Page 19: Chapter 8, Main Memory

19

• Overall, the physical memory belonging to a program can be called its physical address space

• The complete set of possible memory references that a program would generate when running can be called its logical address space (or virtual address space)

Page 20: Chapter 8, Main Memory

20

• For efficiency, memory management in real systems is supported in hardware

• The mapping from logical to physical is done by the memory management unit (MMU)

• In the simplest of schemes, the MMU contains a relocation register

• Suppose you are doing run time address binding

Page 21: Chapter 8, Main Memory

21

• The MMU relocation register contains the base address, or offset into main memory, where a program is loaded

• Converting from a relative address to an absolute address means adding the relative address generated by the running program to the contents of the relocation register
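As a minimal C sketch of that conversion (the type and names are illustrative, not from the slides):

typedef unsigned int addr_t;    /* an address; the width is illustrative */

addr_t relocation_register;     /* base address where the program was loaded */

/* Run-time binding: every relative (logical) address the running program
   generates is offset by the contents of the relocation register.        */
addr_t to_physical(addr_t relative)
{
    return relocation_register + relative;
}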

Page 22: Chapter 8, Main Memory

22

• When a program is running, every time an instruction makes reference to a memory address, the relative address is passed to the MMU

• The MMU is transparent.

• It does everything necessary to convert the address

Page 23: Chapter 8, Main Memory

23

• For a simple read, for example, given a relative address, the MMU returns the data value found at the converted address

• For a simple write, the MMU takes the given data value and relative address, and writes the value to the converted address

• All other memory access instructions are handled similarly

• An illustrative diagram of MMU functionality follows

Page 24: Chapter 8, Main Memory

24

Memory management unit functionality with relative addresses

Page 25: Chapter 8, Main Memory

25

• Although the simple diagram doesn’t show it, logical address references generated by a program can be out of range

• In principle, these would cause the MMU to generate out of range physical addresses

• However, the point is that under relative addressing, the program lives in its own virtual world

Page 26: Chapter 8, Main Memory

26

• The program deals only in logical addresses while the system handles mapping them to physical addresses

• It will be shown shortly how the possibility of out of range references can be handled by the MMU

Page 27: Chapter 8, Main Memory

27

• The previous discussion illustrated addressing in a very basic way

• What follows are some historical enhancements, some of which led to the characteristics of complete, modern memory management schemes:

– Dynamic loading

– Dynamic linking and shared libraries

– Overlays

Page 28: Chapter 8, Main Memory

28

Dynamic loading

• Dynamic loading is a precursor to paging, but it isn’t efficient enough for a modern environment

• It is reminiscent of medium term scheduling

• One of the assumptions so far has been that a complete program had to be loaded into memory in order to run

• Consider the alternative scenario given on the next overhead

Page 29: Chapter 8, Main Memory

29

• 1. Separate routines of an application are stored on the disk in relocatable format

• 2. When a routine is called, first it’s necessary to check if it’s already been loaded.

– If so, control is transferred to it

• 3. If not, the loader immediately loads it and updates its address table

– An application/routine address table entry contains the value that would go into the relocation register for each of the routines, when it’s running

Page 30: Chapter 8, Main Memory

30

Dynamic linking and shared libraries

• To understand dynamic linking, consider what static linking would mean

• If every user program that used a system library had to have a copy of the system code bound into it, that would be static linking

• This is clearly inefficient.

• Why make multiple copies of shared code in loaded program images?

Page 31: Chapter 8, Main Memory

31

• Under dynamic linking, a user program contains a special stub where system code is called

• At run time, when the stub is encountered, a system call checks to see whether the needed code has already been loaded by another program

• If not, the code is loaded and execution continues

• If the code was already loaded, then execution continues at the address where the system had loaded it

Page 32: Chapter 8, Main Memory

32

• Dynamic linking of system libraries supports both transparent library updates and the use of different library versions

• If user code is dynamically linked to system code and the system code changes, there is no need to recompile the user code.

• The user code doesn’t contain a copy of the system code

Page 33: Chapter 8, Main Memory

33

• If different versions of libraries are needed, this is straightforward

• Old user code will use whatever version was in effect when it was written

• New versions of libraries need new names (i.e., names with version numbers) and new user code can be written to use the new version

Page 34: Chapter 8, Main Memory

34

• If it is desirable for old user code to use the new library version, the old user code will have to be changed so that the stub refers to the new rather than the old

• Obviously, the ability to do this is all supported by system functionality

Page 35: Chapter 8, Main Memory

35

• The fundamental functionality, from the point of view of memory management, is shared access to common memory

• In general, the memory space belonging to one process is disjoint from the memory space belonging to another

• However, the system may support access to a shared system library in the virtual memory space of more than one user process

Page 36: Chapter 8, Main Memory

36

Overlays

• This is a technique that is very old and has little modern use

• It is possible that it would have some application in modern environments where physical memory was extremely limited

Page 37: Chapter 8, Main Memory

37

• Suppose a program ran sequentially and could be broken into two halves, where no loop or branch reached from the second half back to the first

• Suppose that the system provided a facility so that a running program could load an executable image into its memory space

• This is reminiscent of forking where the fork() is followed by an exec()

Page 38: Chapter 8, Main Memory

38

• Suppose those requirements were met and memory was large enough to hold half of the program but not all of the program

• Write the first half and have it conclude by loading the second half

Page 39: Chapter 8, Main Memory

39

• This is not simple to do: it requires system support, it certainly won’t solve all of your problems, and it would be prone to mistakes

• However, something like this may be necessary if memory is tiny and the system doesn’t support advanced techniques like paging and virtual memory

Page 40: Chapter 8, Main Memory

40

8.2 Swapping

• Swapping was mentioned before as the action taken by the medium term scheduler

• Remember to keep the term distinct from switching, which refers to switching loaded processes on and off of the CPU

• In this section, swapping will refer to the approach used to support multi-programming in systems with limited memory

Page 41: Chapter 8, Main Memory

41

• Elements of swapping existed in early versions of Windows

• Swapping continues to exist in Unix environments

Page 42: Chapter 8, Main Memory

42

• This is the scenario for swapping:

• Execution images for >1 job may be in memory

• The long term scheduler picks a job from the input queue

• There isn’t enough memory for it

• So the image of a currently inactive job is swapped out and the new job is swapped in

Page 43: Chapter 8, Main Memory

43

• Medium term scheduling does swapping on the grounds that the multi-programming level is too high

• In other words, the CPU is the limiting resource

• Swapping as discussed now is implemented because memory space is limited

• Note that swapping for either reason isn’t suitable for interactive-type processes

Page 44: Chapter 8, Main Memory

44

• Swapping is slow because it writes to a swap space in secondary storage

• Swapping can be useful as a protection against limited resources, whether CPU (medium term scheduling) or memory (swapping as described here)

• However, transferring back and forth from the disk is definitely not a time-effective strategy for supporting multi-programming, let alone multi-tasking, on a modern system

Page 45: Chapter 8, Main Memory

45

8.3 Contiguous Memory Allocation

• Along with the other assumptions made so far, such as the fact that all of a program has to be loaded into memory, another assumption is made

• In simple systems, the whole program is loaded, in order, from beginning to end, in one block of physical memory

Page 46: Chapter 8, Main Memory

46

• Referring back to earlier chapters, the interrupt vector table is assigned a fixed memory location

• O/S code is assigned a fixed location

• User processes are allocated contiguous blocks in the remaining free memory

• Valid memory address references for relocatable code are determined by a base address and a limit value

Page 47: Chapter 8, Main Memory

47

• The base address corresponds to relative address 0

• The limit tells the amount of memory allocated to the program

• In other words, the limit corresponds to the largest valid relative address

Page 48: Chapter 8, Main Memory

48

• The limit register contains the maximum relative address value.

• The relocation register contains the base address allocated to the program

• Keep in mind that when context switching, these registers are among those that the dispatcher sets

• The following diagram illustrates the MMU in more detail under these assumptions

Page 49: Chapter 8, Main Memory

49

MMU functionality with relative addresses, contiguous memory allocation, and limit and relocation registers
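A minimal C sketch of what the diagram depicts, with the trap to the O/S modeled as an abort (the names are illustrative, not from the slides):

#include <stdio.h>
#include <stdlib.h>

typedef unsigned int addr_t;

addr_t limit_register;       /* the largest valid relative address          */
addr_t relocation_register;  /* the base physical address of the allocation */

/* MMU check under contiguous allocation: a relative address is valid only
   if it does not exceed the limit; otherwise the hardware traps to the O/S. */
addr_t translate(addr_t relative)
{
    if (relative > limit_register) {
        fprintf(stderr, "addressing error: trap to O/S\n");
        exit(EXIT_FAILURE);
    }
    return relocation_register + relative;
}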

Page 50: Chapter 8, Main Memory

50

Memory allocations

• A simple scheme for allocating memory is to give processes fixed size partitions

• A slightly more efficient scheme would vary the partition size according to the program size

• The O/S keeps a table or list of free and allocated memory

Page 51: Chapter 8, Main Memory

51

• Part of scheduling becomes determining whether there is enough memory to load a new job

• Under contiguous allocation, that means finding out whether there is a “hole” (window of free memory) large enough for the job

• If there is a large enough hole, in principle, that makes things “easy” (stay tuned)

Page 52: Chapter 8, Main Memory

52

• If there isn’t a large enough hole you have two choices:

• A. Wait and schedule the new process when a large enough hole becomes available

• B. Set the current new job aside and have the scheduler search for jobs in the input queue that are small enough to fit into available holes

Page 53: Chapter 8, Main Memory

53

The dynamic storage allocation problem

• This is a classic problem of memory management

• The assumption is that scattered throughout memory are various holes of contiguous memory large enough for the process to be loaded into them

• The question is how to choose which of those holes to load the process into

Page 54: Chapter 8, Main Memory

54

• Historically, three algorithms have been considered

• 1. First fit: Put a process into the first hole found that’s big enough for it.

– This is fast and allocates memory efficiently

• 2. Best fit: Look for the hole closest in size to what’s needed.

– This is not as fast, and it’s not clearly better in allocation

Page 55: Chapter 8, Main Memory

55

• 3. Worst fit: This essentially means, load the job into the largest available hole.

– In practice it performs as badly as its name suggests, but see the following bullets

• Note that for any of these three choices, the question is not where in the hole to load the process

• For the sake of argument, assume that it will be loaded at the beginning of the hole
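A rough C sketch of first fit over a linked list of holes (the structure and names are illustrative); best fit and worst fit differ only in which hole the scan finally selects:

#include <stddef.h>

/* One hole of contiguous free memory in a list the O/S might keep. */
struct hole {
    size_t       base;   /* start address of the hole */
    size_t       size;   /* size of the hole in bytes */
    struct hole *next;
};

/* First fit: return the first hole big enough for the request.
   Best fit would scan the whole list for the smallest adequate hole;
   worst fit would scan for the largest.                              */
struct hole *first_fit(struct hole *free_list, size_t request)
{
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;    /* load at h->base; then shrink or unlink the hole */
    return NULL;         /* no hole large enough: wait, or try a smaller job */
}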

Page 56: Chapter 8, Main Memory

56

External fragmentation

• External fragmentation describes the situation when memory has been allocated to processes leaving lots of scattered, small holes

• If sufficiently small, the holes are wasted memory space under contiguous loading

• Even though worst fit doesn’t work, the idea behind it was to leave holes of usable size

Page 57: Chapter 8, Main Memory

57

• Empirical studies have shown that for an amount of allocated memory measured as N, an amount of memory approximately equal to .5N will be lost due to external fragmentation

• This is known as the 50% rule

• In other words, since .5N out of every N + .5N total is unusable, under contiguous memory allocation about 1/3 of memory is wasted due to unusable, small memory holes external to the blocks that are successfully allocated

Page 58: Chapter 8, Main Memory

58

Block allocation

• In reality, memory is typically allocated in fixed size blocks rather than exact byte counts corresponding to process size

• Keeping track of arbitrary, varying amounts of memory allocation is not practical due to the overhead involved

• A block may consist of 1KB or some other measure of similar magnitude or larger

Page 59: Chapter 8, Main Memory

59

• Under block allocation, a process is allocated enough contiguous blocks to contain the whole program

• External fragmentation still results under block allocation

• The smallest possible hole will be one block

Page 60: Chapter 8, Main Memory

60

Internal fragmentation

• Something called internal fragmentation also results from block allocation

• This refers to the wasted memory in the last block allocated to a process

• Internal fragmentation on average is equal to ½ of the size of one block

Page 61: Chapter 8, Main Memory

61

Picking a block size

• Picking a block size is a classic case of balancing extremes

• If block size is large enough, each process will only need one block.

• This degenerates into fixed partitions for processes, with large waste due to internal fragmentation

Page 62: Chapter 8, Main Memory

62

• If block size is small enough, you approach allocating byte by byte, which is undesirable due to record keeping overhead

• If the blocks are small, internal fragmentation is insignificant, but this is not an overriding advantage

Page 63: Chapter 8, Main Memory

63

• Block allocation is a desirable enhancement of contiguous memory allocation, but it’s still contiguous memory allocation

• It is reasonable to assume that external fragments, even if measured in units of blocks rather than bytes, can become small enough to be unusable

Page 64: Chapter 8, Main Memory

64

Memory compaction

• Memory compaction is an approach to solving the fragmentation resulting from contiguous memory allocation

• Compaction refers to relocating programs loaded in memory in order to reduce fragmentation

• Relocation is a system process that happens dynamically, without unloading the user processes

Page 65: Chapter 8, Main Memory

65

• If programs use absolute memory addresses, they simply can’t be relocated.

• Memory couldn’t be compacted without recompiling the programs.

• This would require unloading them and loading the recompiled code

• This is out of the question

• It would not be a dynamic process

Page 66: Chapter 8, Main Memory

66

• If programs use relative memory addresses, they are relocatable.

• Even during run time, they can be moved to new memory locations

• The system relocation process accomplishes this by doing the relocating and updating the base and offset register values for user processes

• Relocation makes it possible to squeeze the loaded programs together in memory, squeezing out the unusable fragments

Page 67: Chapter 8, Main Memory

67

8.4 Paging

• Paging is a big deal

• Fundamentally, paging is a memory management technique that makes it possible to load a program into non-contiguous memory

• A page is a fixed-size block

• A program may be large enough that it has to be loaded into more than one page

• But the program does not have to be loaded into a contiguous set of pages

Page 68: Chapter 8, Main Memory

68

• Paging solves two problems:

• 1. Under paging, external fragmentation is not a problem.

– Even a single, isolated, unallocated page is still usable

– It can be allocated as part of a non-contiguous allocation

• Another way of putting this is that with paging, memory compaction will never be needed

Page 69: Chapter 8, Main Memory

69

• 2. Under paging, fragmentation in the swap space in secondary storage is also eliminated

– When memory compaction was discussed above, its relationship to swapping was not mentioned

– It turns out that memory compaction and swapping are incompatible, because compacting the secondary storage space to match the reorganized memory space would take too long

– Memory compaction is not necessary with paging, so this problem is solved

Page 70: Chapter 8, Main Memory

70

How paging is implemented

• Paging is based on the idea that the O/S can maintain data structures that match given blocks in physical memory with given ranges of virtual addresses in programs

• Physical memory is conceptually broken into fixed size frames

• Logical memory is broken into pages of the same size

Page 71: Chapter 8, Main Memory

71

• In essence, the O/S maintains a lookup table telling which logical page matches with which physical frame

• In contiguous memory allocation there was a limit register and a relocation register

• In paging there are special registers for placing the logical address and forming the corresponding physical address

Page 72: Chapter 8, Main Memory

72

• In paging, fixed page sizes mean that the limits are always the same, but there is a table containing the relocation values telling which frame each page address is relocated to

• It is important to understand that under paging, allocation isn’t contiguous, but complete programs do have to be loaded

• For a program of x pages, x frames will be needed

• The number of frames allocated will differ for programs of different sizes

Page 73: Chapter 8, Main Memory

73

Implementation Details

• Every (logical) address generated by the CPU takes this form:

• Page part (p) | offset part (d)

• The page part is a page id

• The offset part is the location of a given word within the page that contains it

• More specifically, let an address consist of m bits

• Then a logical address can be pictured as shown on the next overhead

Page 74: Chapter 8, Main Memory

74

Page 75: Chapter 8, Main Memory

75

• The addresses are binary numbers

• There are m bits for the address overall

• That means the address space consists of 2^m addresses

• The range of valid addresses goes from 0 to 2^m – 1

Page 76: Chapter 8, Main Memory

76

• The components of the address fall neatly into two parts

• The (m – n) digits for p can be treated separately as a page number in the range from 0 to 2^(m – n) – 1

• The n digits for d can be treated separately as an offset in the range from 0 to 2^n – 1

• The fact that n bits are reserved for the offset into a page implies that the size of a page is 2^n bytes

Page 77: Chapter 8, Main Memory

77

Forming an address from a page table

• Paging is based on maintaining a page table

• For some page value p, the corresponding frame value f is looked up in the page table

• The lookup is done at offset p in the table

• The offset, d, is unchanged

Page 78: Chapter 8, Main Memory

78

• The physical address is formed by appending the binary value for d to the binary value for f

• The result is f | d

• The formation of a physical address from a logical address, p | d, using a page table, is illustrated in the following diagram

Page 79: Chapter 8, Main Memory

79
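The translation the diagram illustrates can be sketched in C; the single-level table, the 12-bit offset, and the names are assumptions chosen to match the later examples:

#include <stdint.h>

#define OFFSET_BITS 12u                        /* n: page size 2^12 = 4 KB */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

extern uint32_t page_table[];                  /* page_table[p] holds frame id f */

/* Split a logical address into p | d, look p up in the page table,
   and append d to the frame id: the physical address is f | d.      */
uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> OFFSET_BITS;       /* page number        */
    uint32_t d = logical &  OFFSET_MASK;       /* offset within page */
    uint32_t f = page_table[p];                /* frame id           */
    return (f << OFFSET_BITS) | d;
}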

Page 80: Chapter 8, Main Memory

80

The contents of a page table

• In theory you could have a global page table containing entries for all processes

• In practice, each process may have its own page table which is used when that process is scheduled

• When a process is initially scheduled by the long term scheduler, its page table would be populated with the frames allocated to it

Page 81: Chapter 8, Main Memory

81

• When the short term scheduler context switches between processes, an address register pointing to the page table would be changed

• The use of the page table for a single process can be illustrated with a simple example

• Each page table entry is like a base and offset for a given page in the process

Page 82: Chapter 8, Main Memory

82

Page 83: Chapter 8, Main Memory

83

• Note again that under paging there is no external fragmentation

• Every empty physical memory space is a usable frame

• Internal fragmentation will average one half of a frame per process

Page 84: Chapter 8, Main Memory

84

Page sizes

• In modern systems page sizes vary in the range of around 512 bytes to 16MB

• The smaller the page size, the smaller the internal fragmentation

• However, if the memory space is large, there is overhead in allocating small pages and maintaining a page table with lots of entries

Page 85: Chapter 8, Main Memory

85

• As hardware resources have become less costly, larger memory spaces have become available

• Page sizes have grown correspondingly large

• Page sizes of 2K-8K may be considered representative of an average, modern system

Page 86: Chapter 8, Main Memory

86

Summary of paging ideas

• 1. The logical view of the address space is separate from the physical view.

– This means that code is relocatable, not absolute

• 2. The logical view is of contiguous memory.

• Paging is completely hidden by the MMU.

– Allocation of frames is not contiguous

– However, programs have to be loaded in their entirety

Page 87: Chapter 8, Main Memory

87

• 3. Although the discussion has been in terms of the page table, in reality there is also a global frame table.

– The frame table provides the system with ready look-up of which frames have been allocated, and which are free and still available for allocation

• 4. There is a page table for each process.

– It keeps track of memory allocation from the process point of view and supports the translation from logical to physical addresses

Page 88: Chapter 8, Main Memory

88

Hardware support for paging

• A page table has to hold the mapping from logical pages to physical frames for a single process

• Note that the page table resides in memory

• The minimum hardware support for paging is a dedicated register on the chip which holds the address of the page table of the currently running process

Page 89: Chapter 8, Main Memory

89

• With this minimal support, for each logical memory address generated by a program, two accesses to actual memory would be necessary

• The first access would be to the page table, the second to the physical address located there

• This is expensive

• In order to support non-contiguous allocation, the cost of a memory access is doubled

Page 90: Chapter 8, Main Memory

90

• In order to be viable, paging needs additional hardware support.

• There are two basic choices

• 1. Dedicated registers

• 2. Translation look-aside buffers

Page 91: Chapter 8, Main Memory

91

• 1. Have a complete set of dedicated registers for the page table.

– That is, each page table entry would reside in a register

– There would have to be as many registers as the maximum number of frames that could be allocated per process

– This is fast, but the hardware cost (monetary and real estate on the chip) becomes impractical if the memory space is large

Page 92: Chapter 8, Main Memory

92

• 2. The chip will contain hardware elements known as translation look-aside buffers (TLB’s).

– This is the current state of the art, and it will be explained below

Page 93: Chapter 8, Main Memory

93

Translation look-aside buffers

• Translation look-aside buffers are in essence a special set of registers which support look-up.

• In other words, they are table-like.

• They are designed to contain keys, p, page identifiers, and values, f, the matching frame identifiers

• They are different from dedicated registers

• They are designed to hold a subset of the page table

Page 94: Chapter 8, Main Memory

94

• TLB’s have an additional, special characteristic.

• They are not independent buffers.

• They come as a collection

• The “look-aside” part of the name is meant to suggest that when a search value is “dropped” onto the TLB, for all practical purposes, all of the buffers are searched for that value simultaneously.

Page 95: Chapter 8, Main Memory

95

• If the search value is present, the matching value is found within a fixed number of clock cycles

• In other words, look-up in a TLB does not involve linear search or any other software search algorithm.

• There is no order of complexity to searching depending on the number of entries in the collection of TLB’s.

• Response time is fixed and small

Page 96: Chapter 8, Main Memory

96

• TLB’s are like a highly specialized cache

• The set of TLB’s doesn’t store a whole page table

• When a process starts accessing pages, this requires reading the page table and finding the frame

• Once a page has been read the first time, it’s entered into the TLB

• Subsequent reads to that page will not require reading from the page table in memory

Page 97: Chapter 8, Main Memory

97

• Just like with caching, some process memory accesses will be a TLB “hit” and some will be a TLB “miss”

• A hit is very economical

• With a hit, a memory access requires a reference to the TLB followed by one main memory access

Page 98: Chapter 8, Main Memory

98

• A miss requires reading the page table and replacing the (LRU) entry in the TLB with the most recently accessed page

• In other words, a miss incurs the full “double” cost of accessing memory twice

• The first access updates the TLB and the second finds the desired memory address

• Memory management with TLB’s is shown in the following diagrams

Page 99: Chapter 8, Main Memory

99

Page 100: Chapter 8, Main Memory

100

• In the following diagram, the page table is shown in memory, where it’s located.

• The ALU, TLB’s, and logical and physical address registers are all in the CPU.

• The TLB’s and address registers are in the MMU of the CPU.

• A program running in the ALU generates a logical memory address which is passed to the MMU, which translates it to a physical address and reads from or writes to it.

Page 101: Chapter 8, Main Memory

101

Page 102: Chapter 8, Main Memory

102

• Note the following things about the diagram

• The page table is complete, so a search of the page table simply means jumping to offset p in the table

• The TLB is a subset, so it has to have both the key (p) and look-up (f) values in it

• It shows addressing, but it doesn’t attempt to show, through arrows or other notation, the replacement of TLB entries on a miss

Page 103: Chapter 8, Main Memory

103

TLB hits and misses

• Paging costs can be summarized in this way

• On a hit: TLB access + memory access

• On a miss: TLB access + memory access to page table + memory access to desired page

• The book states that typical TLB’s are in the range from 16 to 512 entries

• With this number of TLB’s, a hit ratio of 80%-98% can be achieved

Page 104: Chapter 8, Main Memory

104

Calculating the cost of paging

• Given a hit ratio and some sample values for the time needed for TLB and memory access, weighted averages for the cost of paging can be calculated

• For example, let the time needed for a TLB search be 20 ns.

• Let the time needed for a main memory access be 100 ns.

Page 105: Chapter 8, Main Memory

105

• Cost of TLB hit: 20 + 100 = 120 ns

• Cost of TLB miss: 20 + 100 + 100 = 220 ns

• Let the hit ratio be 80%

• Then the overall, weighted cost of paging is: .8(120) + .2(220) = 140 ns
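The weighted-average arithmetic can be captured in a small C helper; this is a sketch, and the function name, parameters, and the levels argument are illustrative rather than anything from the slides:

#include <stdio.h>

/* Effective access time with a TLB: a weighted average of hit and miss costs.
   levels = number of page table levels read from memory on a TLB miss.       */
double effective_access_ns(double hit_ratio, double tlb_ns,
                           double mem_ns, int levels)
{
    double hit_cost  = tlb_ns + mem_ns;                    /* TLB + target access */
    double miss_cost = tlb_ns + levels * mem_ns + mem_ns;  /* + page table walk   */
    return hit_ratio * hit_cost + (1.0 - hit_ratio) * miss_cost;
}

int main(void)
{
    /* The slides' example: 20 ns TLB, 100 ns memory, 80% hit ratio, 1 level */
    printf("%.0f ns\n", effective_access_ns(0.80, 20, 100, 1));  /* prints 140 */
    return 0;
}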

Page 106: Chapter 8, Main Memory

106

• In other words, if you could always access memory directly, it would take 100 ns.

• With paging, it takes on average 140 ns.

• Paging imposes a 40% overhead on memory access

• On the other hand, without TLB’s, every memory access would cost 100 ns. + 100 ns., which would mean a 100% overhead on memory access

Page 107: Chapter 8, Main Memory

107

Justification for paging

• Why would you live with a 40% overhead cost on memory accesses?

• Remember the reasons for introducing the idea of paging:

• It allows for non-contiguous memory allocation

Page 108: Chapter 8, Main Memory

108

• This solves the problem of external fragmentation in memory

• As long as the page size strikes a balance between large and small, internal fragmentation is not great

• There is also a potential benefit in reducing fragmentation in swap space—but supporting non-contiguous memory allocation is the main event

Page 109: Chapter 8, Main Memory

109

Having a global page table

• The previous discussion has referred to a page table as belonging to one process

• This would mean there would be many page tables

• When a new process was scheduled, the TLB would be flushed so that pages belonging to the new process would be loaded.

Page 110: Chapter 8, Main Memory

110

• The alternative is to have a single, unified page table

• This means that each page table entry, in addition to a value for f, would have to identify which process it belonged to

• The identifier is known as an ASID, an address space id

Page 111: Chapter 8, Main Memory

111

• Such a table would work like this:

• When a process generated a page id, the TLB would be searched for that page

• If found, it would further be checked to see if the page belonged to the process

• If so, everything is good

Page 112: Chapter 8, Main Memory

112

• If not, this is simply a page miss

• Replacement would occur using the usual algorithm for replacement on a miss

• With a page table like this, there is no need for flushing when a new process is scheduled

• In effect, the TLB is flushed entry by entry as misses occur

Page 113: Chapter 8, Main Memory

113

Implementing protection in the page table with valid and invalid bits

• Recall that a page table functions like a set of base and limit registers

• Each page address is a base, and the fixed page size functions as a limit

• If a system maintains page tables of length n, then the maximum amount of memory that could theoretically be allocated to a process is n pages, or n * (page length) bytes

Page 114: Chapter 8, Main Memory

114

• In practice, processes do not always need the maximum amount of memory and will not be allocated that much

• This information can be maintained in the page table by the inclusion of a valid/invalid bit

Page 115: Chapter 8, Main Memory

115

• If a page table entry is marked “i”, this means that if a process generates that logical page, it is trying to access an address outside of the memory space that was allocated to it

• A diagram of the page table follows

Page 116: Chapter 8, Main Memory

116
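A minimal C sketch of a page table entry carrying a valid/invalid bit, with the out-of-range trap modeled as an abort (field names are illustrative):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One page table entry: the frame id plus a valid/invalid bit. */
struct pte {
    uint32_t frame;   /* f, the allocated frame id     */
    uint8_t  valid;   /* 1 = "v" (allocated), 0 = "i"  */
};

extern struct pte page_table[];

uint32_t lookup_frame(uint32_t p)
{
    if (!page_table[p].valid) {
        /* The page is outside the process's allocation: trap to the O/S. */
        fprintf(stderr, "invalid page reference: trap\n");
        exit(EXIT_FAILURE);
    }
    return page_table[p].frame;
}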

Page 117: Chapter 8, Main Memory

117

A page table length register

• An alternative to valid/invalid bits is a page table length register (PTLR)

• The idea is simple—this register is like a limit register for the page table

• The range of logical addresses for a given process begins at page 0 and goes to some maximum which is less than the absolute maximum size allowed for a page table

• When a process generates a page, it is checked against the PTLR to see if it’s valid

Page 118: Chapter 8, Main Memory

118

• The valid/invalid bit scheme can be extended to support finer protections

• For example, read/write/execute protections can be represented by three bits

• You typically think of these protections as being related to a file system

Page 119: Chapter 8, Main Memory

119

• In theory, different pages of a process could have different attributes

• This may be especially important if you are dealing with shared memory accessible to >1 process

• It is also likely to be complicated in practice, and the idea won’t be pursued further here

Page 120: Chapter 8, Main Memory

120

8.5 Structure of the Page Table

Page 121: Chapter 8, Main Memory

121

• The topic of this section is the structure of page tables

• Before considering the structure, it’s helpful to consider the sizes of address spaces that a page table may have to support

• Modern systems may support address spaces in the range of 2^32 to 2^64 bytes

• 2^32 is 4 gigabytes

• 2^64 ~= 18.4 x 10^18

Page 122: Chapter 8, Main Memory

122

• The higher value is what you get if you allow all 64 bits of a 64 bit architecture to be used as an address

• Note that this is 16 x 2^60, but by this stage the powers of 2 and the powers of 10 do not match up the way they do where we casually equate 2^10 to 10^3

Page 123: Chapter 8, Main Memory

123

This is just a digression

• According to Wikipedia, the standard prefixes for the SI units of measure are:

• Multiples
– Name: deca- hecto- kilo- mega- giga- tera- peta- exa- zetta- yotta-
– Symbol: da h k M G T P E Z Y
– Factor: 10^1 10^2 10^3 10^6 10^9 10^12 10^15 10^18 10^21 10^24

• Subdivisions
– Name: deci- centi- milli- micro- nano- pico- femto- atto- zepto- yocto-
– Symbol: d c m µ n p f a z y
– Factor: 10^-1 10^-2 10^-3 10^-6 10^-9 10^-12 10^-15 10^-18 10^-21 10^-24

Page 124: Chapter 8, Main Memory

124

• The reality is that modern systems support logical address spaces too large for simple page tables

• In order to support these address spaces, hierarchical or multi-level paging is used

• Take the lower of the address spaces given above, 2^32

• Let the page size be 2^12, or 4 KB

Page 125: Chapter 8, Main Memory

125

• 2^32 bytes of memory divided into pages of size 2^12 bytes means a total of 2^20 pages

• The corresponding physical address space would consist of 2^20 frames

• That means that each page table entry would have to be at least 20 bits long, in order to hold the frame id

Page 126: Chapter 8, Main Memory

126

• Suppose each page table entry is 4 bytes, or 32 bits, long

• This would allow for validity and protection bits in addition to the frame id

• It’s also simpler to argue using powers of 2 rather than speaking in terms of a table entry of length 3 bytes

Page 127: Chapter 8, Main Memory

127

• A page table with 2^20 entries, each of size 2^2 bytes, means the page table is of length 2^22 bytes, or 4 MB

• But a page itself under this scenario was only 2^12 bytes, or 4 KB

• In other words, it would take 1 K of pages to hold the complete page table for a process that had been allocated the theoretical maximum amount of memory possible

Page 128: Chapter 8, Main Memory

128

• To restate the result in another way, the page table won’t fit into a single page

• In theory, it might be possible to devise a hybrid system where the memory for page tables was allocated and addressed by the O/S as a monolithic block instead of in pages

• Then that page table would support paging of user memory

Page 129: Chapter 8, Main Memory

129

• Having two different addressing schemes in the same system would be a mess and leads to questions like, could there be fragmentation in the monolithic page table block?

• It is preferable not to have the page table consist of monolithic (and contiguous) memory

Page 130: Chapter 8, Main Memory

130

• The practical solution to the problem is hierarchical or multi-level paging

• The underlying idea is to come up with a scheme where a large page table can be managed as a collection of individual pages

• In one of its forms, multi-level paging is similar to indexing

• The book refers to this as a forward-mapped page table

Page 131: Chapter 8, Main Memory

131

• Under multi-level paging, given a logical page value, you don’t look up the frame id directly

• You look up another page that contains a page id for the page containing the desired frame id

• The book mentions that this kind of scheme was used by the Pentium II

Page 132: Chapter 8, Main Memory

132

• The multi-level paging scheme will be illustrated in the following diagrams

• A logical address of 32 bits can be divided into blocks of 10, 10, and 12 bits

• 10 + 10 = 20 bits correspond to the page identifier

• The remaining 12 bits correspond to d, the offset into a page of size 2^12 bytes

Page 133: Chapter 8, Main Memory

133

• There is a reason for treating the first 20 bits as two blocks of 10 bits

• The example illustrates a two level page table scheme

• The size of a page is 2^12 bytes

• If a page table entry is 4 bytes (2^2 bytes) wide, then a page can hold 2^10 page table entries

Page 134: Chapter 8, Main Memory

134

• Conceptually, the first 10 bits in an address will be used as an offset into an outer page table

• The entry found in that table will refer to one of 2^10 inner page tables

• The second 10 bits in the address will be used as an offset into that inner page table

• The entry found there will refer to a page that is in the address space of the process

Page 135: Chapter 8, Main Memory

135

• The last 12 bits of the address will be the offset into the memory page (frame) allocated to the process

• 12 bits are used for this instead of 10

• That’s because addressing of allocated memory pages is byte-by-byte, and a page contains 2^12 bytes

• These ideas are graphically illustrated on the following overheads

Page 136: Chapter 8, Main Memory

136

This is the form of a page address

Page 137: Chapter 8, Main Memory

137

This is how a logical address maps to a physical address through multiple levels
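A minimal C sketch of that walk, using the 10 | 10 | 12 split of the example (the table names are illustrative):

#include <stdint.h>

extern uint32_t *outer_table[];   /* outer_table[p1] points to an inner page table */

uint32_t translate(uint32_t logical)
{
    uint32_t p1 = (logical >> 22) & 0x3FFu;   /* top 10 bits: outer table offset  */
    uint32_t p2 = (logical >> 12) & 0x3FFu;   /* next 10 bits: inner table offset */
    uint32_t d  =  logical        & 0xFFFu;   /* low 12 bits: offset into page    */

    uint32_t *inner = outer_table[p1];        /* first memory access on a walk  */
    uint32_t  f     = inner[p2];              /* second memory access           */
    return (f << 12) | d;                     /* third access reads/writes here */
}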

Page 138: Chapter 8, Main Memory

138

This shows the multiple layers of the page table

Page 139: Chapter 8, Main Memory

139

Calculating the cost of paging using a multi-level page table

• The cost of a page miss will be higher for a two level page table than for a one level table

• This is because three memory accesses, two to the page table levels plus one to the desired address, would be needed to resolve a missed address, rather than two

Page 140: Chapter 8, Main Memory

140

• As before, let the time needed for a TLB search be 20 ns.

• Let the time needed for a main memory access be 100 ns.

• Cost of TLB hit: 20 + 100 = 120 ns

• Cost of TLB miss: 20 + 100 + 100 + 100 = 320 ns

Page 141: Chapter 8, Main Memory

141

• In the calculation for the miss, the first 100 is the outer page table, the second 100 is the inner page table, the third 100 is the access to the desired address

• Let the hit ratio be 98%

• Then the overall, weighted cost of paging is: .98(120) + .02(320) = 124 ns

• The overhead cost of paging under this scheme is 24%
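Using the hypothetical effective_access_ns helper sketched after the one-level calculation, the same figures fall out with levels = 2:

/* miss: 20 + 2(100) + 100 = 320 ns; weighted: .98(120) + .02(320) = 124 ns */
printf("%.0f ns\n", effective_access_ns(0.98, 20, 100, 2));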

Page 142: Chapter 8, Main Memory

142

Larger address spaces

• Observe what happens if you go to a 64 bit address space and a page size of 4KB

• Sample address breakdowns are shown on the next overhead for two and three level paging

• If you only break the address into three or four parts, the number of bits for one of the parts is so high that you again have the problem that a level of the page table won’t fit into a single page

Page 143: Chapter 8, Main Memory

143

Page 144: Chapter 8, Main Memory

144

• Depending on page size, some 32 bit systems go to 3 or 4 levels

• To implement multi-level paging in a 64 bit system, you would need 6 levels

• This is too deep to be practical

• A page miss would involve seven accesses to memory

• This makes the cost of paging too high

Page 145: Chapter 8, Main Memory

145

Hashed page tables--Hashing

• Hashed page tables provide an alternative approach to multi-level paging in a large address space

• The first thing you need to keep in mind is what hashing is, how it works, and what it accomplishes

Page 146: Chapter 8, Main Memory

146

How hashing works

• You may have a widely dispersed set of n different x values in a given domain

• You have a specific, compact set of y values that you want to map to in the range.

• You need a hashing function, y = f(x), that converts x values into the desired set of y values in the range

Page 147: Chapter 8, Main Memory

147

• In the ideal case, there would be a set of exactly n different, contiguous y values that the x’s map to

• That would mean that no two x values would ever collide

• However, this doesn’t typically happen

• f() needs to be devised so that the likelihood that any two x values will give the same y value is small

Page 148: Chapter 8, Main Memory

148

• f() also has to be quick and easy to compute

• In practice the range will be somewhat larger than n and collisions may occur

• The most common kind of hashing function is based on division and remainders

Page 149: Chapter 8, Main Memory

149

• Choose z to be the smallest prime number larger than n

• Then let f(x) = x % z

• f(x) will fall into the range [0, z – 1]

Page 150: Chapter 8, Main Memory

150

• Hashing makes it possible to create a look-up table that doesn’t require an index or any sorting or searching

• Let there be z entries in the table, at offsets 0 through z – 1

• Store the entry for x at the offset y = f(x) in the table

• When x occurs again and you want to look up the corresponding value in the table, compute y = f(x) and read the entry at that offset y

Page 151: Chapter 8, Main Memory

151

• Note that the key, x, has to be stored as part of the table entry, along with the value that goes with it that you’re trying to look up

• This is necessary in order to resolve collisions

• An example of a hashing algorithm and the resulting hash table is illustrated in the following diagram

Page 152: Chapter 8, Main Memory

152
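A rough C sketch of such a chained hash table; the key x is stored in each entry so that collisions can be told apart, and the names and choice of prime are illustrative:

#include <stddef.h>

#define Z 1009u   /* a prime somewhat larger than the expected number of keys, n */

/* A chained hash table entry: key, value, and a link for collisions. */
struct entry {
    unsigned long x;       /* the key                  */
    unsigned long value;   /* what we want to look up  */
    struct entry *next;    /* link for colliding keys  */
};

struct entry *table[Z];    /* one chain head per hash slot */

static unsigned long f(unsigned long x) { return x % Z; }

unsigned long *lookup(unsigned long x)
{
    for (struct entry *e = table[f(x)]; e != NULL; e = e->next)
        if (e->x == x)     /* compare stored keys to resolve collisions */
            return &e->value;
    return NULL;           /* not present */
}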

Page 153: Chapter 8, Main Memory

153

Hashed page tables—Why?

• Consider again the background of multi-level paging and its disadvantages

• A multi-level page table provides a tree-like way of using pages to access the whole memory address space

• Each level in the tree corresponds to a block of bits in an address

Page 154: Chapter 8, Main Memory

154

• As the address space grows large, multiple levels become necessary to resolve a given logical address into a physical address in memory

• The more levels there are, the more memory accesses are needed to arrive at the desired address

Page 155: Chapter 8, Main Memory

155

• Consider the following scenario:

• If the memory is large enough, page size can be large

• Let n be the number of page addresses that can be held in a single page

• Suppose that the page size is large enough and the processes are comparatively small enough that n * page size can serve as the maximum memory allocation for a process

Page 156: Chapter 8, Main Memory

156

• Under this scenario, the page table would only be one page long.

• The question is how to organize and access it.

• The logical addresses generated by a process do not necessarily fall neatly in the range of 0 to n – 1 pages (plus offsets into the pages).

Page 157: Chapter 8, Main Memory

157

• Whatever range the page parts of the addresses do fall into, you would like them to map into the range 0 to n – 1

• That would mean that you could have a look-up/page table of n entries which tell which physical frame has been allocated to each logical page.

• Hashing supports this kind of mapping from a possibly widely varying set of logical page values to a range from 0 to n – 1.

Page 158: Chapter 8, Main Memory

158

• When a virtual page is allocated a frame, the virtual page id, p, is hashed to a location in the hash table

• The hash/page table entry contains p, to account for collisions, and the id, f, of the allocated frame

• The offset portion of the address is carried along as usual

• See the following diagram

Page 159: Chapter 8, Main Memory

159

Page 160: Chapter 8, Main Memory

160

• In this illustration, a collision is shown

• Collisions are handled with links rather than overflow

• The two logical pages, q and p, hash to the same location

• Their corresponding frames are s and r, respectively

• The hash key, p, is included in the entry so that you can identify the correct entry when a collision occurs

Page 161: Chapter 8, Main Memory

161

• Suppose the logical address space for a process was contiguous, starting with 0

• Then it might not really be necessary to hash

• Entries could just be placed at the offset corresponding to the logical address

• Notice that this scheme will support a logical address space for an arbitrary selection of p values in addresses

Page 162: Chapter 8, Main Memory

162

Clustered page tables

• The book doesn’t give a very detailed explanation of this

• The general idea appears to be that memory can be allocated so that these properties hold:

• Several different (say 16) page id’s, p, will hash to the same entry in the page table

• This entry will then have no fewer than 16 linked nodes, one for each page, (and possibly more, due to collisions)

Page 163: Chapter 8, Main Memory

163

• Honestly, it’s not clear to me what advantage this gives

• The length of the page table would be reduced by a factor of 16, but it seems that the linked entries would effectively increase its width by a factor of 16

• I have no more to say about this, and there will be no test questions on it

Page 164: Chapter 8, Main Memory

164

Inverted page tables

• Inverted page tables are an important alternative to multi-level page tables and hashed page tables

• Recall that with (non-inverted) page tables:

• 1. The system has to maintain a global frame table that tells which frames are allocated to which processes

Page 165: Chapter 8, Main Memory

165

• 2. The system has to maintain a page table for each process, that makes it possible to look up the physical frame that is allocated to a given logical address

• Simple illustrations of both of these things are given on the next overhead

Page 166: Chapter 8, Main Memory

166

Page 167: Chapter 8, Main Memory

167

• An inverted page table is an extension of the frame table

• Instead of many page tables, one for each process, there is one master table

• The offsets into the table represent the frame id’s for the whole physical memory space

• The table has two columns, one for pid, the process that the frame/page belongs to, and one for p, the logical page id of the page

Page 168: Chapter 8, Main Memory

168

Page 169: Chapter 8, Main Memory

169

• The use of an inverted page table to resolve a logical address is shown in the diagram on the next overhead

• The key thing to notice about the process is that it is necessary to do linear search through the inverted page table, looking for a match on the pid that generated the address and the logical address that was generated

• The offset into the table identifies the frame that was allocated to the page

Page 170: Chapter 8, Main Memory

170
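A minimal C sketch of the inverted table and its linear search (the size and names are illustrative):

#include <stdint.h>

#define NFRAMES 4096u   /* illustrative number of frames in physical memory */

/* One entry per physical frame: which process and which of its
   logical pages the frame currently holds.                       */
struct ipt_entry {
    uint32_t pid;
    uint32_t p;
};

struct ipt_entry ipt[NFRAMES];

/* Linear search: the index of the matching entry is the frame id.
   Returns -1 if (pid, p) is not resident.                          */
int32_t find_frame(uint32_t pid, uint32_t p)
{
    for (uint32_t i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].p == p)
            return (int32_t)i;
    return -1;
}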

Page 171: Chapter 8, Main Memory

171

• On the one hand, you’ve gone from one frame table and many page tables to one, unified, inverted page table

• On the other hand, searching the inverted page table is the cost of this approach

• There is no choice except for simple, linear search because the random allocation of frames means that the table entries are not in any order

• It is not possible to do binary search or anything else

Page 172: Chapter 8, Main Memory

172

Hashed Inverted Page Tables

• This is where hashing and inverted page tables come together

• The way to get direct access to a set of values in random order is to hash

• Let n be the total number of pages/frames in the system, and devise a hashing function that will provide this mapping:

• f(pid, p) -> [0, n – 1]

• Use this function to allocate frames to processes

Page 173: Chapter 8, Main Memory

173

• Then, when the logical address (pid, p) is generated, hash it, giving f(pid, p)

• In theory, the hash function value itself could be the frame id, f

• You still have to do table look-up because of the possibility of collisions

• Look-up consists of going to offset f in the table and checking there for the key values (pid, p).

Page 174: Chapter 8, Main Memory

174

• If (pid, p) is found, then f is the corresponding frame

• You don’t have to do linear search

• If the values are not found, check for overflow or linking until you find the desired values

• Note that if you don’t find the desired values at all, the process has tried to access an address that is out of range.
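A minimal C sketch of that probe, continuing the ipt[] table from the earlier inverted page table sketch; the hash function and the next_slot[] collision links are assumptions for illustration:

extern uint32_t hash(uint32_t pid, uint32_t p);   /* maps (pid, p) into [0, NFRAMES - 1] */
extern int32_t  next_slot[];                      /* collision chain per slot, -1 = end  */

int32_t hashed_find_frame(uint32_t pid, uint32_t p)
{
    /* Start at the hashed slot, then follow links on a key mismatch. */
    for (int32_t i = (int32_t)hash(pid, p); i != -1; i = next_slot[i])
        if (ipt[i].pid == pid && ipt[i].p == p)
            return i;      /* the slot index is the frame id          */
    return -1;             /* out-of-range reference: trap to the O/S */
}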

Page 175: Chapter 8, Main Memory

175

• The most recent discussions have left TLB’s behind, but they are still relevant as hardware support for addressing

• In a system that uses a hashed inverted page table with TLB’s, the TLB entries are a subset of the hashed inverted page table

• The TLB entries appear in whatever results from random replacement

• The entries are not in hash (sorted) order

Page 176: Chapter 8, Main Memory

176

• When an entry from the inverted hashed page table is put into the TLB, in addition to the pid of the process and the logical page id, p, the entry has to include the offset (index, i) into the page table, namely, the frame id, f

• This is what you look up in the TLB

• A diagram of the use of a hashed inverted page table with TLB’s is given on the following overhead

Page 177: Chapter 8, Main Memory

177

Page 178: Chapter 8, Main Memory

178

• In looking at the picture, remember that since the table is stored in memory, that adds an extra memory access to the overall cost of addressing on a TLB miss

• Also note that in order to accommodate all frames in the system, in reality the table would probably be bigger than a page

• The table that supports paging is bigger than a page itself

Page 179: Chapter 8, Main Memory

179

• In other words, we’ve come back around to the problem which motivated multi-level paging in the first place

• The solution has something in common with the solution that was rejected earlier

• The table would be stored in system space

• It would be accessed through a special scheme, a non-paged, system memory access mechanism

• However, it would support paging in user applications

Page 180: Chapter 8, Main Memory

180

• The previous discussion included the assumption that you could allocate frames based on hashing

• This simplified things and made the diagram easier to draw

• In reality, you would still have a frame table that recorded which frame was allocated to which page

• You would then have a separate hash table that supported look-up into the frame table

Page 181: Chapter 8, Main Memory

181

• The idea is that the hash value, f(pid, p), call this h, takes you to an offset in the hash table.

• What you look up in the hash table is the value i, which is the frame id that was assigned to pid|p

• In other words, i is the index or offset to the corresponding entry in the frame table.

Page 182: Chapter 8, Main Memory

182

Page 183: Chapter 8, Main Memory

183

Shared pages

• Shared memory between processes can be implemented by mapping their logical addresses to the same physical pages (frames)

• An operating system may support interprocess communication (IPC) this way

• It is also a convenient way to share (read only) data

• It’s also possible to share code, such as libraries which >1 process need to run

Page 184: Chapter 8, Main Memory

184

Reentrant code is shareable

• In order for code to be shareable, it has to be reentrant

• Reentrant means that there is nothing in the code which causes it to modify itself

• Consider the MISC sumtenV1.txt example

• It is divided into a data segment and a code segment

• Two processes could share the code as long as the accesses to memory variables were mapped to separate copies of the variables
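
• A small C sketch of the distinction (hypothetical functions, not the MISC code): the reentrant version keeps all of its state in parameters and locals, so each process or thread effectively gets its own copy of the variables while the code itself is never written to

  #include <stdio.h>

  /* NOT shareable as-is: the function depends on one shared,
     writable variable.                                       */
  static int running_total = 0;

  int sum_non_reentrant(int x) {
      running_total += x;          /* modifies shared state */
      return running_total;
  }

  /* Shareable: all state lives in the caller's own memory. */
  int sum_reentrant(const int *values, int n) {
      int total = 0;
      for (int i = 0; i < n; i++)
          total += values[i];
      return total;
  }

  int main(void) {
      int v[3] = { 1, 2, 3 };
      printf("%d\n", sum_reentrant(v, 3));   /* prints 6 */
      return 0;
  }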

• Every memory access that a program makes has to pass through the address translation machinery that the O/S manages

• The O/S is responsible for protecting the memory allocated to one process from being accessed by another

• The O/S is also responsible for supporting shared access, and for detecting when shared memory may be misused

• Threads are a good, concrete example of shared code

• We have considered some of the problems that can occur when threads share references to common objects

• If they share no references, then they are completely trouble free

Inverted page tables don’t support shared memory very well

• An inverted page table is a global structure for all frames in a system

• It effectively maps one logical page belonging to one process to one physical frame

• This makes it difficult to support memory pages (frames) shared between different processes

• To support shared memory, it would be necessary to add linking to the table or add other data structures to the system

8.6 Segmentation

• The idea behind segmentation is that the application writer doesn’t view memory simply as a linear array of bytes

• Also, the actual relative physical location of different program modules is not important

• Applications can be viewed in terms of logical program units

• Each separate logical unit could be identified by its base address in memory, and its length

• Segmentation supports the user view of memory

• A segmented address takes this form <segment id, offset into segment>

• The segment id translates into a base address

• The segmented address then has to translate into pages or whatever scheme is actually used to allocate memory
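
• As a sketch of what such a translation looks like, here it is in C with an assumed base-limit segment table; in a real system the MMU does this check in hardware and raises a trap rather than returning a sentinel value

  #include <stdio.h>

  typedef struct { unsigned base; unsigned limit; } Segment;

  Segment seg_table[] = {        /* illustrative base-limit pairs */
      { 0x10000, 0x2000 },       /* segment 0 */
      { 0x40000, 0x0800 },       /* segment 1 */
  };

  /* <s, offset> -> physical address, or -1 to model the trap taken
     when the offset falls outside the segment.                     */
  long translate(unsigned s, unsigned offset) {
      if (offset >= seg_table[s].limit)
          return -1;
      return (long)(seg_table[s].base + offset);
  }

  int main(void) {
      printf("0x%lx\n", (unsigned long)translate(1, 0x10));  /* 0x40010          */
      printf("%ld\n", translate(1, 0x900));                  /* -1: out of range */
      return 0;
  }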

Implementation of segmentation

• A system with segmented addresses would have to support them in application software

• System implementations of compilation, linking, loading, and address resolution would all be adapted to use segmented addresses

• In a sense, segments may be reminiscent of simple contiguous allocation of blocks of memory in varying sizes

• Segments may also be thought of, very roughly, as (comparatively large) pages of varying size

• Just like with paging, hardware support in the MMU makes the translation possible

• The diagram on the next overhead shows how segmented addresses are resolved

[Diagram: resolving a segmented address through the segment table]

• This is similar to one of the earliest diagrams showing in general how page addresses were resolved

• The segment table is like a set of base-limit pairs, one for each segment

• Just like with pages, in the long run you would probably want some sort of TLB support

• For the purposes of this brief introduction to the idea of segments, segments and pages are treated separately

• In real, modern systems with segmentation, the segments are subdivided into pages which are accessed through a paging mechanism

• In other words, segments are a layer in the memory addressing scheme that lies on top of the paging mechanism
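
• A rough C sketch of the layering, with both tables reduced to arrays and limit checks omitted for brevity (all names assumed): the segment table produces a linear address, and only then does the paging mechanism see it

  #define PAGE_SIZE 4096

  unsigned seg_base[16];        /* per-segment base (linear) addresses */
  unsigned page_table[1024];    /* linear page number -> frame number  */

  unsigned to_physical(unsigned seg, unsigned offset) {
      unsigned linear = seg_base[seg] + offset;           /* segmentation layer */
      unsigned frame  = page_table[linear / PAGE_SIZE];   /* paging layer       */
      return frame * PAGE_SIZE + linear % PAGE_SIZE;
  }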

Protection and sharing with segmentation

• The theory is that protection and sharing make more logical sense under a segmented scheme

• Instead of worrying about protection and sharing at a page level, the assumption is that the same protection and sharing decisions would logically apply to a complete segment

• In other words, protection is applied to semantic constructs like “data block” or “program block”

• Under a segmented scheme, semantically different blocks would be stored in different segments

• Similarly with sharing

• If two processes need to share the same block, let the block be stored in a given segment, and give both processes access to the segment

• Although perhaps clearer than paged sharing, segmented sharing doesn’t solve all of the problems of sharing

• Two processes may know the same, shared code by different symbolic names

• There has to be a mapping from the symbolic name to the base address of the allocated memory

• The memory space of the shared code will not be contiguous with the memory space of the processes that share it

• A process that shares code will generate addresses in its memory space

• Then when it enters the shared code, the execution, on behalf of the process, will generate addresses in the shared code memory space

• The system has to support the resolution of addresses when processes cross the boundary from unshared to shared code

• Potentially, branches or jumps across boundaries have to be supported (from one address space to another), and the return from shared code has to go to the address space of whichever process called it

Segmentation and fragmentation

• Segmentation, in the sense that it’s like contiguous memory allocation, suffers from the problem of external fragmentation

• The difference is that a single process consists of multiple segments and each segment is loaded into contiguous memory

• The ultimate solution to this problem is to break the segments into pages

8.7 Example: The Intel Pentium

• The reality is that the Intel 8086 architecture has had segmented addressing from the beginning.

• The Motorola 68000 didn’t

• The following details are given in the same spirit that the information about scheduling and priorities was given in the chapter on scheduling

• Namely, to show that real systems tend to have many disparate features, and overall they can be somewhat complex

• The following summary was prepared from a previous edition of the book, so it may not agree completely with the current edition

• However, don’t worry

• There will be no test questions, for example, which ask for detailed specifics of segmented addressing

• Any questions on this topic will be general or conceptual

• Some information about Intel addressing

• The maximum size of a segment is 4 GB (2^32 bytes)

• The size of a page is 4 KB (2^12 bytes)

• That means a segment may consist of up to 2^20, or 1M, pages

• The maximum number of segments per process is 16K (2^14)

• This means in theory, if all maxima applied, a process could have a huge address space (2^46 bytes)

• The logical address space of a process is divided into two partitions, each of up to 8K segments

• Partition 1 is private to the process
  – Information about its segments is stored in the local descriptor table

• Partition 2 contains segments shared among processes
  – Information about these segments is stored in the global descriptor table

• The first part of a logical address is known as a selector

• It consists of these parts:
  – 13 bits for segment id, s
  – 1 bit for global vs. local, g
  – 2 bits for protection
  – 16 bits in total
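
• A C sketch of pulling those fields out of a 16-bit selector; the exact bit positions (segment id in the high 13 bits, then the global/local bit, then the 2 protection bits in the low end) are an assumption for illustration

  unsigned segment_id(unsigned short sel) { return (sel >> 3) & 0x1FFF; } /* 13 bits */
  unsigned global_bit(unsigned short sel) { return (sel >> 2) & 0x1;    } /* 1 bit   */
  unsigned protection(unsigned short sel) { return  sel       & 0x3;    } /* 2 bits  */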

• Within each segment, an address is paged

• It takes two levels to hold the page table

• The page address takes the form described earlier:
  – 10 bits for the outer page table index
  – 10 bits for the inner page table index
  – 12 bits for the offset
  – (At 4 bytes per page table entry, you can fit 2^10 entries into a 4 KB page)
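
• The corresponding field extraction for the 32-bit paged part of the address, again as an illustrative C sketch of the 10/10/12 split

  unsigned outer_index(unsigned a) { return (a >> 22) & 0x3FF; }  /* top 10 bits  */
  unsigned inner_index(unsigned a) { return (a >> 12) & 0x3FF; }  /* next 10 bits */
  unsigned page_offset(unsigned a) { return  a        & 0xFFF; }  /* low 12 bits  */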

• Notice that you’ve got both 14 bits for segment id plus global vs. local, and 32 bits for the paged address within a segment

• This means that in a 32 bit architecture you can’t simultaneously have the maximum number of segments and the maximum number of pages

• There is a limit on how many segments total you can have, but there is flexibility in where they’re located in memory

• The diagram shown on the next overhead is supposed to summarize how a segmented logical address is resolved to a physical address

• Read it and weep

• In between your tears, remember, you will not have to know this for a test

[Diagram: Intel segmented, paged address translation]

The End