
Conventional DRAM Organization
15-213, F’02


Page 1: Conventional DRAM Organization

Conventional DRAM Organization

d x w DRAM: dw total bits organized as d supercells of size w bits.

[Figure: a 16 x 8 DRAM chip organized as a 4 x 4 array of supercells (rows 0-3, cols 0-3) with an internal row buffer. The memory controller sends a 2-bit addr and exchanges 8-bit data with the chip (to/from the CPU); supercell (2,1) is highlighted.]
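
As a rough illustration of the d x w organization, here is a minimal C sketch (parameters chosen to match the 16 x 8 chip above; everything else is illustrative) that splits a linear supercell number into the row and column addresses the memory controller would send:

```c
#include <stdio.h>

/* Hypothetical 16 x 8 DRAM: d = 16 supercells of w = 8 bits each,
 * laid out as a 4 x 4 array, so the 4-bit supercell number splits
 * into 2 row bits and 2 column bits. */
#define ROWS 4
#define COLS 4

int main(void) {
    int supercell = 2 * COLS + 1;   /* supercell (2,1), as in the figure */
    int row = supercell / COLS;     /* sent first (row address)          */
    int col = supercell % COLS;     /* sent second (column address)      */
    printf("supercell %d -> (row %d, col %d)\n", supercell, row, col);
    return 0;
}
```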

Page 2: Conventional DRAM Organization

Reading DRAM Supercell (2,1)

Step 1(a): Row access strobe (RAS) selects row 2.
Step 1(b): Row 2 copied from DRAM array to row buffer.

[Figure: the 16 x 8 DRAM chip; the memory controller drives RAS = 2 on the 2-bit addr lines and row 2 is copied into the internal row buffer.]

Page 3: Conventional DRAM Organization

Reading DRAM Supercell (2,1)

Step 2(a): Column access strobe (CAS) selects column 1.
Step 2(b): Supercell (2,1) copied from buffer to data lines, and eventually back to the CPU.

[Figure: the memory controller drives CAS = 1 on the addr lines; supercell (2,1) leaves the internal row buffer on the 8-bit data lines, headed to the CPU.]
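
The two-step access on pages 2-3 can be mimicked in a few lines of C. A minimal sketch with toy data, not a hardware model:

```c
#include <stdint.h>
#include <stdio.h>

#define ROWS 4
#define COLS 4

/* Toy 16 x 8 DRAM: a 4 x 4 array of 8-bit supercells plus an internal
 * row buffer, read with the RAS/CAS sequence from the slides. */
static uint8_t cells[ROWS][COLS];
static uint8_t row_buffer[COLS];

static uint8_t read_supercell(int ras, int cas) {
    for (int c = 0; c < COLS; c++)      /* step 1: RAS selects a row; copy it */
        row_buffer[c] = cells[ras][c];  /*         into the row buffer        */
    return row_buffer[cas];             /* step 2: CAS selects one column     */
}

int main(void) {
    cells[2][1] = 0xAB;                 /* pretend supercell (2,1) holds 0xAB */
    printf("supercell (2,1) = 0x%02X\n", read_supercell(2, 1));
    return 0;
}
```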

Page 4: Conventional DRAM Organization

Memory Modules

[Figure: a 64 MB memory module consisting of eight 8M x 8 DRAMs. The memory controller broadcasts addr (row = i, col = j) to every chip; DRAM 0 supplies bits 0-7 of supercell (i,j), DRAM 1 supplies bits 8-15, and so on up to DRAM 7 with bits 56-63. Together the eight bytes form the 64-bit doubleword at main memory address A.]
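
To make the byte-slicing concrete, here is a small sketch of how the module could assemble the 64-bit doubleword from the eight chips; chip_read() is a hypothetical stand-in for the per-chip supercell reads:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-chip read: chip k returns its 8-bit supercell (i,j).
 * Dummy data is used here purely for illustration. */
static uint8_t chip_read(int chip, int row, int col) {
    (void)row; (void)col;
    return (uint8_t)(0x10 + chip);
}

/* Chip k contributes bits 8k .. 8k+7 of the 64-bit doubleword. */
static uint64_t read_doubleword(int row, int col) {
    uint64_t dword = 0;
    for (int chip = 0; chip < 8; chip++)
        dword |= (uint64_t)chip_read(chip, row, col) << (8 * chip);
    return dword;
}

int main(void) {
    printf("doubleword at (i=2, j=1): 0x%016llx\n",
           (unsigned long long)read_doubleword(2, 1));
    return 0;
}
```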

Page 5: Conventional DRAM Organization

Typical Bus Structure Connecting CPU and Memory

A bus is a collection of parallel wires that carry address, data, and control signals.
Buses are typically shared by multiple devices.

[Figure: the CPU chip (register file, ALU, bus interface) connects over the system bus to an I/O bridge, which connects over the memory bus to main memory.]

Page 6: Conventional DRAM Organization

Memory Read Transaction (1)

Load operation: movl A, %eax

CPU places address A on the memory bus.

[Figure: the bus interface drives address A onto the system bus via the I/O bridge; main memory holds word x at address A, and %eax is the destination register.]

Page 7: Conventional DRAM Organization

Memory Read Transaction (2)

Load operation: movl A, %eax

Main memory reads A from the memory bus, retrieves word x, and places it on the bus.

[Figure: main memory sends word x back over the memory bus and I/O bridge toward the CPU's bus interface.]

Page 8: Conventional DRAM Organization

Memory Read Transaction (3)

Load operation: movl A, %eax

CPU reads word x from the bus and copies it into register %eax.

[Figure: word x arrives at the bus interface and is copied into %eax; main memory still holds x at address A.]

Page 9: Conventional DRAM Organization

Memory Write Transaction (1)

Store operation: movl %eax, A

CPU places address A on the bus. Main memory reads it and waits for the corresponding data word to arrive.

[Figure: register %eax holds word y; the bus interface drives address A onto the system bus.]

Page 10: Conventional DRAM Organization

Memory Write Transaction (2)

Store operation: movl %eax, A

CPU places data word y on the bus.

[Figure: word y travels from %eax through the bus interface onto the system bus; main memory is waiting to write it at address A.]

Page 11: Conventional DRAM Organization

Memory Write Transaction (3)

Store operation: movl %eax, A

Main memory reads data word y from the bus and stores it at address A.

[Figure: main memory now holds y at address A.]
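
Pages 6-11 trace what one load and one store look like on the bus. The C sketch below shows the same two operations at the source level (the variable standing in for address A is illustrative); the commented instructions are the ones named on the slides:

```c
#include <stdio.h>

int main(void) {
    int word = 0;        /* a cell in main memory at some address A */
    int *A = &word;

    /* Store: movl %eax, A. The CPU puts A on the bus, then data word y;
     * main memory stores y at address A (pages 9-11). */
    int y = 42;
    *A = y;

    /* Load: movl A, %eax. The CPU puts A on the bus; main memory returns
     * word x, which is copied into a register (pages 6-8). */
    int x = *A;

    printf("x = %d\n", x);
    return 0;
}
```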

Page 12: Conventional DRAM Organization

Disk Geometry

Disks consist of platters, each with two surfaces.
Each surface consists of concentric rings called tracks.
Each track consists of sectors separated by gaps.

[Figure: a surface rotating about a spindle, with concentric tracks; track k is divided into sectors separated by gaps.]

Page 13: Conventional DRAM Organization

I/O Bus

[Figure: the CPU chip (register file, ALU, bus interface) connects via the system bus to the I/O bridge and via the memory bus to main memory. An I/O bus off the bridge carries a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters.]

Page 14: Conventional DRAM Organization

Reading a Disk Sector (1)

CPU initiates a disk read by writing a command, logical block number, and destination memory address to a port (address) associated with the disk controller.

[Figure: same system as page 13; the command travels from the CPU over the system bus and I/O bus to the disk controller.]

Page 15: Conventional DRAM Organization

Reading a Disk Sector (2)

Disk controller reads the sector and performs a direct memory access (DMA) transfer into main memory.

[Figure: data flows from the disk through the disk controller, over the I/O bus and I/O bridge, directly into main memory without involving the CPU.]

Page 16: Conventional DRAM Organization

Reading a Disk Sector (3)

When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., asserts a special “interrupt” pin on the CPU).

[Figure: the interrupt signal travels from the disk controller back to the CPU chip.]
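
The three steps on pages 14-16 can be summarized from the CPU's point of view. The sketch below is purely illustrative: the port layout, READ_SECTOR command, and wait_for_interrupt() are hypothetical stand-ins, not a real controller interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-mapped disk-controller port (register layout invented
 * for illustration; a real controller defines its own). */
struct disk_port {
    volatile uint32_t command;         /* e.g., READ_SECTOR                */
    volatile uint32_t logical_block;   /* which sector to read             */
    volatile uint64_t dest_addr;       /* main-memory destination for DMA  */
};

enum { READ_SECTOR = 1 };

/* Stand-in for "the OS suspends the process until the interrupt arrives". */
static void wait_for_interrupt(void) { /* real OS: sleep until the IRQ fires */ }

static void read_sector(struct disk_port *port, uint32_t lbn, void *dest) {
    /* (1) CPU writes the command, logical block number, and destination
     *     memory address to the controller's port. */
    port->logical_block = lbn;
    port->dest_addr     = (uint64_t)(uintptr_t)dest;
    port->command       = READ_SECTOR;

    /* (2) The controller reads the sector and DMAs it into main memory,
     *     with no further work by the CPU. */

    /* (3) The controller raises an interrupt when the DMA completes. */
    wait_for_interrupt();
}

int main(void) {
    static struct disk_port fake_port;   /* simulated port, not real hardware */
    uint8_t buf[512];
    read_sector(&fake_port, 1234, buf);
    puts("sector read issued");
    return 0;
}
```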

Page 17: Conventional DRAM Organization

An Example Memory Hierarchy

Smaller, faster, and costlier (per byte) storage devices sit at the top; larger, slower, and cheaper (per byte) storage devices sit toward the bottom.

L0: registers. CPU registers hold words retrieved from the L1 cache.
L1: on-chip L1 cache (SRAM). The L1 cache holds cache lines retrieved from the L2 cache.
L2: off-chip L2 cache (SRAM). The L2 cache holds cache lines retrieved from main memory.
L3: main memory (DRAM). Main memory holds disk blocks retrieved from local disks.
L4: local secondary storage (local disks). Local disks hold files retrieved from disks on remote network servers.
L5: remote secondary storage (distributed file systems, Web servers).

Page 18: Conventional DRAM Organization

Caching in a Memory Hierarchy

Smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1.
Larger, slower, cheaper storage device at level k+1 is partitioned into blocks.
Data is copied between levels in block-sized transfer units.

[Figure: level k+1 is partitioned into blocks 0-15; level k currently caches blocks 8, 9, 14, and 3, with blocks 4 and 10 shown in transit between the levels.]

Page 19: Conventional DRAM Organization

General Caching Concepts

Program needs object d, which is stored in some block b.

Cache hit: program finds b in the cache at level k. E.g., block 14.

Cache miss: b is not at level k, so the level k cache must fetch it from level k+1. E.g., block 12.
  If the level k cache is full, then some current block must be replaced (evicted). Which one is the “victim”?
  Placement policy: where can the new block go? E.g., b mod 4.
  Replacement policy: which block should be evicted? E.g., LRU.

[Figure: requests for blocks 14 and 12 against a level k cache holding blocks 4*, 9, 14, 3 above a level k+1 device holding blocks 0-15; the request for 14 hits, while the request for 12 misses and block 12 is fetched from level k+1 into the slot that held 4*.]
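
A toy simulation of the level-k cache in this example, a minimal sketch assuming the "b mod 4" placement policy and the initial contents shown in the figure:

```c
#include <stdio.h>

#define SLOTS 4

/* Level-k cache: one block per slot, placement policy "b mod 4". */
static int cache[SLOTS] = { 4, 9, 14, 3 };   /* initial contents from the figure */

static void request(int b) {
    int slot = b % SLOTS;                    /* placement policy */
    if (cache[slot] == b) {
        printf("request %2d: hit\n", b);
    } else {
        printf("request %2d: miss, evict block %d, fetch %d from level k+1\n",
               b, cache[slot], b);
        cache[slot] = b;                     /* replacement: overwrite the victim */
    }
}

int main(void) {
    request(14);                             /* hit, as on the slide                 */
    request(12);                             /* miss; 12 mod 4 = 0 replaces block 4  */
    return 0;
}
```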

Page 20: Conventional DRAM Organization

Cache Memories

Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory.
CPU looks first for data in L1, then in L2, then in main memory.

Typical bus structure:

[Figure: the CPU chip contains the register file, ALU, L1 cache, and bus interface; a cache bus connects it to the off-chip L2 cache, and the system bus, I/O bridge, and memory bus connect it to main memory.]

Page 21: Conventional DRAM Organization

Inserting an L1 Cache Between the CPU and Main Memory

The tiny, very fast CPU register file has room for four 4-byte words.
The transfer unit between the CPU register file and the cache is a 4-byte block.
The small fast L1 cache has room for two 4-word blocks (line 0 and line 1).
The transfer unit between the cache and main memory is a 4-word block (16 bytes).
The big slow main memory has room for many 4-word blocks.

[Figure: main memory blocks shown include block 10 = a b c d, block 21 = p q r s, and block 30 = w x y z.]

Page 22: Conventional DRAM Organization

General Org of a Cache Memory

Cache is an array of sets: S = 2^s sets.
Each set contains one or more lines: E lines per set.
Each line holds a block of data: B = 2^b bytes per cache block, plus 1 valid bit and t tag bits per line.

Cache size: C = B x E x S data bytes.

[Figure: sets 0 through S-1, each containing E lines; each line shows its valid bit, tag, and cache-block bytes 0 through B-1.]
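
A minimal sketch of these parameters in C (the particular values of s, E, and b are illustrative, not from the slide):

```c
#include <stdio.h>

int main(void) {
    /* Generic cache parameters, as defined on this page. */
    unsigned s = 5, E = 4, b = 6;            /* example values for illustration */
    unsigned S = 1u << s;                    /* S = 2^s sets                    */
    unsigned B = 1u << b;                    /* B = 2^b bytes per block         */
    unsigned C = B * E * S;                  /* cache size in data bytes        */

    printf("S = %u sets, E = %u lines/set, B = %u bytes/block, C = %u bytes\n",
           S, E, B, C);
    return 0;
}
```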

Page 23: Conventional DRAM Organization

Addressing Caches

Address A (m bits, bit m-1 down to bit 0): <tag> (t bits) | <set index> (s bits) | <block offset> (b bits)

The word at address A is in the cache if the tag bits in one of the <valid> lines in set <set index> match <tag>.
The word contents begin at offset <block offset> bytes from the beginning of the block.

[Figure: the array of sets 0 through S-1 from page 22, with the address fields of A selecting a set, matching a tag, and indexing into the block.]
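
A minimal sketch of carving an address into <tag>, <set index>, and <block offset>, assuming illustrative values for s and b:

```c
#include <stdio.h>

int main(void) {
    unsigned s = 2, b = 4;                       /* illustrative field widths */
    unsigned long A = 0x1234;                    /* some address A            */

    unsigned long offset = A & ((1ul << b) - 1);          /* b block-offset bits  */
    unsigned long set    = (A >> b) & ((1ul << s) - 1);   /* s set-index bits     */
    unsigned long tag    = A >> (b + s);                  /* remaining t tag bits */

    printf("A = 0x%lx -> tag 0x%lx, set %lu, offset %lu\n", A, tag, set, offset);
    return 0;
}
```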

Page 24: Conventional DRAM Organization

Direct-Mapped Cache

Simplest kind of cache.
Characterized by exactly one line per set (E = 1).

[Figure: sets 0, 1, ..., S-1, each holding a single line consisting of a valid bit, a tag, and a cache block.]

Page 25: Conventional DRAM Organization

Accessing Direct-Mapped Caches

Set selection: use the set index bits to determine the set of interest.

[Figure: the address is split into tag (t bits), set index (s bits), and block offset (b bits); the set index value 0 0 0 0 1 selects set 1 from sets 0 through S-1.]

Page 26: Conventional DRAM Organization

Accessing Direct-Mapped Caches

Line matching and word selection:
  Line matching: find a valid line in the selected set with a matching tag.
  Word selection: then extract the word.

(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.

[Figure: selected set (i) holds a valid line with tag 0110 and block bytes 0-7 containing words w0-w3; the address carries tag 0110, set index i, and block offset 100, so the access hits and offset 4 selects the starting byte of w2.]
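
A minimal sketch of steps (1)-(3) for a direct-mapped cache; the parameters and line layout are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define S 4                     /* sets (s = 2)            */
#define B 8                     /* bytes per block (b = 3) */

struct line { bool valid; uint64_t tag; uint8_t block[B]; };
static struct line cache[S];    /* direct-mapped: E = 1 line per set */

/* Returns true on a hit and copies the addressed byte into *byte. */
static bool lookup(uint64_t addr, uint8_t *byte) {
    uint64_t offset = addr % B;
    uint64_t set    = (addr / B) % S;
    uint64_t tag    = addr / (B * S);
    struct line *ln = &cache[set];

    if (ln->valid && ln->tag == tag) {      /* (1) valid and (2) tag match   */
        *byte = ln->block[offset];          /* (3) block offset selects byte */
        return true;
    }
    return false;                           /* miss: would fetch from memory */
}

int main(void) {
    /* Install a block for address 0x40 (tag 2, set 0) so the lookup hits. */
    cache[0].valid = true;
    cache[0].tag   = 0x40 / (B * S);
    memset(cache[0].block, 0xAA, B);

    uint8_t v;
    printf("0x40 %s\n", lookup(0x40, &v) ? "hit" : "miss");
    printf("0x48 %s\n", lookup(0x48, &v) ? "hit" : "miss");
    return 0;
}
```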

Page 27: Conventional DRAM Organization

Direct-Mapped Cache Simulation

M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 entry/set.
Address bits: t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit.

Address trace (reads): 0 [0000₂], 1 [0001₂], 13 [1101₂], 8 [1000₂], 0 [0000₂]

(1) 0 [0000₂] (miss): set 0 now holds v = 1, tag 0, M[0-1]
(2) 1 [0001₂] (hit): same block, set 0 unchanged
(3) 13 [1101₂] (miss): set 2 now holds v = 1, tag 1, M[12-13]
(4) 8 [1000₂] (miss): set 0 now holds v = 1, tag 1, M[8-9], evicting M[0-1]
(5) 0 [0000₂] (miss): set 0 now holds v = 1, tag 0, M[0-1], evicting M[8-9]
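
A runnable sketch of this simulation with the same parameters and address trace (the hit/miss logic follows the rules on page 26):

```c
#include <stdbool.h>
#include <stdio.h>

/* M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 line/set,
 * so addresses split into t = 1 tag bit, s = 2 set bits, b = 1 offset bit. */
#define S 4
#define B 2

struct line { bool valid; unsigned tag; };
static struct line cache[S];

static void cache_access(unsigned addr) {
    unsigned set = (addr / B) % S;
    unsigned tag = addr / (B * S);
    if (cache[set].valid && cache[set].tag == tag) {
        printf("%2u: hit\n", addr);
    } else {
        printf("%2u: miss, set %u now caches M[%u-%u]\n",
               addr, set, tag * B * S + set * B, tag * B * S + set * B + B - 1);
        cache[set].valid = true;
        cache[set].tag = tag;
    }
}

int main(void) {
    unsigned trace[] = { 0, 1, 13, 8, 0 };   /* address trace from the slide */
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        cache_access(trace[i]);
    return 0;
}
```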

Page 28: Conventional DRAM Organization

Why Use Middle Bits as Index?

High-Order Bit Indexing:
  Adjacent memory lines would map to the same cache entry.
  Poor use of spatial locality.

Middle-Order Bit Indexing:
  Consecutive memory lines map to different cache lines.
  Can hold a C-byte region of the address space in the cache at one time.

[Figure: a 4-line cache (indices 00-11) next to the 16 memory lines 0000-1111, colored once using the high-order two address bits as the index and once using the middle two bits.]

Page 29: Conventional DRAM Organization

Set Associative Caches

Characterized by more than one line per set.

[Figure: sets 0 through S-1, each containing E = 2 lines; each line has a valid bit, a tag, and a cache block.]

Page 30: Conventional DRAM Organization

Accessing Set Associative Caches

Set selection: identical to the direct-mapped cache.

[Figure: the set index bits 0 0 0 0 1 select set 1; each set holds two lines (valid bit, tag, cache block).]

Page 31: Conventional DRAM Organization

Accessing Set Associative Caches

Line matching and word selection: must compare the tag in each valid line in the selected set.

(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.

[Figure: selected set (i) contains two valid lines with tags 1001 and 0110; the address tag 0110 matches the second line, whose block bytes 0-7 hold words w0-w3, and block offset 100 selects the starting byte of w2.]
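
The same lookup for a 2-way set-associative cache differs only in that every line in the selected set is checked; a minimal sketch (parameters illustrative):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define S 4                    /* sets            */
#define E 2                    /* lines per set   */
#define B 8                    /* bytes per block */

struct line { bool valid; uint64_t tag; uint8_t block[B]; };
static struct line cache[S][E];

static bool lookup(uint64_t addr, uint8_t *byte) {
    uint64_t offset = addr % B;
    uint64_t set    = (addr / B) % S;
    uint64_t tag    = addr / (B * S);

    for (int i = 0; i < E; i++) {                    /* compare every line in the set */
        struct line *ln = &cache[set][i];
        if (ln->valid && ln->tag == tag) {           /* (1) and (2): hit              */
            *byte = ln->block[offset];               /* (3) offset selects the byte   */
            return true;
        }
    }
    return false;                                    /* miss */
}

int main(void) {
    cache[0][1].valid = true;                        /* install tag 2 in way 1 of set 0 */
    cache[0][1].tag = 2;
    cache[0][1].block[4] = 0x42;

    uint8_t v = 0;
    printf("0x44 %s (byte 0x%02X)\n", lookup(0x44, &v) ? "hit" : "miss", v);
    return 0;
}
```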

Page 32: Conventional DRAM Organization

E = 2, B = 4, S = 8. Look up the words at addresses 0x0E34, 0x0DD5, and 0x1FE4, and list a memory address that will hit in Set 3.

Set   tag  V  B0 B1 B2 B3    tag  V  B0 B1 B2 B3
 0    09   1  86 30 3F 10    00   0  -- -- -- --
 1    45   1  60 4F E0 23    38   1  00 BC 0B 37
 2    EB   0  -- -- -- --    0B   0  -- -- -- --
 3    06   0  -- -- -- --    32   1  12 08 7B AD
 4    C7   1  06 78 07 C5    05   1  40 67 C2 3B
 5    71   1  0B DE 18 4B    6E   0  -- -- -- --
 6    91   1  A0 B7 26 2D    F0   0  -- -- -- --
 7    46   0  -- -- -- --    DE   1  12 C0 88 37

Page 33: Conventional DRAM Organization

Multi-Level Caches

Options: separate data and instruction caches, or a unified cache.

[Figure: Processor (Regs, L1 d-cache, L1 i-cache) -> Unified L2 Cache -> Memory -> disk]

              Regs      L1 caches   Unified L2     Memory         Disk
size:         200 B     8-64 KB     1-4 MB SRAM    128 MB DRAM    30 GB
speed:        3 ns      3 ns        6 ns           60 ns          8 ms
$/Mbyte:                            $100/MB        $1.50/MB       $0.05/MB
line size:    8 B       32 B        32 B           8 KB

larger, slower, cheaper ->

Page 34: Conventional DRAM Organization

Motivations for Virtual Memory

Use Physical DRAM as a Cache for the Disk:
  Address space of a process can exceed physical memory size.
  Sum of address spaces of multiple processes can exceed physical memory.

Simplify Memory Management:
  Multiple processes resident in main memory, each process with its own address space.
  Only “active” code and data is actually in memory.
  Allocate more memory to a process as needed.

Provide Protection:
  One process can’t interfere with another, because they operate in different address spaces.
  User process cannot access privileged information; different sections of address spaces have different permissions.

Page 35: Conventional DRAM Organization

Motivation #1: DRAM a “Cache” for Disk

Full address space is quite large:
  32-bit addresses: ~4,000,000,000 (4 billion) bytes
  64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes

Disk storage is ~300X cheaper than DRAM storage:
  80 GB of DRAM: ~$33,000
  80 GB of disk: ~$110

To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk.

[Figure: SRAM (4 MB: ~$500), DRAM (1 GB: ~$200), Disk (80 GB: ~$110)]

Page 36: Conventional DRAM Organization

Levels in Memory Hierarchy

[Figure: CPU regs <-> cache <-> Memory <-> disk, with 8 B transfers between registers and cache, 32 B lines between cache and memory (the "cache" boundary), and 4 KB pages between memory and disk (the "virtual memory" boundary).]

              Register   Cache         Memory       Disk Memory
size:         32 B       32 KB-4 MB    1024 MB      100 GB
speed:        1 ns       2 ns          30 ns        8 ms
$/Mbyte:                 $125/MB       $0.20/MB     $0.001/MB
line size:    8 B        32 B          4 KB

larger, slower, cheaper ->

Page 37: Conventional DRAM Organization

DRAM vs. SRAM as a “Cache”

DRAM vs. disk is more extreme than SRAM vs. DRAM.

Access latencies:
  DRAM ~10X slower than SRAM.
  Disk ~100,000X slower than DRAM.

Importance of exploiting spatial locality:
  First byte is ~100,000X slower than successive bytes on disk,
  vs. ~4X improvement for page-mode vs. regular accesses to DRAM.

Bottom line: design decisions made for DRAM caches are driven by the enormous cost of misses.

[Figure: SRAM -> DRAM -> Disk]

Page 38: Conventional DRAM Organization

Impact of Properties on Design

If DRAM were to be organized similar to an SRAM cache, how would we set the following design parameters?
  Line size? Large, since disk is better at transferring large blocks.
  Associativity? High, to minimize miss rate.
  Write through or write back? Write back, since we can’t afford to perform small writes to disk.

What would the impact of these choices be on:
  miss rate: extremely low, << 1%
  hit time: must match cache/DRAM performance
  miss latency: very high, ~20 ms
  tag storage overhead: low, relative to block size

Page 39: Conventional DRAM Organization

A System with Physical Memory Only

Examples: most Cray machines, early PCs, nearly all embedded systems, etc.

Addresses generated by the CPU correspond directly to bytes in physical memory.

[Figure: the CPU issues physical addresses straight into memory locations 0 through N-1.]

Page 40: Conventional DRAM Organization

A System with Virtual Memory

Examples: workstations, servers, modern PCs, etc.

Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (page table).

[Figure: the CPU issues virtual addresses; the page table (entries 0 through P-1) maps them either to physical addresses in memory (locations 0 through N-1) or to locations on disk.]

Page 41: Conventional DRAM Organization

Page Faults (like “Cache Misses”)

What if an object is on disk rather than in memory?
  Page table entry indicates the virtual address is not in memory.
  OS exception handler invoked to move data from disk into memory:
    current process suspends, others can resume;
    OS has full control over placement, etc.

[Figure: "Before fault" and "After fault" views of the CPU, page table, memory, and disk; the faulting page table entry initially points to disk and afterwards points to the page's new location in memory.]

Page 42: Conventional DRAM Organization

Servicing a Page Fault

Processor Signals Controller:
  Read block of length P starting at disk address X and store it starting at memory address Y.

Read Occurs:
  Direct Memory Access (DMA), under control of the I/O controller.

I/O Controller Signals Completion:
  Interrupt processor. OS resumes suspended process.

[Figure: (1) the processor initiates the block read at the I/O controller; (2) the DMA transfer moves the block from disk over the memory-I/O bus into memory; (3) the controller signals "read done" with an interrupt.]

Page 43: Conventional DRAM Organization

Motivation #2: Memory Management

Multiple processes can reside in physical memory.

How do we resolve address conflicts?
  What if two processes access something at the same address?

[Figure: Linux/x86 process memory image, from address 0 upward: forbidden region, program text (.text), initialized data (.data), uninitialized data (.bss), runtime heap (via malloc, bounded by the “brk” ptr), memory-mapped region for shared libraries, stack (top marked by %esp), and kernel virtual memory (memory invisible to user code).]

Page 44: Conventional DRAM Organization

Solution: Separate Virt. Addr. Spaces

Virtual and physical address spaces divided into equal-sized blocks; the blocks are called “pages” (both virtual and physical).

Each process has its own virtual address space; the operating system controls how virtual pages are assigned to physical memory.

[Figure: Virtual Address Space for Process 1 (VP 1, VP 2; addresses 0 to N-1) and Virtual Address Space for Process 2 (VP 1, VP 2; addresses 0 to N-1) are mapped by address translation into the Physical Address Space (DRAM; addresses 0 to M-1), e.g., to PP 2, PP 7, and PP 10; read-only library code can map to the same physical page for both processes.]

Page 45: Conventional DRAM Organization

Motivation #3: Protection

Page table entry contains access rights information; hardware enforces this protection (trap into OS if a violation occurs).

Page Tables:

Process i:          Read?  Write?  Physical Addr
  VP 0:             Yes    No      PP 9
  VP 1:             Yes    Yes     PP 4
  VP 2:             No     No      XXXXXXX

Process j:          Read?  Write?  Physical Addr
  VP 0:             Yes    Yes     PP 6
  VP 1:             Yes    No      PP 9
  VP 2:             No     No      XXXXXXX

[Figure: both page tables point into the same physical memory (locations 0 through N-1).]

Page 46: Conventional DRAM Organization

VM Address Translation

Virtual Address Space: V = {0, 1, …, N–1}
Physical Address Space: P = {0, 1, …, M–1}, with M < N

Address Translation: MAP: V → P ∪ {∅}
For virtual address a:
  MAP(a) = a' if data at virtual address a is at physical address a' in P
  MAP(a) = ∅ if data at virtual address a is not in physical memory (either invalid or stored on disk)
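
A minimal sketch of MAP in C, using a small array-backed page map; the page size, table contents, and the use of -1 for the null result are all illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NUM_VPAGES 8

/* -1 plays the role of the null symbol: "not in physical memory". */
static int64_t page_map[NUM_VPAGES] = { 3, -1, 5, -1, 0, -1, -1, 2 };

/* MAP: V -> P U {null}. Returns the physical address a' for virtual
 * address a, or -1 if the page is invalid or currently on disk. */
static int64_t map(uint64_t a) {
    uint64_t vpn = a / PAGE_SIZE, offset = a % PAGE_SIZE;
    if (vpn >= NUM_VPAGES || page_map[vpn] < 0)
        return -1;                               /* would trigger a page fault */
    return page_map[vpn] * PAGE_SIZE + offset;
}

int main(void) {
    uint64_t a = 2 * PAGE_SIZE + 0x10;
    printf("MAP(0x%llx) = 0x%llx\n",
           (unsigned long long)a, (unsigned long long)map(a));
    printf("MAP(0x%llx) = %lld (fault)\n",
           (unsigned long long)(uint64_t)PAGE_SIZE, (long long)map(PAGE_SIZE));
    return 0;
}
```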

Page 47: Conventional DRAM Organization

VM Address Translation: Hit

[Figure: the processor issues virtual address a to the hardware address translation mechanism (part of the on-chip memory management unit, MMU), which produces physical address a' and accesses main memory; the data comes back to the processor.]

Page 48: Conventional DRAM Organization

VM Address Translation: Miss

[Figure: the processor issues virtual address a to the hardware address translation mechanism (part of the on-chip MMU); on a miss, a page fault is raised and the fault handler runs. The OS transfers the page from secondary memory into main memory (only on a miss), after which translation produces physical address a'.]

Page 49: Conventional DRAM Organization

VM Address Translation

Parameters:
  P = 2^p = page size (bytes)
  N = 2^n = virtual address limit
  M = 2^m = physical address limit

Virtual address (n bits): virtual page number (bits n-1 ... p) | page offset (bits p-1 ... 0)
Physical address (m bits): physical page number (bits m-1 ... p) | page offset (bits p-1 ... 0)

Page offset bits don’t change as a result of translation.

Page 50: Conventional DRAM Organization

Page Tables

[Figure: a memory-resident page table, indexed by virtual page number, whose entries each hold a valid bit and a physical page or disk address. Entries with valid = 1 point into physical memory; entries with valid = 0 point to disk storage (a swap file or regular file system file).]

Page 51: Conventional DRAM Organization

Address Translation via Page Table

[Figure: the page table base register points to the page table; the virtual page number (VPN) from the virtual address acts as the table index. Each entry holds a valid bit, access rights, and a physical page number (PPN). If valid = 0, the page is not in memory; otherwise the PPN is combined with the unchanged page offset to form the physical address.]

Page 52: Conventional DRAM Organization

Page Table Operation

Translation:
  Separate (set of) page table(s) per process.
  VPN forms an index into the page table (points to a page table entry).

[Figure: same translation diagram as page 51.]

Page 53: Conventional DRAM Organization

Page Table Operation

Computing Physical Address:
  Page Table Entry (PTE) provides information about the page.
  if (valid bit = 1) then the page is in memory: use the physical page number (PPN) to construct the address.
  if (valid bit = 0) then the page is on disk: page fault.

[Figure: same translation diagram as page 51.]
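
A minimal sketch of this valid-bit check in C; the PTE layout and the fault handling are simplified stand-ins, not a real MMU format:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define P 4096u                           /* page size, 2^12 bytes (illustrative) */
#define NUM_VPAGES 16

struct pte { bool valid; bool read_ok, write_ok; uint32_t ppn; };
static struct pte page_table[NUM_VPAGES]; /* one table per process */

static uint64_t translate(uint64_t vaddr) {
    uint64_t vpn = vaddr / P, offset = vaddr % P;
    struct pte *e = &page_table[vpn];     /* VPN indexes the page table */
    if (!e->valid) {                      /* valid = 0: page is on disk */
        fprintf(stderr, "page fault at vaddr 0x%llx\n", (unsigned long long)vaddr);
        exit(1);                          /* real OS: handler pages it in, then retries */
    }
    return (uint64_t)e->ppn * P + offset; /* valid = 1: PPN plus unchanged offset */
}

int main(void) {
    page_table[2] = (struct pte){ .valid = true, .read_ok = true, .ppn = 7 };
    printf("0x%llx -> 0x%llx\n",
           (unsigned long long)(2 * P + 0x34),
           (unsigned long long)translate(2 * P + 0x34));
    return 0;
}
```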

Page 54: Conventional DRAM Organization

Page Table Operation

Checking Protection:
  Access rights field indicates the allowable accesses, e.g., read-only, read-write, execute-only.
  Typically supports multiple protection modes (e.g., kernel vs. user).
  Protection violation fault if the user doesn’t have the necessary permission.

[Figure: same translation diagram as page 51.]

Page 55: Conventional DRAM Organization

Integrating VM and Cache

Most Caches “Physically Addressed”:
  Accessed by physical addresses.
  Allows multiple processes to have blocks in cache at the same time.
  Allows multiple processes to share pages.
  Cache doesn’t need to be concerned with protection issues: access rights are checked as part of address translation.

Perform Address Translation Before Cache Lookup:
  But this could involve a memory access itself (of the PTE).
  Of course, page table entries can also become cached.

[Figure: CPU -> (VA) translation -> (PA) cache; a cache hit returns data, a cache miss goes to main memory.]

Page 56: Conventional DRAM Organization

Speeding up Translation with a TLB

“Translation Lookaside Buffer” (TLB):
  Small hardware cache in the MMU.
  Maps virtual page numbers to physical page numbers.
  Contains complete page table entries for a small number of pages.

[Figure: CPU -> (VA) TLB lookup; on a TLB hit the PA goes straight to the cache, on a TLB miss the full translation runs first; a cache hit returns data, a cache miss goes to main memory.]
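
A minimal sketch of consulting a small TLB before falling back to the page-table walk; the tiny fully associative TLB, its replacement rule, and page_table_walk() are all illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define P 4096u
#define TLB_ENTRIES 4

struct tlb_entry { bool valid; uint64_t vpn; uint32_t ppn; };
static struct tlb_entry tlb[TLB_ENTRIES];   /* tiny fully associative TLB */

/* Stand-in for the memory-resident page-table walk (the slow path). */
static uint32_t page_table_walk(uint64_t vpn) {
    return (uint32_t)(vpn + 100);           /* dummy mapping for illustration */
}

static uint64_t translate(uint64_t vaddr) {
    uint64_t vpn = vaddr / P, offset = vaddr % P;

    for (int i = 0; i < TLB_ENTRIES; i++)   /* TLB hit: no extra memory access */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (uint64_t)tlb[i].ppn * P + offset;

    uint32_t ppn = page_table_walk(vpn);    /* TLB miss: fetch the PTE */
    tlb[vpn % TLB_ENTRIES] = (struct tlb_entry){ true, vpn, ppn };  /* simple refill */
    return (uint64_t)ppn * P + offset;
}

int main(void) {
    printf("first access:  0x%llx\n", (unsigned long long)translate(5 * P + 8));
    printf("second access: 0x%llx\n", (unsigned long long)translate(5 * P + 16));
    return 0;
}
```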

Page 57: Conventional DRAM Organization

Simple Memory System Page Table

Only the first 16 entries are shown.

VPN  PPN  Valid     VPN  PPN  Valid
00   28   1         08   13   1
01   –    0         09   17   1
02   33   1         0A   09   1
03   02   1         0B   –    0
04   –    0         0C   –    0
05   16   1         0D   2D   1
06   –    0         0E   11   1
07   –    0         0F   0D   1

Page 58: Conventional DRAM Organization

Simple Memory System TLB

TLB: 16 entries, 4-way set associative.

Virtual address bits 13-0: VPN = bits 13-6, VPO = bits 5-0; within the VPN, TLBT (tag) = bits 13-8 and TLBI (set index) = bits 7-6.

Set   Tag  PPN  Valid    Tag  PPN  Valid    Tag  PPN  Valid    Tag  PPN  Valid
 0    03   –    0        09   0D   1        00   –    0        07   02   1
 1    03   2D   1        02   –    0        04   –    0        0A   –    0
 2    02   –    0        08   –    0        06   –    0        03   –    0
 3    07   –    0        03   0D   1        0A   34   1        02   –    0

Page 59: Conventional DRAM Organization

Simple Memory System Example

Addressing:
  14-bit virtual addresses
  12-bit physical addresses
  Page size = 64 bytes

Virtual address (bits 13-0): VPN (Virtual Page Number) = bits 13-6, VPO (Virtual Page Offset) = bits 5-0.
Physical address (bits 11-0): PPN (Physical Page Number) = bits 11-6, PPO (Physical Page Offset) = bits 5-0.

Page 60: Conventional DRAM Organization

Simple Memory System Cache

Cache: 16 lines, 4-byte line size, direct mapped.

Physical address bits 11-0: CT (cache tag) = bits 11-6, CI (cache index) = bits 5-2, CO (cache offset) = bits 1-0.

Idx  Tag  Valid  B0 B1 B2 B3     Idx  Tag  Valid  B0 B1 B2 B3
 0   19   1      99 11 23 11      8   24   1      3A 00 51 89
 1   15   0      –  –  –  –       9   2D   0      –  –  –  –
 2   1B   1      00 02 04 08      A   2D   1      93 15 DA 3B
 3   36   0      –  –  –  –       B   0B   0      –  –  –  –
 4   32   1      43 6D 8F 09      C   12   0      –  –  –  –
 5   0D   1      36 72 F0 1D      D   16   1      04 96 34 15
 6   31   0      –  –  –  –       E   13   1      83 77 1B D3
 7   16   1      11 C2 DF 03      F   14   0      –  –  –  –
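
For working the examples on the next three pages, here is a small helper that only splits addresses into the fields defined on pages 58-60; it does not consult the TLB, page table, or cache, so the hit/miss and byte answers are still left to the reader:

```c
#include <stdio.h>

/* Simple memory system: 14-bit VA, 12-bit PA, 64-byte pages,
 * 16-entry 4-way TLB, 16-line direct-mapped cache with 4-byte lines. */
static void split_va(unsigned va) {
    unsigned vpo  = va & 0x3F;            /* bits 5-0            */
    unsigned vpn  = va >> 6;              /* bits 13-6           */
    unsigned tlbi = vpn & 0x3;            /* low 2 bits of VPN   */
    unsigned tlbt = vpn >> 2;             /* high 6 bits of VPN  */
    printf("VA 0x%04X: VPN 0x%02X  VPO 0x%02X  TLBI %u  TLBT 0x%02X\n",
           va, vpn, vpo, tlbi, tlbt);
}

static void split_pa(unsigned pa) {
    unsigned co = pa & 0x3;               /* bits 1-0            */
    unsigned ci = (pa >> 2) & 0xF;        /* bits 5-2            */
    unsigned ct = pa >> 6;                /* bits 11-6           */
    printf("PA 0x%03X: PPN 0x%02X  PPO 0x%02X  CO %u  CI 0x%X  CT 0x%02X\n",
           pa, pa >> 6, pa & 0x3F, co, ci, ct);
}

int main(void) {
    split_va(0x03D4);                     /* virtual address from example #1 */
    split_pa(0x354);                      /* an arbitrary physical address   */
    return 0;
}
```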

Page 61: Conventional DRAM Organization

Address Translation Example #1

Virtual Address: 0x03D4

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____

[Bit-field templates: the 14-bit virtual address splits into TLBT, TLBI, and VPO (VPN = bits 13-6, VPO = bits 5-0); the 12-bit physical address splits into CT, CI, and CO (PPN = bits 11-6, PPO = bits 5-0).]

Page 62: Conventional DRAM Organization

Address Translation Example #2

Virtual Address: 0x0B8F

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____

[Bit-field templates: same virtual and physical address field layouts as in example #1.]

Page 63: Conventional DRAM Organization

Address Translation Example #3

Virtual Address: 0x0040

VPN ___  TLBI ___  TLBT ____  TLB Hit? __  Page Fault? __  PPN: ____

Physical Address

Offset ___  CI ___  CT ____  Hit? __  Byte: ____

[Bit-field templates: same virtual and physical address field layouts as in example #1.]

Page 64: Conventional DRAM Organization

Multi-Level Page Tables

Given:
  4 KB (2^12) page size
  32-bit address space
  4-byte PTE

Problem: would need a 4 MB page table! (2^20 entries * 4 bytes)

Common solution: multi-level page tables, e.g., a 2-level table (P6):
  Level 1 table: 1024 entries, each of which points to a Level 2 page table.
  Level 2 table: 1024 entries, each of which points to a page.

[Figure: a Level 1 table whose entries point to Level 2 tables, whose entries in turn point to pages.]
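
A minimal sketch of a two-level walk with these sizes (1024-entry tables, 4 KB pages); the entry formats are simplified stand-ins, not the real P6 formats:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u                   /* 4 KB = 2^12            */
#define ENTRIES   1024u                   /* 2^10 entries per table */

/* Simplified entries: 0 means "not present", otherwise a pointer/PPN. */
typedef struct { uintptr_t level2[ENTRIES]; } l1_table;   /* points to L2 tables */
typedef struct { uint32_t  ppn[ENTRIES];    } l2_table;   /* points to pages     */

static uint32_t walk(const l1_table *l1, uint32_t vaddr) {
    uint32_t i1     = vaddr >> 22;             /* top 10 bits: L1 index    */
    uint32_t i2     = (vaddr >> 12) & 0x3FF;   /* next 10 bits: L2 index   */
    uint32_t offset = vaddr & 0xFFF;           /* low 12 bits: page offset */

    const l2_table *l2 = (const l2_table *)l1->level2[i1];
    if (!l2 || !l2->ppn[i2]) {                 /* missing table or page    */
        fprintf(stderr, "page fault at 0x%08X\n", vaddr);
        exit(1);
    }
    return (l2->ppn[i2] << 12) | offset;       /* physical address         */
}

int main(void) {
    static l1_table l1;
    static l2_table l2;
    l2.ppn[5] = 0x00042;                       /* map one virtual page     */
    l1.level2[3] = (uintptr_t)&l2;

    uint32_t va = (3u << 22) | (5u << 12) | 0x123;
    printf("0x%08X -> 0x%08X\n", va, walk(&l1, va));
    return 0;
}
```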