July 2005 Computer Architecture, Memory System Design Slide 1 Part V Memory System Design

Part V Memory System Design


Page 1: Part V Memory System Design

Part V: Memory System Design

Page 2: Part V Memory System Design

Part V: Memory System Design

Topics in This Part

Chapter 17 Main Memory Concepts

Chapter 18 Cache Memory Organization

Chapter 19 Mass Memory Concepts

Chapter 20 Virtual Memory and Paging

Design problem – We want a memory unit that:
• Can keep up with the CPU’s processing speed
• Has enough capacity for programs and data
• Is inexpensive, reliable, and energy-efficient

Page 3: Part V Memory System Design


20 Virtual Memory and Paging

• Managing data transfers between main and mass memories is cumbersome
• Virtual memory automates this process
• The key to virtual memory’s success is the same as for cache memory

Topics in This Chapter

20.1 The Need for Virtual Memory

20.2 Address Translation in Virtual Memory

20.3 Translation Lookaside Buffer

20.4 Page Placement and Replacement

20.5 Main and Mass Memories

20.6 Improving Virtual Memory Performance

Page 4: Part V Memory System Design


20.1 The Need for Virtual Memory

Fig. 20.1 Program segments in main memory and on disk: the program and data occupy several disk tracks, while only the active pieces of the program and data, along with system and stack areas, reside in main memory; the rest is unused space.

Page 5: Part V Memory System Design


Memory Hierarchy: The Big Picture

Fig. 20.2 Data movement in a memory hierarchy: words move between registers and cache (transferred explicitly via load/store), lines move between cache and main memory (transferred automatically upon a cache miss), and pages move between main memory and virtual memory (transferred automatically upon a page fault).

Page 6: Part V Memory System Design


20.2 Address Translation in Virtual Memory

Fig. 20.3 Virtual-to-physical address translation parameters: a V-bit virtual address is divided into a virtual page number (V – P bits) and an offset in page (P bits); address translation replaces the virtual page number with an (M – P)-bit physical page number, yielding an M-bit physical address with the same P-bit offset.

Example 20.1

Determine the parameters in Fig. 20.3 for 32-bit virtual addresses, 4 KB pages, and 128 MB of byte-addressable main memory.

Solution: Physical addresses are 27 b and the byte offset in a page is 12 b; thus, virtual (physical) page numbers are 32 – 12 = 20 b (27 – 12 = 15 b).
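As a quick check, the parameters of Example 20.1 can be derived mechanically from the page size and memory size; this is a minimal sketch (function and variable names are ours, not from the slides):

```python
def translation_params(virt_bits, page_bytes, mem_bytes):
    """Return (offset_bits, virt_page_bits, phys_addr_bits, phys_page_bits)."""
    offset_bits = page_bytes.bit_length() - 1      # log2 of the page size
    phys_addr_bits = mem_bytes.bit_length() - 1    # log2 of the memory size
    virt_page_bits = virt_bits - offset_bits       # V - P in Fig. 20.3
    phys_page_bits = phys_addr_bits - offset_bits  # M - P in Fig. 20.3
    return offset_bits, virt_page_bits, phys_addr_bits, phys_page_bits

# Example 20.1: 32-bit virtual addresses, 4 KB pages, 128 MB main memory
print(translation_params(32, 4 * 1024, 128 * 1024 * 1024))
# (12, 20, 27, 15): 12-bit offset, 20-bit virtual page number,
# 27-bit physical address, 15-bit physical page number
```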

Page 7: Part V Memory System Design


Page Tables and Address Translation

Fig. 20.4 The role of the page table in the virtual-to-physical address translation process: a page table register points to the page table in main memory; the virtual page number selects an entry, whose valid bit, other flags, and physical page number complete the translation.
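The lookup that Fig. 20.4 depicts can be sketched in a few lines; the dict-based page table and the entry values below are illustrative assumptions, not part of the slides:

```python
PAGE_OFFSET_BITS = 12  # 4 KB pages, as in Example 20.1

# Page table: virtual page number -> (valid bit, physical page number).
# The mappings here are made up for illustration.
page_table = {0x00000: (1, 0x1A), 0x00001: (1, 0x03), 0x00002: (0, None)}

def translate(virtual_addr):
    vpn = virtual_addr >> PAGE_OFFSET_BITS             # virtual page number
    offset = virtual_addr & ((1 << PAGE_OFFSET_BITS) - 1)
    valid, ppn = page_table.get(vpn, (0, None))
    if not valid:
        # In a real system the OS would now bring the page in from disk.
        raise RuntimeError("page fault")
    return (ppn << PAGE_OFFSET_BITS) | offset          # physical address

print(hex(translate(0x00001ABC)))  # 0x3abc: page 1 maps to physical page 0x03
```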

Page 8: Part V Memory System Design


Protection and Sharing in Virtual Memory

Fig. 20.5 Virtual memory as a facilitator of sharing and memory protection: separate page tables for process 1 and process 2 hold pointers and flags, including permission bits; entries from both tables may map to the same main-memory page (one with only read accesses allowed, another with read and write accesses allowed), while other entries point to disk memory.

Page 9: Part V Memory System Design


The Latency Penalty of Virtual Memory

Referring back to Fig. 20.4: translating a virtual address requires memory access 1 to fetch the page table entry (valid bit, other flags, and physical page number) from main memory; only then can the physical address be formed for memory access 2, which retrieves the actual data. Every virtual memory access thus potentially costs two memory accesses.

Page 10: Part V Memory System Design


20.3 Translation Lookaside Buffer

Fig. 20.6 Virtual-to-physical address translation by a TLB and how the resulting physical address is used to access the cache memory: the virtual page number is split into TLB tags and a TLB index; when the tags match and the entry is valid, the physical page number read from the TLB is concatenated with the byte offset to form the physical address, which in turn supplies the cache index, the physical address tag, and the byte offset in word.

Page 11: Part V Memory System Design


Example 20.2: Address Translation via TLB

An address translation process converts a 32-bit virtual address to a 32-bit physical address. Memory is byte-addressable with 4 KB pages. A 16-entry, direct-mapped TLB is used. Specify the components of the virtual and physical addresses and the width of the various TLB fields.

Solution: The 32-bit virtual address divides into a 20-bit virtual page number and a 12-bit byte offset in page; the translated physical address has a 20-bit physical page number and the same 12-bit offset. For the 16-entry direct-mapped TLB, the low 4 bits of the virtual page number serve as the TLB index and the remaining 16 bits as the TLB tag.

TLB word width = 16-bit tag + 20-bit physical page number + 1 valid bit + other flags ≥ 37 bits
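The field widths of Example 20.2 can also be computed programmatically; a small sketch (the function name and interface are ours):

```python
def tlb_fields(virt_bits, page_bytes, tlb_entries, phys_page_bits, flag_bits=0):
    """Return (vpn_bits, index_bits, tag_bits, entry_bits) for a
    direct-mapped TLB, ignoring any flag bits unless given."""
    offset_bits = page_bytes.bit_length() - 1     # byte offset in page
    vpn_bits = virt_bits - offset_bits            # virtual page number
    index_bits = tlb_entries.bit_length() - 1     # log2 of TLB entries
    tag_bits = vpn_bits - index_bits              # rest of the VPN is the tag
    entry_bits = tag_bits + phys_page_bits + 1 + flag_bits  # +1 valid bit
    return vpn_bits, index_bits, tag_bits, entry_bits

# Example 20.2: 32-bit addresses, 4 KB pages, 16-entry TLB, 20-bit PPN
print(tlb_fields(32, 4 * 1024, 16, 20))
# (20, 4, 16, 37): 20-bit VPN, 4-bit index, 16-bit tag, 37-bit entry + flags
```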

Page 12: Part V Memory System Design


Virtual- or Physical-Address Cache?

Fig. 20.7 Options for where virtual-to-physical address translation occurs: after the cache (a virtual-address cache, then the TLB, then main memory), before the cache (the TLB, then a physical-address cache, then main memory), or overlapped with cache access (a hybrid-address cache indexed while the TLB translates).

Page 13: Part V Memory System Design


20.4 Page Placement and Replacement

Least-recently used (LRU) policy: effective, but hard to implement

Approximate versions of LRU are more easily implemented
Clock policy: the diagram below shows the reason for the name; a use bit is set to 1 whenever a page is accessed

Fig. 20.8 A scheme for the approximate implementation of LRU: use bits (0s and 1s) arranged around a clock face, shown (a) before replacement and (b) after replacement; the clock hand sweeps past pages whose use bit is 1, resetting it to 0, and replaces the first page whose use bit is 0.
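The clock policy can be sketched as follows; this is a minimal software model (the class interface and frame count are illustrative assumptions), not the hardware scheme of Fig. 20.8:

```python
class ClockReplacer:
    def __init__(self, num_frames):
        self.pages = [None] * num_frames   # page held in each frame
        self.used = [0] * num_frames       # one use bit per frame
        self.hand = 0                      # the rotating "clock hand"

    def access(self, page):
        """Touch a page; return True if the access caused a page fault."""
        if page in self.pages:
            self.used[self.pages.index(page)] = 1  # hit: set the use bit
            return False
        # Miss: sweep the hand, clearing use bits, until one is already 0.
        while self.used[self.hand] == 1:
            self.used[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page       # evict that frame, install new page
        self.used[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True

r = ClockReplacer(2)
print([r.access(p) for p in ["A", "B", "A", "C"]])
# [True, True, False, True]: faults on A, B, and C; the second A is a hit
```

Note how the last access shows why this is only approximate LRU: when every use bit is 1, the sweep clears them all and the victim is chosen by hand position, not true recency.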

Page 14: Part V Memory System Design


LRU Is Not Always the Best Policy

Example 20.2

Computing column averages for a 17 × 1024 table; 16-page memory

for j = [0 … 1023] {
    temp = 0;
    for i = [0 … 16]
        temp = temp + T[i][j]
    print(temp / 17.0);
}

Evaluate the page faults for row-major and column-major storage.

Solution

Fig. 20.9 Pagination of a 17 × 1024 table with row- or column-major storage: with row-major storage each 1024-element row fills one page (17 pages in all); with column-major storage the pages cut across the columns, each page holding about 60 complete 17-element columns.
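The page-fault counts for the two layouts can be checked by simulation; this sketch assumes 1024-word pages, 16 page frames, and true-LRU replacement (all modeling choices are ours):

```python
from collections import OrderedDict

ROWS, COLS, PAGE_WORDS, FRAMES = 17, 1024, 1024, 16

def faults(addr_of):
    """Count page faults for the column-averaging loops of Example 20.2,
    given a function mapping (i, j) to a word address."""
    frames = OrderedDict()                 # page -> None, kept in LRU order
    n = 0
    for j in range(COLS):                  # the loop nest from the example
        for i in range(ROWS):
            page = addr_of(i, j) // PAGE_WORDS
            if page in frames:
                frames.move_to_end(page)   # refresh LRU position
            else:
                n += 1
                if len(frames) == FRAMES:
                    frames.popitem(last=False)  # evict least recently used
                frames[page] = None
    return n

row_major = faults(lambda i, j: i * COLS + j)   # T[i][j] at i*1024 + j
col_major = faults(lambda i, j: j * ROWS + i)   # T[i][j] at j*17 + i
print(row_major, col_major)  # 17408 17
```

With row-major storage the inner loop cycles through 17 row pages in 16 frames, so LRU evicts exactly the page needed next and every one of the 17 × 1024 accesses faults; with column-major storage the trace is sequential and faults only once per page.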

Page 15: Part V Memory System Design


20.5 Main and Mass Memories

Fig. 20.10 Variations in the size of a program’s working set: W(t, x) plotted against time t.

Working set of a process, W(t, x): the set of pages accessed over the last x instructions at time t

The principle of locality ensures that the working set changes slowly
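The definition of W(t, x) translates directly into code; the page-reference trace and window size below are illustrative, not from the slides:

```python
def working_set(trace, t, x):
    """W(t, x): the set of pages referenced in the last x accesses
    of the trace, ending at time t (0-indexed)."""
    return set(trace[max(0, t - x + 1): t + 1])

# A made-up page-reference trace whose locality shifts partway through.
trace = [1, 2, 1, 3, 2, 2, 4, 5, 4, 4]

print(working_set(trace, 5, 4))  # {1, 2, 3}: pages touched at times 2..5
print(working_set(trace, 9, 4))  # {4, 5}: the working set has drifted
```

Because of locality, consecutive windows mostly overlap, which is why the working-set size in Fig. 20.10 changes slowly rather than jumping at every instruction.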

Page 16: Part V Memory System Design


20.6 Improving Virtual Memory Performance

Table 20.1 Memory hierarchy parameters and their effects on performance

Parameter variation | Potential advantages | Possible disadvantages

Larger main or cache size | Fewer capacity misses | Longer access time

Longer pages or lines | Fewer compulsory misses (prefetching effect) | Greater miss penalty

Greater associativity (for cache only) | Fewer conflict misses | Longer access time

More sophisticated replacement policy | Fewer conflict misses | Longer decision time, more hardware

Write-through policy (for cache only) | No write-back time penalty, easier write-miss handling | Wasted memory bandwidth, longer access time

Page 17: Part V Memory System Design

Impact of Technology on Virtual Memory

Fig. 20.11 Trends in disk, main memory, and CPU speeds: time (on a scale from ms down to ps) versus calendar year, 1980–2010, for disk seek time, DRAM access time, and CPU cycle time.

Page 18: Part V Memory System Design


Performance Impact of the Replacement Policy

Fig. 20.12 Dependence of page faults on the number of pages allocated and the page replacement policy

Fig. 20.12 Dependence of page faults on the number of pages allocated and the page replacement policy: page fault rate (0.00 to 0.04) versus pages allocated (0 to 15) for four policies: ideal (best possible), least recently used, approximate LRU, and first in, first out.

Page 19: Part V Memory System Design


Summary of Memory Hierarchy

Fig. 20.2 (repeated) Data movement in a memory hierarchy: words move between registers and cache (transferred explicitly via load/store), lines move between cache and main memory (transferred automatically upon a cache miss), and pages move between main memory and virtual memory (transferred automatically upon a page fault).

Cache memory: provides illusion of very high speed

Virtual memory: provides illusion of very large size

Main memory: reasonable cost, but slow & small