
Page 1: Question: Who Cares About the Memory Hierarchy?

CPU-DRAM Gap

• 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with a cache on chip)

Question: Who Cares About the Memory Hierarchy?

[Figure: relative performance (log scale, 1 to 1000) of CPU vs. DRAM, 1980-2000. µProc improves 60%/yr (“Moore’s Law”); DRAM improves 7%/yr (“Less’ Law?”). The Processor-Memory Performance Gap grows about 50% per year.]

Page 2: Question: Who Cares About the Memory Hierarchy?

Generations of Microprocessors

• Time of a full cache miss in instructions executed:

– 1st Alpha: 340 ns / 5.0 ns = 68 clks x 2 or 136
– 2nd Alpha: 266 ns / 3.3 ns = 80 clks x 4 or 320
– 3rd Alpha: 180 ns / 1.7 ns = 108 clks x 6 or 648

Page 3: Question: Who Cares About the Memory Hierarchy?

Caching

• Principle: results of operations that are expensive should be kept around for reuse

• Examples:
– CPU caching
– Forwarding table caching
– File caching
– Web caching
– Query caching
– Computation caching

• Most processor performance improvements in the last …
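The reuse principle can be sketched in a few lines of C. This is a minimal, hypothetical memoization example (the table size, hash, and names are illustrative assumptions, not from the slides): an expensive operation keeps recent results around so repeated calls reuse them instead of recomputing.

#include <string.h>

#define CACHE_SLOTS 64                      /* illustrative size, direct mapped */

struct entry { int valid; long key; long value; };
static struct entry cache[CACHE_SLOTS];

long expensive(long key);                   /* the costly operation being cached */

long cached_expensive(long key)
{
    struct entry *e = &cache[(unsigned long)key % CACHE_SLOTS];
    if (e->valid && e->key == key)          /* hit: reuse the stored result      */
        return e->value;
    e->valid = 1;                           /* miss: compute and keep it around  */
    e->key   = key;
    e->value = expensive(key);
    return e->value;
}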

Page 4: Question: Who Cares About the Memory Hierarchy?

What is a cache?
• Small, fast storage used to improve average access time to slow memory.
• Exploits spatial and temporal locality.
• In computer architecture, almost everything is a cache!
– Registers: a cache on variables
– First-level cache: a cache on second-level cache
– Second-level cache: a cache on memory
– Memory: a cache on disk (virtual memory)
– TLB: a cache on page table
– Branch prediction: a cache on prediction information?

[Hierarchy diagram: Proc/Regs -> L1-Cache -> L2-Cache -> Memory -> Disk, Tape, etc.; levels get bigger going down and faster going up.]

Page 5: Question: Who Cares About the Memory Hierarchy?

Example: 1 KB Direct Mapped Cache
• For a 2**N byte cache:
– The uppermost (32 - N) bits are always the Cache Tag
– The lowest M bits are the Byte Select (Block Size = 2**M)

[Diagram: a 32-bit address split into Cache Tag (ex: 0x50, bits 31-10, stored as part of the cache “state” with a Valid Bit), Cache Index (ex: 0x01, bits 9-5), and Byte Select (ex: 0x00, bits 4-0); tag and index together form the block address. The 1 KB cache holds 32 entries of 32-byte blocks (Byte 0 … Byte 31 in entry 0, up to Byte 992 … Byte 1023 in entry 31).]
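As a concrete illustration of that address split, here is a small C sketch using the 1 KB / 32-byte-block geometry from this slide (the constant names and the example address are invented for illustration):

#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 5                 /* 32-byte blocks   -> Byte Select = bits 4..0  */
#define INDEX_BITS 5                 /* 32 cache entries -> Cache Index = bits 9..5  */

int main(void)
{
    uint32_t addr = 0x00014020u;     /* arbitrary example address                    */
    uint32_t byte_select = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t index = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag = addr >> (BLOCK_BITS + INDEX_BITS);

    /* prints tag=0x50 index=0x1 byte_select=0x0, matching the slide's example */
    printf("tag=0x%x index=0x%x byte_select=0x%x\n", tag, index, byte_select);
    return 0;
}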

Page 6: Question: Who Cares About the Memory Hierarchy?

Set Associative Cache
• N-way set associative: N entries for each Cache Index
– N direct mapped caches operate in parallel
• Example: Two-way set associative cache
– Cache Index selects a “set” from the cache
– The two tags in the set are compared to the input in parallel
– Data is selected based on the tag result

[Diagram: two ways, each with Valid / Cache Tag / Cache Data (Cache Block 0) arrays indexed by the Cache Index; the address tag (Adr Tag) is compared against both stored tags, the compare results are ORed to produce Hit, and Sel1/Sel0 drive a mux that selects the Cache Block from the matching way.]
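A hedged C sketch of that two-way lookup, as a simulation-level model (types, geometry, and names are invented for illustration; hardware compares both tags in parallel, while software iterates):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_SETS 64                               /* illustrative geometry              */
#define WAYS     2

struct line { bool valid; uint32_t tag; uint8_t data[32]; };
static struct line cache[NUM_SETS][WAYS];

/* Returns the matching line on a hit, or NULL on a miss. */
struct line *lookup(uint32_t addr)
{
    uint32_t index = (addr >> 5) & (NUM_SETS - 1); /* Cache Index selects the set       */
    uint32_t tag   = addr >> 11;                   /* bits above 5 offset + 6 index bits */

    for (int way = 0; way < WAYS; way++) {         /* the two tag compares               */
        struct line *l = &cache[index][way];
        if (l->valid && l->tag == tag)
            return l;                              /* hit: this way's block is selected  */
    }
    return NULL;                                   /* miss in both ways                  */
}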

Page 7: Question: Who Cares About the Memory Hierarchy?

Disadvantage of Set Associative Cache

• N-way Set Associative Cache versus Direct Mapped Cache:

– N comparators vs. 1
– Extra MUX delay for the data
– Data comes AFTER Hit/Miss decision and set selection

• In a direct mapped cache, Cache Block is available BEFORE Hit/Miss:
– Possible to assume a hit and continue. Recover later if miss.

[Diagram: the same two-way set associative datapath as on the previous slide: tag compares in both ways, OR for Hit, and a mux (Sel1/Sel0) choosing the Cache Block.]

Page 8: Question: Who Cares About the Memory Hierarchy?

Basic Units of Cache

• Cache Line/Set (index)
• Cache Block (tag)
• Cache Sector or Subblock (valid bit)
• S: cache size, A: degree of associativity, B: block size, N: # of cache lines, I: # of index bits

S = B * A * N
N = 2^I / B
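A quick sanity check of these relations in C, using the 1 KB direct-mapped example from the earlier slide (B = 32 bytes, A = 1); reading I as the number of address bits below the tag is an assumption about the slide's notation:

#include <assert.h>

int main(void)
{
    int S = 1024, A = 1, B = 32;       /* 1 KB direct mapped, 32-byte blocks              */
    int N = S / (A * B);               /* number of cache lines: 32                       */
    int I = 10;                        /* assumed: address bits below the tag (2^10 bytes) */

    assert(S == B * A * N);            /* S = B*A*N   */
    assert(N == (1 << I) / B);         /* N = 2^I / B */
    return 0;
}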

Page 9: Question: Who Cares About the Memory Hierarchy?

Cache Performance

• Miss-oriented Approach to Memory Access:
– CPI_Execution includes ALU and Memory instructions

CPUtime = IC x (CPI_Execution + MemAccess/Inst x MissRate x MissPenalty) x CycleTime

CPUtime = IC x (CPI_Execution + MemMisses/Inst x MissPenalty) x CycleTime

• Separating out the Memory component entirely:
– AMAT = Average Memory Access Time
– CPI_AluOps does not include memory instructions

CPUtime = IC x (AluOps/Inst x CPI_AluOps + MemAccess/Inst x AMAT) x CycleTime

AMAT = HitTime + MissRate x MissPenalty
     = (HitTime_Inst + MissRate_Inst x MissPenalty_Inst)
       + (HitTime_Data + MissRate_Data x MissPenalty_Data)

Page 10: Question: Who Cares About the Memory Hierarchy?

Impact on Performance
• Suppose a processor executes at
– Clock Rate = 200 MHz (5 ns per cycle), Ideal (no misses) CPI = 1.1
– 50% arith/logic, 30% ld/st, 20% control
• Suppose that 10% of memory operations get a 50 cycle miss penalty
• Suppose that 1% of instructions get the same miss penalty
• CPI = ideal CPI + average stalls per instruction
  = 1.1 (cycles/ins)
  + [0.30 (DataMops/ins) x 0.10 (miss/DataMop) x 50 (cycle/miss)]
  + [1 (InstMop/ins) x 0.01 (miss/InstMop) x 50 (cycle/miss)]
  = (1.1 + 1.5 + 0.5) cycle/ins = 3.1
• 58% of the time the proc is stalled waiting for memory!
• AMAT = (1/1.3) x [1 + 0.01 x 50] + (0.3/1.3) x [1 + 0.1 x 50] = 2.54
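A tiny C check of the arithmetic on this slide (the values are taken directly from the bullets above; 1.3 memory accesses per instruction = 1 fetch + 0.3 data):

#include <stdio.h>

int main(void)
{
    double ideal_cpi  = 1.1;
    double data_stall = 0.30 * 0.10 * 50;   /* ld/st fraction x miss rate x penalty = 1.5 */
    double inst_stall = 1.00 * 0.01 * 50;   /* fetches x miss rate x penalty        = 0.5 */
    double cpi  = ideal_cpi + data_stall + inst_stall;
    double amat = (1.0 / 1.3) * (1 + 0.01 * 50) + (0.3 / 1.3) * (1 + 0.10 * 50);

    printf("CPI  = %.1f\n", cpi);           /* 3.1  */
    printf("AMAT = %.2f\n", amat);          /* 2.54 */
    return 0;
}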

Page 11: Question: Who Cares About the Memory Hierarchy?

Example: Harvard Architecture
• Unified vs Separate I&D (Harvard)
• Table on page 384:
– 16KB I&D: Inst miss rate = 0.64%, Data miss rate = 6.47%
– 32KB unified: Aggregate miss rate = 1.99%
• Which is better (ignore L2 cache)?
– Assume 33% data ops => 75% of accesses are from instructions (1.0/1.33)
– hit time = 1, miss time = 50
– Note that a data hit has 1 extra stall for the unified cache (only one port)

AMAT_Harvard = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05
AMAT_Unified = 75% x (1 + 1.99% x 50) + 25% x (1 + 1 + 1.99% x 50) = 2.24

[Diagram: Harvard organization (Proc with separate I-Cache-1 and D-Cache-1 backed by Unified Cache-2) vs. unified organization (Proc with Unified Cache-1 backed by Unified Cache-2).]
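The corresponding two-line check in C, using the miss rates and the one-cycle structural stall from this slide:

#include <stdio.h>

int main(void)
{
    double harvard = 0.75 * (1 + 0.0064 * 50) + 0.25 * (1 + 0.0647 * 50);
    double unified = 0.75 * (1 + 0.0199 * 50) + 0.25 * (1 + 1 + 0.0199 * 50);
    printf("AMAT Harvard = %.3f, Unified = %.3f\n", harvard, unified);  /* 2.049 vs 2.245 */
    return 0;
}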

Page 12: Question: Who Cares About the Memory Hierarchy?

Four Questions for Memory Hierarchy

Designers• Q1: Where can a block be placed in the upper

level? (Block placement)– Fully Associative, Set Associative, Direct Mapped

• Q2: How is a block found if it is in the upper level? (Block identification)

– Tag/Block

• Q3: Which block should be replaced on a miss? (Block replacement)

– Random, LRU

• Q4: What happens on a write? (Write strategy)

– Write Back or Write Through (with Write Buffer)

Page 13: Question: Who Cares About the Memory Hierarchy?

Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the miss penalty, or

3. Reduce the time to hit in the cache.

Page 14: Question: Who Cares About the Memory Hierarchy?

Reducing Misses• Classifying Misses: 3 Cs

– Compulsory—The first access to a block is not in the cache, so the block must be brought into the cache. Also called cold start misses or first reference misses.(Misses in even an Infinite Cache)

– Capacity—If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved.(Misses in Fully Associative Size X Cache)

– Conflict—If block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory & capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses.(Misses in N-way Associative, Size X Cache)

• More recent, 4th “C”:– Coherence - Misses caused by cache coherence.

Page 15: Question: Who Cares About the Memory Hierarchy?

3Cs Absolute Miss Rate (SPEC92)

[Chart: miss rate per type (0 to 0.14) vs. cache size (1 KB to 128 KB) for 1-way, 2-way, 4-way, and 8-way associativity, broken into Conflict, Capacity, and Compulsory components; compulsory misses are vanishingly small.]

Page 16: Question: Who Cares About the Memory Hierarchy?

2:1 Cache Rule

[Chart: the same 3Cs miss-rate breakdown vs. cache size (1 KB to 128 KB), illustrating the 2:1 Cache Rule.]

miss rate of a 1-way associative cache of size X = miss rate of a 2-way associative cache of size X/2

Page 17: Question: Who Cares About the Memory Hierarchy?

3Cs Relative Miss Rate

[Chart: miss rate per type shown as a percentage of the total (0% to 100%) vs. cache size (1 KB to 128 KB) for 1-way through 8-way associativity, split into Conflict, Capacity, and Compulsory.]

Flaws: for fixed block size
Good: insight => invention

Page 18: Question: Who Cares About the Memory Hierarchy?

How Can We Reduce Misses?
• 3 Cs: Compulsory, Capacity, Conflict
• In all cases, assume total cache size is not changed
• What happens if:

1) Change Block Size: Which of 3Cs is obviously affected?

2) Change Associativity: Which of 3Cs is obviously affected?

3) Change Compiler: Which of 3Cs is obviously affected?

Page 19: Question: Who Cares About the Memory Hierarchy?

Mapping Between Cache and Memory

[Diagram: memory blocks (binary addresses) mapped onto cache lines indexed 000 through 111 by their low-order index bits.]

Page 20: Question: Who Cares About the Memory Hierarchy?

Locality

• Temporal Locality: things that get referenced recently tend to be referenced in the near future

• Spatial Locality: things that are close to those that are referenced recently tend to be referenced in the near future

Page 21: Question: Who Cares About the Memory Hierarchy?

1. Reduce Misses via Larger Block Size

[Chart: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K.]

Page 22: Question: Who Cares About the Memory Hierarchy?

2. Reduce Misses via Higher Associativity

• 2:1 Cache Rule:
– Miss Rate of a DM cache of size N ≈ Miss Rate of a 2-way cache of size N/2
• Beware: Execution time is the only final measure!
– Will Clock Cycle time increase?
– Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%

Page 23: Question: Who Cares About the Memory Hierarchy?

Example: Avg. Memory Access Time vs. Miss Rate

• Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. CCT direct mapped

Cache Size (KB)   1-way   2-way   4-way   8-way
  1               2.33    2.15    2.07    2.01
  2               1.98    1.86    1.76    1.68
  4               1.72    1.67    1.61    1.53
  8               1.46    1.48    1.47    1.43
 16               1.29    1.32    1.32    1.32
 32               1.20    1.24    1.25    1.27
 64               1.14    1.20    1.21    1.23
128               1.10    1.17    1.18    1.20

(Red in the original slide marks cases where A.M.A.T. is not improved by more associativity.)

Page 24: Question: Who Cares About the Memory Hierarchy?

3. Reducing Misses via a“Victim Cache”

• How to combine fast hit time of direct mapped yet still avoid conflict misses?

• Add buffer to place data discarded from cache

• Jouppi [1990]: 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct mapped data cache

• Used in Alpha, HP machines

[Diagram: a small fully associative victim cache (four entries, each one cache line of data with its own tag and comparator) sitting between the direct mapped cache and the next lower level in the hierarchy.]

Page 25: Question: Who Cares About the Memory Hierarchy?

4. Reducing Misses via “Pseudo-Associativity”

• How to combine fast hit time of Direct Mapped and have the lower conflict misses of 2-way SA cache?

• Divide cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit)
• Drawback: CPU pipeline design is hard if a hit can take 1 or 2 cycles
– Used in MIPS R10000 L2 cache, similar in UltraSPARC
– Better for caches not tied directly to the processor (L2)

[Timing diagram: Hit Time < Pseudo Hit Time < Miss Penalty along the time axis.]

Page 26: Question: Who Cares About the Memory Hierarchy?

Skewed Associative Cache

• Different hash functions for different banks

• Concurrent version of Pseudo-associative cache

• Why does it work?

Page 27: Question: Who Cares About the Memory Hierarchy?

5. Reducing Misses by Hardware Prefetching of Instructions & Data
• E.g., Instruction Prefetching
– Alpha 21064 fetches 2 blocks on a miss
– Extra block placed in a “stream buffer”
– On miss, check the stream buffer
• Works with data blocks too:
– Jouppi [1990]: 1 data stream buffer got 25% of misses from a 4KB cache; 4 streams got 43%
– Palacharla & Kessler [1994]: for scientific programs, 8 streams got 50% to 70% of misses from two 64KB, 4-way set associative caches

• Prefetching relies on having extra memory bandwidth that can be used without penalty

Page 28: Question: Who Cares About the Memory Hierarchy?

6. Reducing Misses by Software Prefetching Data

• Data Prefetch
– Load data into register (HP PA-RISC loads)
– Cache Prefetch: load into cache (MIPS IV, PowerPC, SPARC v.9)
– Special prefetching instructions cannot cause faults; a form of speculative execution
• Prefetching comes in two flavors:
– Binding prefetch: requests load directly into register.
  » Must be correct address and register!
– Non-binding prefetch: load into cache.
  » Can be incorrect. Frees HW/SW to guess!
• Issuing prefetch instructions takes time
– Is the cost of prefetch issues < the savings in reduced misses?
– Higher superscalar reduces difficulty of issue bandwidth

Page 29: Question: Who Cares About the Memory Hierarchy?

7. Reducing Misses by Compiler Optimizations

• McFarling [1989] reduced cache misses by 75% on an 8KB direct mapped cache with 4 byte blocks, in software
• Instructions
– Reorder procedures in memory so as to reduce conflict misses
– Profiling to look at conflicts (using tools they developed)
• Data
– Merging Arrays: improve spatial locality by a single array of compound elements vs. 2 arrays
– Loop Interchange: change nesting of loops to access data in the order stored in memory
– Loop Fusion: combine 2 independent loops that have the same looping and some variables overlap
– Blocking: improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows

Page 30: Question: Who Cares About the Memory Hierarchy?

Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];

Reducing conflicts between val & key; improves spatial locality

Page 31: Question: Who Cares About the Memory Hierarchy?

Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality

Page 32: Question: Who Cares About the Memory Hierarchy?

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }

2 misses per access to a & c vs. one miss per access; improves temporal locality

Page 33: Question: Who Cares About the Memory Hierarchy?

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k]*z[k][j];
    x[i][j] = r;
  };

• Two Inner Loops:
– Read all NxN elements of z[]
– Read N elements of 1 row of y[] repeatedly
– Write N elements of 1 row of x[]
• Capacity Misses are a function of N & Cache Size:
– 2N^3 + N^2 words accessed => (assuming no conflict; otherwise …)
• Idea: compute on a BxB submatrix that fits

Page 34: Question: Who Cares About the Memory Hierarchy?

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B-1,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B-1,N); k = k+1)
          r = r + y[i][k]*z[k][j];
        x[i][j] = x[i][j] + r;
      };

• B is called the Blocking Factor
• Capacity misses drop from 2N^3 + N^2 to 2N^3/B + N^2
• Conflict Misses Too?

Page 35: Question: Who Cares About the Memory Hierarchy?

[Diagram: two 4x4 matrices Y and Z (elements numbered 1-16) with the sequence of partial products (Y1*Z1, Y5*Z1, Y9*Z1, Y13*Z1, …; then Y1*Z2, Y5*Z2, …) illustrating which elements are touched during the blocked computation.]

Page 36: Question: Who Cares About the Memory Hierarchy?

Reducing Conflict Misses by Blocking

• Conflict misses in caches that are not fully associative vs. blocking size
– Lam et al [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache

[Chart: miss rate (0 to 0.10) vs. blocking factor (0 to 150) for a fully associative cache and a direct mapped cache.]

Page 37: Question: Who Cares About the Memory Hierarchy?

Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

[Chart: performance improvement (1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]

Page 38: Question: Who Cares About the Memory Hierarchy?

Summary: Miss Rate Reduction

• 3 Cs: Compulsory, Capacity, Conflict
1. Reduce Misses via Larger Block Size
2. Reduce Misses via Higher Associativity
3. Reducing Misses via Victim Cache
4. Reducing Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Misses by Compiler Optimizations
• Prefetching comes in two flavors:
– Binding prefetch: requests load directly into register.
  » Must be correct address and register!
– Non-binding prefetch: load into cache.
  » Can be incorrect. Frees HW/SW to guess!

CPUtime = IC x (CPI_Execution + Memory accesses/Instruction x Miss rate x Miss penalty) x Clock cycle time

Page 39: Question: Who Cares About the Memory Hierarchy?

Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the miss penalty, or

3. Reduce the time to hit in the cache.

Page 40: Question: Who Cares About the Memory Hierarchy?

Write Policy: Write-Through vs Write-Back
• Write-through: all writes update cache and underlying memory/cache
– Can always discard cached data; the most up-to-date data is in memory
– Cache control bit: only a valid bit
• Write-back: all writes simply update the cache
– Can’t just discard cached data; may have to write it back to memory
– Cache control bits: both valid and dirty bits
• Other Advantages:
– Write-through:
  » memory (or other processors) always have the latest data
  » simpler management of cache
– Write-back:
  » much lower bandwidth, since data is often overwritten multiple times
  » better tolerance to long-latency memory?

Page 41: Question: Who Cares About the Memory Hierarchy?

WT vs. WB

• Write burst
• Error tolerance
• Speculative write: DB and WT

Page 42: Question: Who Cares About the Memory Hierarchy?

What happens on a Cache miss?
• For an in-order pipeline, 2 options:
– Freeze pipeline in the Mem stage (popular early on: Sparc, R4000)

  IF ID EX Mem stall stall stall … stall Mem Wr
     IF ID EX  stall stall stall … stall stall Ex Wr

– Use Full/Empty bits in registers + MSHR queue
  » MSHR = “Miss Status/Handler Registers” (Kroft)
    Each entry in this queue keeps track of the status of outstanding memory requests to one complete memory line.
    • Per cache line: keep info about the memory address.
    • For each word: the register (if any) that is waiting for the result.
    • Used to “merge” multiple requests to one memory line
  » A new load creates an MSHR entry and sets the destination register to “Empty”. The load is “released” from the pipeline.
  » An attempt to use the register before the result returns causes the instruction to block in the decode stage.
  » Limited “out-of-order” execution with respect to loads. Popular with in-order superscalar architectures.

• Out-of-order pipelines already have this functionality built in… (load queues, etc).
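A hedged C sketch of what one MSHR entry might track, following the per-line / per-word description above (the field names and sizes are illustrative assumptions, not the Kroft design verbatim):

#include <stdbool.h>
#include <stdint.h>

#define WORDS_PER_LINE 8              /* illustrative line size */

struct mshr_entry {
    bool     valid;                   /* entry tracks an outstanding miss          */
    uint64_t line_addr;               /* memory address of the whole cache line    */
    struct {
        bool waiting;                 /* a load is waiting on this word            */
        int  dest_reg;                /* register marked "Empty" until data return */
    } word[WORDS_PER_LINE];
};

/* Merging: a second miss to the same line just marks another waiting word in the
 * existing entry instead of issuing a new memory request. */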

Page 43: Question: Who Cares About the Memory Hierarchy?

Write Policy 2: Write Allocate vs Non-Allocate (What happens on a write miss)

• Write allocate: allocate a new cache line in the cache
– Usually means that you have to do a “read miss” to fill in the rest of the cache line!
– Alternative: per-word valid bits
• Write non-allocate (or “write-around”):
– Simply send the write data through to the underlying memory/cache; don’t allocate a new cache line!

Page 44: Question: Who Cares About the Memory Hierarchy?

Write Miss Policy

• Allocate and fetch: normal
• Allocate but no fetch
• No allocate: write around/bypassing
• Cacheability

Page 45: Question: Who Cares About the Memory Hierarchy?

Review: Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the time to hit in the cache.

3. Reduce the miss penalty

AMAT = HitTime + MissRate x MissPenalty

Page 46: Question: Who Cares About the Memory Hierarchy?

1. Fast Hit Times via Small and Simple Caches
• Why does the Alpha 21164 have an 8KB instruction cache and 8KB data cache + a 96KB second level cache?
– Small data cache and clock rate

• Direct Mapped, on chip

Page 47: Question: Who Cares About the Memory Hierarchy?

2. Fast hits by Avoiding Address Translation

• Send virtual address to cache? Called Virtually Addressed Cache or just Virtual Cache vs. Physical Cache

– Every time a process is switched, logically the cache must be flushed; otherwise you get false hits
  » Cost is time to flush + “compulsory” misses from an empty cache
– Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address
– I/O must interact with the cache, so it needs virtual addresses
• Solution to aliases
– Guarantee that the address bits covering the index field are the same for all aliases (with a direct mapped cache they must then be unique); called page coloring
• Solution to cache flush
– Add a process-identifier tag that identifies the process as well as the address within the process: can’t get a hit if the wrong process

Page 48: Question: Who Cares About the Memory Hierarchy?

Virtual Memory Hardware

• Translation Lookaside Buffer (TLB): cache for page table entries

• Typically fully associative
• Block size?
• TLB refill: HW or SW
• Variable page size
• Process ID in TLB tags

Page 49: Question: Who Cares About the Memory Hierarchy?

Virtually Addressed Caches

[Diagram: three organizations. (1) Conventional organization: CPU issues a VA, the TB (translation buffer) produces a PA, and the cache and memory are accessed with the PA. (2) Virtually Addressed Cache: the cache is accessed with the VA and translation happens only on a miss; this raises the synonym problem. (3) Overlapped organization: the cache access (virtual index, physical tags) proceeds in parallel with VA translation, which requires the cache index to remain invariant across translation; an L2 cache sits below.]

Page 50: Question: Who Cares About the Memory Hierarchy?

Virtually Indexed and Tagged

• Homonym problem
– A1 in P1 and A1 in P2 are mapped to different PAs
– A Process ID in the tag comes to the rescue
• Synonym problem
– A1 in P1 and A2 in P2 are mapped to the same PA
– Multiple copies of the PA can become inconsistent
– For a direct-mapped cache, if the index parts of A1 and A2 are the same, or if A1 and A2 are mapped to the same cache set, it is OK.

Page 51: Question: Who Cares About the Memory Hierarchy?

Virtually Indexed Physically Tagged

• If the index is in the physical part of the address, tag access can start in parallel with translation, so the stored tag can be compared to the physical tag
• Limits cache size to the page size: what if we want bigger caches and the same trick?
– Higher associativity moves the barrier to the right
– Page coloring

[Diagram: the virtual address split into Page Address (bits 31-12) and Page Offset (bits 11-0), aligned against the cache’s Address Tag, Index, and Block Offset fields; the index and block offset must fit within the page offset.]

Page 52: Question: Who Cares About the Memory Hierarchy?

3. Fast Hit Times Via Pipelined Writes
• Pipeline Tag Check and Update Cache as separate stages; the current write does its tag check while the previous write updates the cache
• Only STORES are in the pipeline; it is empty during a miss

  Store r2, (r1)    Check r1
  Add  --
  Sub  --
  Store r4, (r3)    M[r1] <- r2 & check r3

• The shaded element is a “Delayed Write Buffer”; it must be checked on reads; either complete the write or read from the buffer

Page 53: Question: Who Cares About the Memory Hierarchy?

4. Fast Writes on Misses Via Small Subblocks

• If most writes are 1 word, subblock size is 1 word, & write through then always write subblock & tag immediately

– Tag match and valid bit already set: Writing the block was proper, & nothing lost by setting valid bit on again.

– Tag match and valid bit not set: The tag match means that this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.

– Tag mismatch: This is a miss and will modify the data portion of the block. Since write-through cache, no harm was done; memory still has an up-to-date copy of the old value. Only the tag to the address of the write and the valid bits of the other subblock need be changed because the valid bit for this subblock has already been set

• Doesn’t work with write back due to last case
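A hedged C sketch of the three cases above for a write-through cache with 1-word subblocks (the data structures and names are invented for illustration):

#include <stdbool.h>
#include <stdint.h>

#define SUBBLOCKS 8                      /* illustrative: 8 one-word subblocks per block */

struct block { uint32_t tag; bool valid[SUBBLOCKS]; uint32_t data[SUBBLOCKS]; };

/* Write one word; write-through, so memory is updated regardless. */
void write_word(struct block *b, uint32_t tag, int sub, uint32_t word)
{
    if (b->tag != tag) {                 /* tag mismatch: treat as a miss                 */
        b->tag = tag;                    /* claim the block for the new address           */
        for (int i = 0; i < SUBBLOCKS; i++)
            b->valid[i] = false;         /* other subblocks are no longer valid           */
    }
    b->data[sub]  = word;                /* write the subblock ...                        */
    b->valid[sub] = true;                /* ... and set its valid bit (all three cases)   */
    /* memory also gets the write (write-through), so nothing is lost on a mismatch */
}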

Page 54: Question: Who Cares About the Memory Hierarchy?

Review: Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the time to hit in the cache.

3. Reduce the miss penalty

AMAT = HitTime + MissRate x MissPenalty

Page 55: Question: Who Cares About the Memory Hierarchy?

0. Faster Memory

• This requires a bit of discussion. • Hold a bit until we discuss memory.

Page 56: Question: Who Cares About the Memory Hierarchy?

1. Reducing Miss Penalty: Read Priority over Write on Miss
• Write-through with write buffers offers RAW conflicts with main memory reads on cache misses
– If we simply wait for the write buffer to empty, we might increase the read miss penalty (old MIPS 1000 by 50%)
– Check write buffer contents before the read; if there are no conflicts, let the memory access continue
• Alternative: Write Back
– Read miss replacing a dirty block
– Normal: write the dirty block to memory, and then do the read
– Instead: copy the dirty block to a write buffer, then do the read, and then do the write
– The CPU stalls less since it restarts as soon as the read is done

Page 57: Question: Who Cares About the Memory Hierarchy?

1. Reducing Penalty: Read Priority over Write on Miss
• A Write Buffer is needed between the Cache and Memory
– Processor: writes data into the cache and the write buffer
– Memory controller: writes contents of the buffer to memory
• The write buffer is just a FIFO:
– Typical number of entries: 4
– Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
– Must handle burst behavior as well!

[Diagram: Processor -> Cache and Write Buffer -> DRAM.]

Page 58: Question: Who Cares About the Memory Hierarchy?

RAW Hazards from Write Buffer!
• Write-Buffer issues: could introduce a RAW hazard with memory!
– The write buffer may contain the only copy of valid data => reads to memory may get the wrong result if we ignore the write buffer
• Solutions:
– Simply wait for the write buffer to empty before servicing reads:
  » Might increase read miss penalty (old MIPS 1000 by 50%)
– Check write buffer contents before the read (“fully associative”):
  » If no conflicts, let the memory access continue
  » Else grab the data from the buffer
• Can the Write Buffer help with Write Back?
– Read miss replacing a dirty block
  » Copy the dirty block to the write buffer while starting the read to memory

[Timing diagram: RAS/CAS and data phases for write and read traffic between the processor and DRAM, showing how buffered writes and reads interleave.]

Page 59: Question: Who Cares About the Memory Hierarchy?

2. Reduce Miss Penalty: Subblock Placement
• Don’t have to load the full block on a miss
• Have valid bits per subblock to indicate validity
• (Originally invented to reduce tag storage)

[Diagram: cache blocks with one valid bit per subblock.]

Page 60: Question: Who Cares About the Memory Hierarchy?

3. Reduce Miss Penalty: Early Restart and Critical Word First
• Don’t wait for the full block to be loaded before restarting the CPU
– Early restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
– Critical Word First—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
• Generally useful only with large blocks
• Spatial locality is a problem: we tend to want the next sequential word, so it is not clear whether early restart benefits

Page 61: Question: Who Cares About the Memory Hierarchy?

4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
• A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
– requires Full/Empty bits on registers or out-of-order execution
– requires multi-bank memories
• “hit under miss” reduces the effective miss penalty by working during a miss vs. ignoring CPU requests
• “hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
– Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
– Requires multiple memory banks (otherwise cannot support it)
– Pentium Pro allows 4 outstanding memory misses

Page 62: Question: Who Cares About the Memory Hierarchy?

Value of Hit Under Miss for SPEC
• FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
• Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
• 8 KB Data Cache, Direct Mapped, 32B block, 16 cycle miss

[Chart: “Hit under n Misses”: average memory access time (0 to 2, normalized) for integer benchmarks (eqntott, espresso, xlisp, compress, mdljsp2) and floating point benchmarks (ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora), comparing hit-under 0->1, 1->2, and 2->64 misses against the base.]

Page 63: Question: Who Cares About the Memory Hierarchy?

5. Second level cache

• L2 Equations

AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)

• Definitions:
– Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
– Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
– Global Miss Rate is what matters
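A hedged C helper that evaluates these L2 equations (the example numbers in main are made up to show usage, not taken from the slides):

#include <stdio.h>

/* AMAT for a two-level hierarchy, following the equations above. */
double amat2(double hit_l1, double mr_l1, double hit_l2, double mr_l2, double mp_l2)
{
    double miss_penalty_l1 = hit_l2 + mr_l2 * mp_l2;
    return hit_l1 + mr_l1 * miss_penalty_l1;
}

int main(void)
{
    /* hypothetical numbers: 1-cycle L1 hit, 4% L1 miss rate, 10-cycle L2 hit,
       25% local L2 miss rate, 100-cycle memory access => AMAT = 2.4 cycles */
    printf("AMAT = %.2f cycles\n", amat2(1, 0.04, 10, 0.25, 100));
    return 0;
}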

Page 64: Question: Who Cares About the Memory Hierarchy?

Comparing Local and Global Miss Rates

• 32 KByte 1st level cache; increasing 2nd level cache
• Global miss rate is close to the single-level cache rate provided L2 >> L1
• Don’t use the local miss rate
• L2 is not tied to the CPU clock cycle!
• Cost & A.M.A.T.
• Generally fast hit times and fewer misses
• Since hits are few, target miss reduction

[Charts: local and global L2 miss rates vs. L2 cache size, on linear and log scales.]

Page 65: Question: Who Cares About the Memory Hierarchy?

Reducing Misses: Which apply to L2 Cache?

• Reducing Miss Rate
1. Reduce Misses via Larger Block Size
2. Reduce Conflict Misses via Higher Associativity
3. Reducing Conflict Misses via Victim Cache
4. Reducing Conflict Misses via Pseudo-Associativity
5. Reducing Misses by HW Prefetching Instr, Data
6. Reducing Misses by SW Prefetching Data
7. Reducing Capacity/Conf. Misses by Compiler Optimizations

Page 66: Question: Who Cares About the Memory Hierarchy?

L2 cache block size & A.M.A.T.
• 32KB L1, 8 byte path to memory

Block Size (bytes):   16     32     64     128    256    512
Relative CPU Time:    1.36   1.28   1.27   1.34   1.54   1.95

Page 67: Question: Who Cares About the Memory Hierarchy?

Reducing Miss Penalty Summary

• Five techniques
– Read priority over write on miss
– Subblock placement
– Early Restart and Critical Word First on miss
– Non-blocking Caches (Hit under Miss, Miss under Miss)
– Second Level Cache
• Can be applied recursively to Multilevel Caches
– Danger is that time to DRAM will grow with multiple levels in between
– First attempts at L2 caches can make things worse, since the increased worst case is worse

CPUtime = IC x (CPI_Execution + Memory accesses/Instruction x Miss rate x Miss penalty) x Clock cycle time

Page 68: Question: Who Cares About the Memory Hierarchy?

Cache Optimization Summary

Technique                           MR   MP   HT   Complexity
Larger Block Size                   +    –         0
Higher Associativity                +         –    1
Victim Caches                       +              2
Pseudo-Associative Caches           +              2
HW Prefetching of Instr/Data        +              2
Compiler Controlled Prefetching     +              3
Compiler Reduce Misses              +              0
Priority to Read Misses                  +         1
Subblock Placement                       +    +    1
Early Restart & Critical Word 1st        +         2
Non-Blocking Caches                      +         3
Second Level Caches                      +         2
Small & Simple Caches               –         +    0
Avoiding Address Translation                  +    2
Pipelining Writes                             +    1

(MR = miss rate, MP = miss penalty, HT = hit time)

Page 69: Question: Who Cares About the Memory Hierarchy?

What is the Impact of What You’ve Learned About Caches?
• 1960-1985: Speed = ƒ(no. operations)
• 1990
– Pipelined Execution & Fast Clock Rate
– Out-of-Order execution
– Superscalar Instruction Issue
• 1998: Speed = ƒ(non-cached memory accesses)
• What does this mean for
– Compilers? Operating Systems? Algorithms? Data Structures?

[Chart: the CPU vs. DRAM relative performance curves, 1980-2000, repeated from the first slide.]

Page 70: Question: Who Cares About the Memory Hierarchy?

Cache Parameter Estimation

• D: cache size, b: cache line size, a: degree of associativity

• A strided access to an N-element array, with stride being s, s=1, 2, 4, … N/2

• Each iteration contains a read and write of the same array element

• Four cases:
– N <= D: T_no-miss
– N > D and 1 <= s < b: T_no-miss + M*s/b
– N > D and N/s > D/b and b <= s < N/a: T_no-miss + M
– N > D and N/a <= s <= N/2: T_no-miss
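This is the classic strided microbenchmark style of measurement; a hedged C sketch of the access loop described above, with the timing harness left out (array size and names are illustrative):

#include <stddef.h>

#define MAX_N (1 << 22)                  /* illustrative maximum array size                     */
static volatile int a[MAX_N];            /* volatile keeps the loop from being optimized away   */

/* One experiment: N-element array, stride s; each iteration reads and writes
 * the same array element, as the slide describes. */
void run(size_t n, size_t s)
{
    for (size_t i = 0; i < n; i += s)
        a[i] = a[i] + 1;
}

/* Sweep n and s (s = 1, 2, 4, ..., n/2), time run(), and read the cache size D,
 * line size b, and associativity a off the plateaus in time per access. */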

Page 71: Question: Who Cares About the Memory Hierarchy?

Virtual Memory Hardware in X86 Architecture

[Diagram: x86 address translation. The Virtual Address (Segment Selector + Offset) indexes the GDT/LDT to get a segment descriptor (Base 31:24, Limit 19:16, Base 23:16, Limit 15:00, Base 15:00, P, DPL fields); adding the base to the offset yields the Linear Address, which the two-level page table translates into the Physical Address (page table entries hold the Page Frame Address plus P, W, U bits).]

Page 72: Question: Who Cares About the Memory Hierarchy?

Segment Limit Check

The x86 architecture’s virtual memory hardware supports both segmentation and paging:

Virtual Address (Segment Selector + Offset) --segmentation--> Linear Address --paging--> Physical Address

The segmentation step also performs the limit check: base + offset <= limit.

Page 73: Question: Who Cares About the Memory Hierarchy?

Array Bound Checking

• Prevent unauthorized modification of the address space (e.g., return address or bank account) through buffer overflow
• The cleanest solution: check each memory reference with respect to the upper/lower limits of its associated object
1. Figure out which is the associated object
2. Perform the limit check (more time-consuming)

• Current software-based array bound checking methods: 2-30 times slowdown

Page 74: Question: Who Cares About the Memory Hierarchy?

The CASH Approach

• Goal: Exploiting segment limit check hardware to perform array bound checking for free

• Idea: Each array or buffer is treated as a separate segment and referenced accordingly

Original loop:

for (i = M; i < N; i++) {
  B[i] = 5;
}

With CASH, B is referenced through its own segment:

offset = &(B[M]) - B_Segment_Base;
GS = B_Segment_Selector;
for (i = M; i < N; i++) {
  GS:offset = 5;
  offset += 4;
}

Page 75: Question: Who Cares About the Memory Hierarchy?

Array Access Code Generation

A[i] = 10

Without array bound checking:
movl -60(%ebp), %eax        ; load i
leal 0(, %eax, 4), %edx     ; i * 4
movl -56(%ebp), %eax        ; load a
movl $10, (%edx, %eax)      ; mem[a+4*i] = 10

Checking array bounds using CASH:
movl -60(%ebp), %eax        ; load i
leal 0(, %eax, 4), %edx     ; i * 4
movl -56(%ebp), %eax        ; load a
movw -52(%ebp), %ecx        ; load a's shadow structure ptr
movw 0(%ecx), %gs           ; load GS
subl 4(%ecx), %eax          ; compute offset from base
movl $10, %gs:(%edx,%eax)   ; check bounds and Mem[a+4*i] = 10

Page 76: Question: Who Cares About the Memory Hierarchy?

Intra-AS Protection

• A program and an untrusted component: OS and its device drivers, Apache and CGIs, Java program and C components, etc.

• Run kernel at SPL0 (0-4GB), extensible application at SPL2 (0-3GB) and extension at SPL3 (0-3GB)

• Exposed pages of extensible application at PPL 1

• Design Issues
– Control transfer
– Data sharing
– Libraries

Page 77: Question: Who Cares About the Memory Hierarchy?

PaX

• Non-executable stack and heap
• Invariant: VM areas that can be modified cannot be executed; VM areas that can be executed cannot be modified

• Partition the address space into two disjoint segments, one CS and one DS

• Updates happen in DS, and instruction fetch happen in CS

• Use randomization to address return-to-libc attacks

Page 78: Question: Who Cares About the Memory Hierarchy?

Main Memory Background
• Performance of Main Memory:
– Latency: Cache Miss Penalty
  » Access Time: time between request and word arrival
  » Cycle Time: time between requests
– Bandwidth: I/O & Large Block Miss Penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
– Dynamic since it needs to be refreshed periodically (8 ms, 1% of time)
– Addresses divided into 2 halves (memory as a 2D matrix):
  » RAS or Row Access Strobe
  » CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
– No refresh (6 transistors/bit vs. 1 transistor)
– Size: DRAM/SRAM 4-8, Cost/Cycle time: SRAM/DRAM 8-16

Page 79: Question: Who Cares About the Memory Hierarchy?

Main Memory Deep Background

• “Out-of-Core”, “In-Core,” “Core Dump”?
• “Core memory”?
• Non-volatile, magnetic
• Lost to 4 Kbit DRAM (today using 64 Kbit DRAM)
• Access time 750 ns, cycle time 1500-3000 ns

Page 80: Question: Who Cares About the Memory Hierarchy?

Static vs. Dynamic Memory

[Diagram: 6-Transistor SRAM cell with a word (row select) line and complementary bit / bit-bar lines; some designs replace two transistors with pullups to save area.]

• Write:
1. Drive bit lines (bit = 1, bit-bar = 0)
2. Select row
• Read:
1. Precharge bit and bit-bar to Vdd or Vdd/2 => make sure they are equal!
2. Select row
3. Cell pulls one line low
4. Sense amp on the column detects the difference between bit and bit-bar

Page 81: Question: Who Cares About the Memory Hierarchy?

1-Transistor Memory Cell (DRAM)

[Diagram: one transistor and one capacitor per cell, with a row select line and a bit line.]

• Write:
1. Drive bit line
2. Select row
• Read:
1. Precharge bit line to Vdd/2
2. Select row
3. Cell and bit line share charge
  » Very small voltage change on the bit line
4. Sense (fancy sense amp)
  » Can detect changes of ~1 million electrons
5. Write: restore the value (the read is destructive)
• Refresh
1. Just do a dummy read to every cell.

Page 82: Question: Who Cares About the Memory Hierarchy?

DRAM Capacitors: more capacitance in a small area

• Trench capacitors:
– Logic ABOVE capacitor
– Gain in surface area of capacitor
– Better scaling properties
– Better planarization
• Stacked capacitors:
– Logic BELOW capacitor
– Gain in surface area of capacitor
– 2-dim cross-section quite small

Page 83: Question: Who Cares About the Memory Hierarchy?

Classical DRAM Organization (square)

[Diagram: a square RAM cell array with a row decoder driven by the row address, word (row) select lines, bit (data) lines, and a column selector & I/O circuits driven by the column address; each intersection represents a 1-T DRAM cell.]

• Row and Column Address together:
– Select 1 bit at a time

Page 84: Question: Who Cares About the Memory Hierarchy?

DRAM Read Timing

[Timing diagram: a 256K x 8 DRAM with 9 multiplexed address pins and RAS_L, CAS_L, WE_L, OE_L control signals. The row address is latched on RAS_L and the column address on CAS_L; data appears after the read access time and output-enable delay, and the data bus is high-Z otherwise. An early read cycle asserts OE_L before CAS_L; a late read cycle asserts OE_L after CAS_L.]

• Every DRAM access begins at:
– the assertion of RAS_L
– 2 ways to read: early or late v. CAS

Page 85: Question: Who Cares About the Memory Hierarchy?

4 Key DRAM Timing Parameters

• tRAC: minimum time from the RAS line falling to valid data output.
– Quoted as the speed of a DRAM when you buy it
– A typical 4Mb DRAM has tRAC = 60 ns
– Speed of DRAM, since it is on the purchase sheet?
• tRC: minimum time from the start of one row access to the start of the next.
– tRC = 110 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tCAC: minimum time from the CAS line falling to valid data output.
– 15 ns for a 4Mbit DRAM with a tRAC of 60 ns
• tPC: minimum time from the start of one column access to the start of the next.
– 35 ns for a 4Mbit DRAM with a tRAC of 60 ns

Page 86: Question: Who Cares About the Memory Hierarchy?

Main Memory Performance
• DRAM (Read/Write) Cycle Time >> DRAM (Read/Write) Access Time
– 2:1; why?
• DRAM (Read/Write) Cycle Time:
– How frequently can you initiate an access?
– Analogy: a little kid can only ask his father for money on Saturday
• DRAM (Read/Write) Access Time:
– How quickly will you get what you want once you initiate an access?
– Analogy: as soon as he asks, his father will give him the money
• DRAM Bandwidth Limitation analogy:
– What happens if he runs out of money on Wednesday?

[Timing diagram: the access time is the shorter interval inside the longer cycle time between successive accesses.]

Page 87: Question: Who Cares About the Memory Hierarchy?

Increasing Bandwidth - Interleaving

Access pattern without interleaving:

[Diagram: CPU and a single memory; the access for D2 cannot start until D1 is available, so each access takes a full memory cycle.]

Access pattern with 4-way interleaving:

[Diagram: CPU with four memory banks; accesses to Bank 0, Bank 1, Bank 2, and Bank 3 are overlapped, and Bank 0 can be accessed again once its cycle completes.]

Page 88: Question: Who Cares About the Memory Hierarchy?

Main Memory Performance
• Simple:
– CPU, Cache, Bus, Memory all the same width (32 bits)
• Interleaved:
– CPU, Cache, Bus 1 word; Memory N modules (4 modules); example is word interleaved
• Wide:
– CPU/Mux 1 word; Mux/Cache, Bus, Memory N words (Alpha: 64 bits & 256 bits)

Page 89: Question: Who Cares About the Memory Hierarchy?

Main Memory Performance
• Timing model
– 1 cycle to send the address,
– 4 for access time, 10 cycle time, 1 to send data
– Cache Block is 4 words
• Simple M.P. = 4 x (1 + 10 + 1) = 48
• Wide M.P. = 1 + 10 + 1 = 12
• Interleaved M.P. = 1 + 10 + 1 + 3 = 15

[Diagram: four word-interleaved banks; Bank 0 holds words 0, 4, 8, 12; Bank 1 holds 1, 5, 9, 13; Bank 2 holds 2, 6, 10, 14; Bank 3 holds 3, 7, 11, 15.]

Page 90: Question: Who Cares About the Memory Hierarchy?

Avoiding Bank Conflicts

• Lots of banks

int x[256][512];
for (j = 0; j < 512; j = j+1)
  for (i = 0; i < 256; i = i+1)
    x[i][j] = 2 * x[i][j];

• Even with 128 banks, since 512 is a multiple of 128, the word accesses conflict on the same bank
• SW: loop interchange or declaring the array not a power of 2 (“array padding”), as sketched below
• HW: prime number of banks
– bank number = address mod number of banks
– address within bank = address / number of words in bank
– modulo & divide per memory access with a prime number of banks?
– address within bank = address mod number of words in bank
– bank number? easy if 2^N words per bank
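A hedged C sketch of the software fix mentioned above: padding the row length so that consecutive x[i][j] accesses in the inner loop no longer map to the same bank (or cache set); the pad of one word is an illustrative choice.

#define ROWS 256
#define COLS 512

/* Padded to 513 words per row: row starts are no longer multiples of 512,
 * so column-order accesses spread across banks instead of colliding. */
int x[ROWS][COLS + 1];

void scale(void)
{
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            x[i][j] = 2 * x[i][j];
}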

Page 91: Question: Who Cares About the Memory Hierarchy?

Fast Bank Number
• Chinese Remainder Theorem: as long as two sets of integers a_i and b_i follow these rules

  b_i = x mod a_i,  0 <= b_i < a_i,  0 <= x < a_0 x a_1 x a_2

and a_i and a_j are co-prime if i != j, then the integer x has only one solution (unambiguous mapping):
– bank number = b_0, number of banks = a_0 (= 3 in the example)
– address within bank = b_1, number of words in bank = a_1 (= 8 in the example)
– N-word address 0 to N-1, prime number of banks, words per bank a power of 2

Seq. Interleaved vs. Modulo Interleaved:

Address            Seq. Interleaved          Modulo Interleaved
within Bank        Bank 0  Bank 1  Bank 2    Bank 0  Bank 1  Bank 2
0                   0       1       2         0      16       8
1                   3       4       5         9       1      17
2                   6       7       8        18      10       2
3                   9      10      11         3      19      11
4                  12      13      14        12       4      20
5                  15      16      17        21      13       5
6                  18      19      20         6      22      14
7                  21      22      23        15       7      23
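A small C check of the modulo-interleaved column of this table (3 banks, 8 words per bank, word addresses 0-23):

#include <stdio.h>

int main(void)
{
    for (int addr = 0; addr < 24; addr++) {
        int bank   = addr % 3;   /* bank number = b0 = x mod a0         */
        int within = addr % 8;   /* address within bank = b1 = x mod a1 */
        printf("word %2d -> bank %d, offset %d\n", addr, bank, within);
    }
    return 0;
}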

Page 92: Question: Who Cares About the Memory Hierarchy?

Alternative: Incremental Computation

V = <Z, F, S, L, R>
Z + F*R = W1 * D + B1
S*R = Kw * D + Kb
W2 = (Z + F*R + S*R) / D = W1 + Kw + (B1 + Kb) / D
B2 = (Z + F*R + S*R) mod D = (B1 + Kb) mod D

Page 93: Question: Who Cares About the Memory Hierarchy?

Independent Memory Banks

• Memory banks for independent accesses vs. faster sequential accesses
– Multiprocessor
– I/O
– CPU with Hit under n Misses, Non-blocking Cache
• Superbank: all memory active on one block transfer (or Bank)
• Bank: portion within a superbank that is word interleaved (or Subbank)

[Address layout: Superbank Number | Superbank Offset, where the Superbank Offset is further split into Bank Number | Bank Offset.]

Page 94: Question: Who Cares About the Memory Hierarchy?

Independent Memory Banks

• How many banks?
  number of banks >= number of clocks to access a word in a bank
– For sequential accesses; otherwise we will return to the original bank before it has the next word ready
– (like in the vector case)
• Increasing DRAM size => fewer chips => harder to have banks

Page 95: Question: Who Cares About the Memory Hierarchy?

Fast Memory Systems: DRAM specific
• Multiple CAS accesses: several names (page mode)
– Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address the gap; what will they cost, will they survive?
– RAMBUS: startup company; reinvent the DRAM interface
  » Each chip a module vs. slice of memory
  » Short bus between CPU and chips
  » Does own refresh
  » Variable amount of data returned
  » 1 byte / 2 ns (500 MB/s per chip)
– Synchronous DRAM: 2 banks on chip, a clock signal to the DRAM, transfers synchronous to the system clock (66-150 MHz)
– Intel claims RAMBUS Direct (16 b wide) is the future PC memory
• Niche memory or main memory?
– e.g., Video RAM for frame buffers: DRAM + fast serial output

Page 96: Question: Who Cares About the Memory Hierarchy?

Fast Page Mode Operation
• Regular DRAM organization:
– N rows x N columns x M bits
– Read & write M bits at a time
– Each M-bit access requires a RAS/CAS cycle
• Fast Page Mode DRAM
– N x M “SRAM” register to save a row
• After a row is read into the register
– Only CAS is needed to access other M-bit blocks on that row
– RAS_L remains asserted while CAS_L is toggled

[Diagram and timing: the DRAM array (N rows x N cols) feeds an N x M “SRAM” row register selected by the row address; the column address then picks M-bit blocks out of the register. RAS_L is asserted once with the row address, then successive column addresses on CAS_L deliver the 1st, 2nd, 3rd, and 4th M-bit accesses.]

Page 97: Question: Who Cares About the Memory Hierarchy?

SDRAM timing

• Micron 128M-bit DRAM (using the 2Meg x 16bit x 4bank version)
– Row (12 bits), bank (2 bits), column (9 bits)

[Timing diagram: RAS to open a row (possibly in a new bank), CAS with its CAS latency, burst READ, then ending the row access.]

Page 98: Question: Who Cares About the Memory Hierarchy?

DRAM History
• DRAMs: capacity +60%/yr, cost -30%/yr
– 2.5X cells/area, 1.5X die size in 3 years
• ’98 DRAM fab line costs $2B
– DRAM only: density, leakage v. speed
• Rely on increasing numbers of computers & memory per computer (60% market)
– SIMM or DIMM is the replaceable unit => computers can use any generation DRAM
• Commodity, second-source industry => high volume, low profit, conservative
– Little organization innovation in 20 years
• Order of importance: 1) Cost/bit 2) Capacity
– First RAMBUS: 10X BW, +30% cost => little impact

Page 99: Question: Who Cares About the Memory Hierarchy?

DRAM Future: 1 Gbit+ DRAM

                Mitsubishi        Samsung
Blocks          512 x 2 Mbit      1024 x 1 Mbit
Clock           200 MHz           250 MHz
Data Pins       64                16
Die Size        24 x 24 mm        31 x 21 mm
Metal Layers    3                 4
Technology      0.15 micron       0.16 micron

– Die sizes will be much smaller in production

Page 100: Question: Who Cares About the Memory Hierarchy?

DRAMs per PC over Time

Minimum memory size vs. DRAM generation (number of DRAM chips per PC):

                 ‘86    ‘89    ‘92    ‘96    ‘99    ‘02
                 1 Mb   4 Mb   16 Mb  64 Mb  256 Mb 1 Gb
  4 MB           32     8
  8 MB                  16     4
 16 MB                         8      2
 32 MB                                4      1
 64 MB                                8      2
128 MB                                       4      1
256 MB                                       8      2

Page 101: Question: Who Cares About the Memory Hierarchy?

Potential DRAM Crossroads?

• After 20 years of 4X every 3 years, running into a wall? (64 Mb - 1 Gb)
• How can we keep $1B fab lines full if people buy fewer DRAMs per computer?
• Cost/bit -30%/yr if we stop 4X / 3 yr?
• What will happen to the $40B/yr DRAM industry?

Page 102: Question: Who Cares About the Memory Hierarchy?

Cache Cross Cutting Issues

• Superscalar CPU & number of cache ports must match: number of memory accesses per cycle?
• Speculative execution and the non-faulting option on memory/TLB
• Parallel execution vs. cache locality
– Want far separation to find independent operations vs. want reuse of data accesses to avoid misses
• Cache consistency in I/O and MP => multiple copies of data
– Consistency

Page 103: Question: Who Cares About the Memory Hierarchy?

Alpha 21064

• Separate Instr & Data TLBs & Caches
• TLBs fully associative
• TLB updates in SW (“Priv Arch Libr”)
• Caches 8KB direct mapped, write through
• Critical 8 bytes first
• Prefetch instr. stream buffer
• 2 MB L2 cache, direct mapped, WB (off-chip)
• 256 bit path to main memory, 4 x 64-bit modules
• Victim Buffer: to give read priority over write
• 4 entry write buffer between D$ & L2$

[Diagram: the 21064 with separate instruction and data caches, stream buffer, write buffer, and victim buffer.]

Page 104: Question: Who Cares About the Memory Hierarchy?

Alpha Memory Performance: Miss Rates of SPEC92

[Chart: miss rates (log scale, 0.01% to 100%) of the 8K I$, 8K D$, and 2M L2 for AlphaSort, Li, Compress, Ear, Tomcatv, and Spice. Representative points: I$ miss = 2%, D$ miss = 13%, L2 miss = 0.6%; I$ miss = 1%, D$ miss = 21%, L2 miss = 0.3%; I$ miss = 6%, D$ miss = 32%, L2 miss = 10%.]

Page 105: Question: Who Cares About the Memory Hierarchy?

Alpha CPI Components

[Chart: CPI (0 to 5) broken into components for AlphaSort, Espresso, Sc, Mdljsp2, Ear, Alvinn, and Mdljp2.]

• Instruction stall: branch mispredict (green)
• Data cache (blue); Instruction cache (yellow); L2$ (pink)
• Other: compute + register conflicts, structural conflicts

Page 106: Question: Who Cares About the Memory Hierarchy?

Pitfall: Predicting Cache Performance from Different Programs (ISA, compiler, ...)

• 4KB Data cache miss rate: 8%, 12%, or 28%?
• 1KB Instr cache miss rate: 0%, 3%, or 10%?
• Alpha vs. MIPS for 8KB Data $: 17% vs. 10%
• Why 2X Alpha v. MIPS?

[Chart: miss rate (0% to 35%) vs. cache size (1 KB to 128 KB) for instruction and data caches on tomcatv, gcc, and espresso.]

Page 107: Question: Who Cares About the Memory Hierarchy?

Pitfall: Simulating Too Small an Address Trace

[Chart: cumulative average memory access time (1 to 4.5) vs. instructions executed (0 to 12 billion); the AMAT keeps changing well into the run, so short traces mislead.]

I$ = 4 KB, B = 16B
D$ = 4 KB, B = 16B
L2 = 512 KB, B = 128B
MP = 12, 200

Page 108: Question: Who Cares About the Memory Hierarchy?
Page 109: Question: Who Cares About the Memory Hierarchy?
Page 110: Question: Who Cares About the Memory Hierarchy?
Page 111: Question: Who Cares About the Memory Hierarchy?
Page 112: Question: Who Cares About the Memory Hierarchy?
Page 113: Question: Who Cares About the Memory Hierarchy?
Page 114: Question: Who Cares About the Memory Hierarchy?
Page 115: Question: Who Cares About the Memory Hierarchy?
Page 116: Question: Who Cares About the Memory Hierarchy?
Page 117: Question: Who Cares About the Memory Hierarchy?
Page 118: Question: Who Cares About the Memory Hierarchy?
Page 119: Question: Who Cares About the Memory Hierarchy?
Page 120: Question: Who Cares About the Memory Hierarchy?
Page 121: Question: Who Cares About the Memory Hierarchy?
Page 122: Question: Who Cares About the Memory Hierarchy?
Page 123: Question: Who Cares About the Memory Hierarchy?
Page 124: Question: Who Cares About the Memory Hierarchy?

Another Level of Indirection

• Masking physical memory errors
• Copy on write
• Locality enhancement
• L2 cache management

Page 125: Question: Who Cares About the Memory Hierarchy?
Page 126: Question: Who Cares About the Memory Hierarchy?
Page 127: Question: Who Cares About the Memory Hierarchy?
Page 128: Question: Who Cares About the Memory Hierarchy?
Page 129: Question: Who Cares About the Memory Hierarchy?
Page 130: Question: Who Cares About the Memory Hierarchy?

DRAM-Based Cache

• Slow and needs refreshing
• Slow -> larger transistors
• Refreshing: no access means invalid
• Similar technique used in switch/router buffers

Page 131: Question: Who Cares About the Memory Hierarchy?

Memory Wall

• What if the CPU is so fast that we cannot even afford compulsory misses?
• Intelligent RAM (IRAM) or Processor-in-Memory
• Fast data copying within DRAM
• Memory-oriented architecture: assuming processing logic is abundant

Page 132: Question: Who Cares About the Memory Hierarchy?

Need for Error Correction!
• Motivation:
– Failures/time proportional to the number of bits!
– As DRAM cells shrink, they become more vulnerable
• Went through a period in which the failure rate was low enough without error correction that people didn’t do correction
– DRAM banks too large now
– Servers always corrected memory systems
• Basic idea: add redundancy through parity bits
– Simple but wasteful version:
  » Keep three copies of everything, vote to find the right value
  » 200% overhead, so not good!
– Common configuration: random error correction
  » SEC-DED (single error correct, double error detect)
  » One example: 64 data bits + 8 parity bits (11% overhead)
– Really want to handle failures of physical components as well
  » Organization is multiple DRAMs/SIMM, multiple SIMMs
  » Want to recover from a failed DRAM and a failed SIMM!
  » Requires more redundancy to do this
  » All major vendors thinking about this in high-end machines

Page 133: Question: Who Cares About the Memory Hierarchy?

• Tunneling Magnetic Junction RAM (TMJ-RAM):– Speed of SRAM, density of DRAM, non-

volatile (no refresh)– New field called “Spintronics”:

combination of quantum spin and electronics

– Same technology used in high-density disk-drives

• MEMs storage devices:– Large magnetic “sled” floating on top of

lots of little read/write heads– Micromechanical actuators move the sled

back and forth over the heads

More esoteric Storage Technologies?

Page 134: Question: Who Cares About the Memory Hierarchy?

• Tunneling Magnetic Junction RAM (TMJ-RAM)– Speed of SRAM, density of DRAM, non-volatile

(no refresh)– “Spintronics”: combination quantum spin and

electronics– Same technology used in high-density disk-drives

Something new: Structure of Tunneling Magnetic Junction

Page 135: Question: Who Cares About the Memory Hierarchy?

MEMS-based Storage
• Magnetic “sled” floats on an array of read/write heads
– Approx 250 Gbit/in^2
– Data rates: IBM: 250 MB/s with 1000 heads; CMU: 3.1 MB/s with 400 heads
• Electrostatic actuators move the media around to align it with the heads
– Sweep sled ±50 µm in < 0.5 µs
• Capacity estimated to be 1-10 GB in 10 cm^2

See Ganger et al: http://www.lcs.ece.cmu.edu/research/MEMS