
Cache Memories

Topics
- Generic cache memory organization
- Direct mapped caches
- Set associative caches
- Impact of caches on performance

Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware. They hold frequently accessed blocks of main memory.
The CPU looks first for data in L1, then in L2, then in main memory.
Typical bus structure:

[Diagram: the CPU chip contains the register file, ALU, and L1 cache; the bus interface connects over the cache bus to the L2 cache, over the system bus to the I/O bridge, and over the memory bus to main memory.]

Inserting an L1 Cache Between the CPU and Main Memory

[Diagram: main memory holds many 4-word blocks (e.g., block 10 = a b c d, block 21 = p q r s, block 30 = w x y z); the L1 cache holds two such blocks in lines 0 and 1; the register file holds four 4-byte words.]
The big, slow main memory has room for many 4-word blocks.
The small, fast L1 cache has room for two 4-word blocks.
The tiny, very fast CPU register file has room for four 4-byte words.
The transfer unit between the cache and main memory is a 4-word block (16 bytes).
The transfer unit between the CPU register file and the cache is a 4-byte block.

General Organization of a Cache Memory

[Diagram: the cache is an array of S = 2^s sets; each set holds E lines; each line has a valid bit, a t-bit tag, and a cache block of B = 2^b bytes (bytes 0, 1, ..., B-1).]
- B = 2^b bytes per cache block
- E lines per set
- S = 2^s sets
- t tag bits per line
- 1 valid bit per line
- Cache size: C = B x E x S data bytes
A cache is an array of sets. Each set contains one or more lines. Each line holds a block of data.

Addressing Caches

Address A (m bits, bit m-1 down to bit 0) is divided into three fields:

<tag> (t bits) | <set index> (s bits) | <block offset> (b bits)

[Diagram: the set index field of address A selects one of the S sets; each line in that set has a valid bit, a tag, and a B-byte block.]

The word at address A is in the cache if the tag bits in one of the <valid> lines in set <set index> match <tag>.

The word contents begin at offset <block offset> bytes from the beginning of the block.
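As a minimal sketch of how these fields can be pulled out of an address in software (the field widths below are assumptions for illustration, not values from the slides):

#include <stdio.h>

/* Assumed example parameters: 32-bit addresses, b = 5 (32-byte blocks),
 * s = 10 (1024 sets), so t = 32 - 10 - 5 = 17 tag bits. */
#define B_BITS 5
#define S_BITS 10

unsigned block_offset(unsigned a) { return a & ((1u << B_BITS) - 1); }
unsigned set_index(unsigned a)    { return (a >> B_BITS) & ((1u << S_BITS) - 1); }
unsigned tag_of(unsigned a)       { return a >> (B_BITS + S_BITS); }

int main(void)
{
    unsigned a = 0x12345678;   /* an arbitrary example address */
    printf("tag=0x%x set=%u offset=%u\n", tag_of(a), set_index(a), block_offset(a));
    return 0;
}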

Direct-Mapped Cache
The simplest kind of cache, characterized by exactly one line per set (E = 1).
[Diagram: sets 0 through S-1, each containing a single line with a valid bit, a tag, and a cache block.]

Accessing Direct-Mapped Caches
Set selection: use the set index bits to determine the set of interest.
[Diagram: the set index bits of the address (e.g., 00001) select one of sets 0 through S-1; the tag and block offset fields are used in the next step.]

Accessing Direct-Mapped Caches
Line matching and word selection:
- Line matching: find a valid line in the selected set with a matching tag.
- Word selection: then extract the word.
For the selected set (i):
(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2) hold, then it is a cache hit, and the block offset selects the starting byte.
[Diagram: an address with tag 0110, set index i, and block offset 100 matches a valid line with tag 0110; the offset selects the starting byte within the block (bytes 0-7 holding words w0-w3).]

Direct-Mapped Cache Simulation
M = 16 byte addresses, B = 2 bytes/block, S = 4 sets, E = 1 line/set.
Address fields: t = 1 tag bit, s = 2 set index bits, b = 1 block offset bit.
Address trace (reads, binary in brackets): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) Read 0 [0000]: miss; set 0 is loaded with tag 0 and block M[0-1] (m[0], m[1]).
(2) Read 1 [0001]: hit in set 0.
(3) Read 13 [1101]: miss; set 2 is loaded with tag 1 and block M[12-13] (m[12], m[13]).
(4) Read 8 [1000]: miss; set 0 is replaced with tag 1 and block M[8-9] (m[8], m[9]).
(5) Read 0 [0000]: miss; set 0 is replaced again with tag 0 and block M[0-1].

Why Use Middle Bits as Index?
High-order bit indexing:
- Adjacent memory lines would map to the same cache entry.
- Poor use of spatial locality.
Middle-order bit indexing:
- Consecutive memory lines map to different cache lines.
- Can hold a C-byte region of the address space in the cache at one time.
[Diagram: a 4-line cache indexed by the high-order two bits vs. the middle two bits of a 4-bit line address 0000-1111; a small demonstration follows below.]
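A minimal sketch of the difference (the cache geometry and address width are assumptions chosen only to make the mapping visible):

#include <stdio.h>

#define B_BITS 4                 /* 16-byte blocks */
#define S_BITS 2                 /* 4 sets */
#define ADDR_BITS 16             /* 16-bit addresses */

unsigned middle_index(unsigned a) { return (a >> B_BITS) & ((1u << S_BITS) - 1); }
unsigned high_index(unsigned a)   { return a >> (ADDR_BITS - S_BITS); }

int main(void)
{
    /* Walk through 8 consecutive blocks: middle-order indexing cycles through
     * all 4 sets, while high-order indexing maps them all to set 0. */
    for (unsigned blk = 0; blk < 8; blk++) {
        unsigned a = blk << B_BITS;   /* first byte of the block */
        printf("block %u: middle-index set %u, high-index set %u\n",
               blk, middle_index(a), high_index(a));
    }
    return 0;
}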

Set Associative Caches
Characterized by more than one line per set.
[Diagram: E = 2 lines per set; each of sets 0 through S-1 contains two lines, each with a valid bit, a tag, and a cache block.]

Accessing Set Associative Caches
Set selection: identical to a direct-mapped cache.
[Diagram: the set index bits of the address (e.g., 00001) select one set; every line in that set is a candidate for the match.]

Accessing Set Associative Caches
Line matching and word selection: must compare the tag in each valid line in the selected set.
(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2) hold, then it is a cache hit, and the block offset selects the starting byte.
[Diagram: the selected set (i) holds two valid lines with tags 1001 and 0110; the address tag 0110 matches the second line, and the block offset selects a byte within its block (bytes 0-7 holding words w0-w3).]
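A minimal software sketch of this lookup (the struct layout, block size, and associativity are illustrative assumptions; real hardware compares the tags in parallel):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define E 2                      /* lines per set: 2-way set associative */

struct line {
    bool     valid;
    uint32_t tag;
    uint8_t  block[32];          /* B = 32-byte cache block */
};

/* Return the matching line in the selected set, or NULL on a miss. */
struct line *lookup(struct line set[E], uint32_t addr_tag)
{
    for (int i = 0; i < E; i++) {
        /* (1) the valid bit must be set, and (2) the tags must match */
        if (set[i].valid && set[i].tag == addr_tag)
            return &set[i];      /* (3) hit: the block offset then selects the byte */
    }
    return NULL;                 /* miss */
}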

Multi-Level Caches
Options: separate data and instruction caches, or a unified cache.
[Diagram: the processor chip holds the registers, L1 d-cache, and L1 i-cache; they are backed by a unified L2 cache, main memory, and disk, each level larger, slower, and cheaper than the one above.]
Regs: ~200 B, 3 ns, 8 B line size
L1 d-cache / i-cache: 8-64 KB, 3 ns, 32 B line size
Unified L2 cache: 1-4 MB SRAM, 6 ns, $100/MB, 32 B line size
Main memory: 128 MB DRAM, 60 ns, $1.50/MB, 8 KB line (page) size
Disk: 30 GB, 8 ms, $0.05/MB

Intel Pentium Cache Hierarchy
L1 Data: 16 KB, 4-way associative, write-through, 32 B lines, 1-cycle latency
L1 Instruction: 16 KB, 4-way associative, 32 B lines
L2 Unified: 128 KB - 2 MB, 4-way associative, write-back, write-allocate, 32 B lines
Main memory: up to 4 GB

Cache Performance Metrics
Miss Rate
- Fraction of memory references not found in the cache (misses/references).
- Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
- Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache).
- Typical numbers: 1 clock cycle for L1; 3-8 clock cycles for L2.
Miss Penalty
- Additional time required because of a miss.
- Typically 25-100 cycles for main memory.
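As a rough worked example (the specific numbers are illustrative assumptions drawn from the typical ranges above, not measurements): with a 1-cycle hit time, a 5% miss rate, and a 50-cycle miss penalty, the average time per reference is 1 + 0.05 x 50 = 3.5 cycles, so even a small miss rate dominates the average.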

Writing Cache Friendly Code
- Repeated references to variables are good (temporal locality).
- Stride-1 reference patterns are good (spatial locality).
Examples (cold cache, 4-byte words, 4-word cache blocks):

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100%

The Memory Mountain
Read throughput (read bandwidth): the number of bytes read from memory per second (MB/s).
Memory mountain: measured read throughput as a function of spatial and temporal locality; a compact way to characterize memory system performance.

Memory Mountain Test Function

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                      /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);   /* call test(elems,stride) */
    return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
}

Memory Mountain Main Routine

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)  /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)  /* ... up to 8 MB */
#define MAXSTRIDE 16        /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];         /* The array we'll be traversing */

int main()
{
    int size;        /* Working set size (in bytes) */
    int stride;      /* Stride (in array elements) */
    double Mhz;      /* Clock frequency */

    init_data(data, MAXELEMS);   /* Initialize each element in data to 1 */
    Mhz = mhz(0);                /* Estimate the clock frequency */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

The Memory Mountain
[Plot: read throughput (MB/s, 0-1200) as a function of stride (1-16 words) and working set size (2 KB - 8 MB), measured on a 550 MHz Pentium III Xeon with a 16 KB on-chip L1 d-cache, a 16 KB on-chip L1 i-cache, and a 512 KB off-chip unified L2 cache. Ridges of temporal locality mark the L1, L2, and main memory regions; slopes of spatial locality run along the stride axis.]

Ridges of Temporal Locality
A slice through the memory mountain with stride = 1 illuminates the read throughputs of the different caches and of memory.
[Plot: read throughput (MB/s, 0-1200) vs. working set size (1 KB - 8 MB) at stride 1, showing distinct L1 cache, L2 cache, and main memory regions.]

A Slope of Spatial Locality
A slice through the memory mountain with size = 256 KB shows the cache block size.
[Plot: read throughput (MB/s, 0-800) vs. stride (1-16 words); throughput falls as the stride grows, leveling off once there is one access per cache line.]

Matrix Multiplication Example
Major cache effects to consider:
- Total cache size: exploit temporal locality and keep the working set small (e.g., by using blocking).
- Block size: exploit spatial locality.
Description:
- Multiply N x N matrices.
- O(N^3) total operations.
- Accesses: N reads per source element; N values summed per destination (but these may be held in a register).

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Variable sum is held in a register.

Miss Rate Analysis for Matrix Multiply
Assume:
- Line size = 32 B (big enough for four 64-bit words).
- Matrix dimension (N) is very large: approximate 1/N as 0.0.
- The cache is not even big enough to hold multiple rows.
Analysis method: look at the access pattern of the inner loop.
[Diagram: access patterns of the inner loop over A (indexed by i and k), B (indexed by k and j), and C (indexed by i and j).]

Layout of C Arrays in Memory (review)
C arrays are allocated in row-major order: each row occupies contiguous memory locations.
Stepping through the columns in one row:
    for (i = 0; i < N; i++)
        sum += a[0][i];
- accesses successive elements
- if block size (B) > 4 bytes, exploits spatial locality
- compulsory miss rate = 4 bytes / B
Stepping through the rows in one column:
    for (i = 0; i < n; i++)
        sum += a[i][0];
- accesses distant elements
- no spatial locality!
- compulsory miss rate = 1 (i.e., 100%)

Matrix Multiplication (ijk)

/* ijk */
for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed.
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (jik)

/* jik */
for (j=0; j<n; j++) {
    for (i=0; i<n; i++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed.
Misses per inner loop iteration: A = 0.25, B = 1.0, C = 0.0

Matrix Multiplication (kij)

/* kij */
for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise.
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (ikj)

/* ikj */
for (i=0; i<n; i++) {
    for (k=0; k<n; k++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise.
Misses per inner loop iteration: A = 0.0, B = 0.25, C = 0.25

Matrix Multiplication (jki)

/* jki */
for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise.
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Matrix Multiplication (kji)

/* kji */
for (k=0; k<n; k++) {
    for (j=0; j<n; j++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise.
Misses per inner loop iteration: A = 1.0, B = 0.0, C = 1.0

Summary of Matrix Multiplication

ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i=0; i<n; i++) {
    for (j=0; j<n; j++) {
        sum = 0.0;
        for (k=0; k<n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k=0; k<n; k++) {
    for (i=0; i<n; i++) {
        r = a[i][k];
        for (j=0; j<n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j=0; j<n; j++) {
    for (k=0; k<n; k++) {
        r = b[k][j];
        for (i=0; i<n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors; code scheduling matters, too.
[Plot: cycles per iteration (0-60) vs. array size n (25-400) for the kji, jki, kij, ikj, jik, and ijk versions.]

Improving Temporal Locality by Blocking
Example: blocked matrix multiplication.
"Block" (in this context) does not mean "cache block"; it means a sub-block within the matrix. Example: N = 8, sub-block size = 4.

[ A11 A12 ]   [ B11 B12 ]   [ C11 C12 ]
[ A21 A22 ] x [ B21 B22 ] = [ C21 C22 ]

C11 = A11*B11 + A12*B21    C12 = A11*B12 + A12*B22
C21 = A21*B11 + A22*B21    C22 = A21*B12 + A22*B22

Key idea: sub-blocks (i.e., the Axy) can be treated just like scalars.

Blocked Matrix Multiply (bijk)

for (jj=0; jj<n; jj+=bsize) {
    for (i=0; i<n; i++)
        for (j=jj; j < min(jj+bsize,n); j++)
            c[i][j] = 0.0;
    for (kk=0; kk<n; kk+=bsize) {
        for (i=0; i<n; i++) {
            for (j=jj; j < min(jj+bsize,n); j++) {
                sum = 0.0;
                for (k=kk; k < min(kk+bsize,n); k++) {
                    sum += a[i][k] * b[k][j];
                }
                c[i][j] += sum;
            }
        }
    }
}

Blocked Matrix Multiply Analysis
The innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and accumulates the result into a 1 x bsize sliver of C.
The loop over i steps through n row slivers of A and C, using the same block of B.
[Diagram: the row sliver of A is accessed bsize times, the block of B is reused n times in succession, and successive elements of the C sliver are updated.]

Innermost loop pair:

for (i=0; i<n; i++) {
    for (j=jj; j < min(jj+bsize,n); j++) {
        sum = 0.0;
        for (k=kk; k < min(kk+bsize,n); k++) {
            sum += a[i][k] * b[k][j];
        }
        c[i][j] += sum;
    }
}

Pentium Blocked Matrix Multiply Performance
Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik), and is relatively insensitive to array size.
[Plot: cycles per iteration (0-60) vs. array size n for kji, jki, kij, ikj, jik, ijk, bijk (bsize = 25), and bikj (bsize = 25).]

Concluding Observations
The programmer can optimize for cache performance:
- How data structures are organized.
- How data are accessed: nested loop structure; blocking is a general technique.
All systems favor "cache friendly code":
- Getting the absolute optimum performance is very platform specific (cache sizes, line sizes, associativities, etc.).
- You can get most of the advantage with generic code: keep the working set reasonably small (temporal locality) and use small strides (spatial locality).

Direct Mapping
Each block of main memory maps to only one cache line, i.e., if a block is in the cache, it must be in one specific place.
The address is in two parts:
- The least significant w bits identify a unique word.
- The most significant s bits specify one memory block.
The MSBs are further split into a cache line field of r bits and a tag of s-r bits (the most significant bits).

Example:
- Each block has 16 words (4-bit word address).
- The entire memory is 4096 x 16 = 64K words, divided into 4096 blocks.
- The cache has only 128 blocks.
- The mapping is fixed; each cache block maps to 32 different MM blocks (128 x 32 = 4096):
  cache block 0 can only contain MM blocks 0, 128, 256, ...
  cache block 1 can only contain MM blocks 1, 129, 257, ...
The main memory address can be thought of as composed of tag, cache block number, and word address fields. The top 5 bits of the MM address (the tag) determine which one of the MM blocks that can map to the given cache block is meant.

tag (5 bits) | cache block number (7 bits) | word (4 bits)

e.g. MM address = 00011 0001001 0011
- The cache block number 0001001 means we access cache block 9.
- Cache block 9 can hold MM blocks 9, 137, 265, 393, ...
- The tag x = 3 selects block number 3 in that list (remember indexing starts from 0), which is MM block 393.
- The word field 0011 gives word address 3.
Therefore, 00011 0001001 0011 refers to word 3 of MM block 393.
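A minimal sketch of this decoding in software (the field widths match the example above; the code itself is illustrative and not part of the original slides):

#include <stdio.h>

#define WORD_BITS  4                        /* 16 words per block */
#define BLOCK_BITS 7                        /* 128 cache blocks */
#define NUM_CACHE_BLOCKS (1 << BLOCK_BITS)

int main(void)
{
    unsigned addr = 0x1893;                 /* 00011 0001001 0011 in binary */
    unsigned word = addr & ((1u << WORD_BITS) - 1);
    unsigned cblk = (addr >> WORD_BITS) & ((1u << BLOCK_BITS) - 1);
    unsigned tag  = addr >> (WORD_BITS + BLOCK_BITS);
    unsigned mm   = tag * NUM_CACHE_BLOCKS + cblk;   /* the MM block referred to */

    printf("tag=%u cache block=%u word=%u -> MM block %u\n", tag, cblk, word, mm);
    return 0;
}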

How do we know whether the memory block we want is in the cache or not?
Keep a table somewhere (in the memory management unit, in some fast memory device).
The table has the following structure: 128 entries (one per cache block), where each entry records the tag number of the MM block currently mapped to that cache block.

 

Cache Table

cache block number   current tag (0-31)   explanation
0                    1                    maps to MM blocks 0, 128, 256, ...
1                    3                    maps to MM blocks 1, 129, 257, 385, ...
...                  ...                  ...
126                  31                   the 31st among the MM blocks that can be mapped onto this cache block
127                  25                   the 25th among the MM blocks that can be mapped onto this cache block

So when a memory address is given, we look up the table entry for the cache block number and check whether the tag equals what is in the table. If it is equal, the wanted memory block is available in the cache, and we can easily compute the cache address.

Otherwise, we have to access main memory, replace the content of the cache block with the main memory block we want, and update the table.

Here there is no choice of replacement policy, because there is only one candidate for eviction.

Mapping Function
Example:
- Cache of 64 KByte.
- Cache block of 4 bytes, i.e. the cache has 16K (2^14) lines of 4 bytes.
- 16 MBytes of main memory.
- 24-bit address (2^24 = 16M).

Direct Mapping Address Structure

Tag (s-r)   Line or Slot (r)   Word (w)
8 bits      14 bits            2 bits

- 24-bit address.
- 2-bit word identifier (4-byte block).
- 22-bit block identifier: an 8-bit tag (= 22-14) and a 14-bit slot or line.
No two blocks that map to the same line have the same tag field.
Check the contents of the cache by finding the line and checking the tag.

Direct Mapping Cache Line Table

Cache line   Main memory blocks held
0            0, m, 2m, 3m, ..., 2^s - m
1            1, m+1, 2m+1, ..., 2^s - m + 1
...          ...
m-1          m-1, 2m-1, 3m-1, ..., 2^s - 1

Direct Mapping Cache Organization

Direct Mapping Pros & Cons
- Simple.
- Inexpensive.
- Fixed location for a given block: if a program repeatedly accesses 2 blocks that map to the same line, cache misses are very high.

Associative Mapping
- A main memory block can load into any line of the cache.
- The memory address is interpreted as a tag and a word.
- The tag uniquely identifies a block of memory.
- Every line's tag is examined for a match.
- Cache searching gets expensive.

Cache Table

cache block   MM block number (0-4095)   explanation
0             4000                       holds the 4000th MM block
1             0                          holds the 0th MM block
...           ...                        ...
126           2345                       holds the 2345th MM block
127           566                        holds the 566th MM block

Address
The main memory address is composed of only the tag number and the word address. Here the tag number is simply the MM block number.
When an MM address is given, we must search the whole table to see if the given tag (MM block number) is present in the table. If it is not present, there is a cache miss and we replace some cache block with the one we want. If there is a hit, we easily compute the cache address and use it.
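A minimal software sketch of this lookup (a simple linear search over an assumed 128-entry table; real associative hardware does all the comparisons in parallel, as described below):

#define NUM_CACHE_BLOCKS 128
#define EMPTY (-1)

/* table[i] holds the MM block number cached in cache block i, or EMPTY. */
static int table[NUM_CACHE_BLOCKS];

void init_table(void)
{
    for (int i = 0; i < NUM_CACHE_BLOCKS; i++)
        table[i] = EMPTY;
}

/* Return the cache block holding mm_block, or -1 on a cache miss. */
int find_block(int mm_block)
{
    for (int i = 0; i < NUM_CACHE_BLOCKS; i++)
        if (table[i] == mm_block)
            return i;
    return -1;   /* miss: the caller must choose a block to replace */
}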

Associative Memory Device
- Searching the whole table is a slow process.
- Ordinary memory works by returning the data stored at a given address. Associative memory works in the inverse way: given a data value, it returns the address at which that data is stored.
- Associative memory compares the given value against all the data it contains using a large set of comparators in parallel, and returns the address where the value is found. We assume the stored data are unique, which is the case for cache tables.
- Associative memory is very expensive, but it is obviously faster at figuring out whether a given MM block number is present in the cache table.
- So instead of searching the whole table, we simply supply the MM block number: if an address is returned, the MM block resides in the cache; if some invalid value is returned, there is a cache miss and we must do a replacement.

Fully Associative Cache Organization

Set Associative Mapping
- The cache is divided into a number of sets.
- Each set contains a number of lines.
- A given block maps to any line in a given set, e.g. block B can be in any line of set i.
- e.g. 2 lines per set: 2-way associative mapping; a given block can be in one of 2 lines in only one set.
Each set of the cache has a direct mapping (e.g. cache set 0 is mapped to main memory blocks 0, 64, 128, ...). In the example above, 2 of the main memory blocks belonging to one set can be in the cache at the same time, and we can use associative memory to select among the main memory blocks within a given set. Given a main memory address, the high-order bits determine the tag number, the middle-order bits determine the set number, and the low-order bits determine the offset within a given block.

Two Way Set Associative Cache Organization

Cache Coherence Problem
The problem of updating main memory on writes to the cache (there are several ways to deal with this, such as write-through, copy-back, etc.).

Replacement Algorithms (1): Direct Mapping
- No choice.
- Each block maps to only one line.
- Replace that line.

Replacement Algorithms (2): Associative & Set Associative
- Hardware-implemented algorithm (for speed).
- Least recently used (LRU): e.g. in a 2-way set associative cache, which of the 2 blocks is LRU?
- First in first out (FIFO): replace the block that has been in the cache longest.
- Least frequently used (LFU): replace the block that has had the fewest hits.
- Random.

Write Policy
- Must not overwrite a cache block unless main memory is up to date.
- Multiple CPUs may have individual caches.
- I/O may address main memory directly.

Write Through
- All writes go to main memory as well as to the cache.
- Multiple CPUs can monitor main memory traffic to keep the local (per-CPU) cache up to date.
- Lots of traffic.
- Slows down writes.
- Remember bogus write-through caches!

Write Back
- Updates are initially made in the cache only.
- The update bit for the cache slot is set when an update occurs.
- If a block is to be replaced, write it to main memory only if the update bit is set.
- Other caches can get out of sync.
- I/O must access main memory through the cache.
- N.B. about 15% of memory references are writes.

Replacement
When more than one cache block can be chosen for replacement by a newly accessed memory block, we must choose a block in a way that minimizes the overall overhead of accessing memory. That is, depending on the strategy we use, we may increase or decrease the traffic between the cache and main memory.

One of the most popular strategies is LRU (least recently used). This strategy selects, among the valid candidates, the block that has been accessed least recently (i.e., a block that has not been accessed for a long time). The underlying assumption is that blocks that were accessed recently will be used again, so keep them.

LRU
Associate a counter with each cache block that records how long ago the cache block was last accessed. The algorithm (although it will most probably be implemented in hardware) looks like:
- On a cache hit: set the counter of the block that was hit to zero, and increase by 1 the counter of every other block whose value was less than the original value of the block that was just hit.
- On a cache miss when the cache is full: remove the cache block with the maximum counter value, set the new memory block's counter to zero, and increase all other blocks' counters by 1.
- On a cache miss when the cache is not full: set the new memory block's counter to zero and increase all other blocks' counters by 1.
A worked trace and a small code sketch follow below.

Example (counters shown as initial value / after step 1 / after step 2):

cache block number   counter
0                    2 / 3 / 4
1                    3 / 0 / 1
2                    4 / 4 / 0
3                    1 / 2 / 3
4                    0 / 1 / 2

Step 1: access cache block 1 (a hit); its counter goes to 0 and the "younger" blocks age by 1.
Step 2: cache miss; replace what is in cache block #2, since it has the maximum counter.
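A minimal software sketch of this counter scheme, following the rules above (in real hardware this would be parallel logic; the table size and block-number encoding are assumptions for illustration):

#define NBLOCKS 5

static int counter[NBLOCKS];   /* age counter per cache block */
static int mmblock[NBLOCKS];   /* MM block held by each cache block, -1 if empty */

void lru_init(void)
{
    for (int i = 0; i < NBLOCKS; i++) {
        mmblock[i] = -1;
        counter[i] = 0;
    }
}

/* Record an access to MM block 'want'; return the cache block used. */
int lru_access(int want)
{
    int i, victim = 0;

    for (i = 0; i < NBLOCKS; i++)
        if (mmblock[i] == want) {             /* cache hit */
            int old = counter[i];
            for (int j = 0; j < NBLOCKS; j++)
                if (j != i && counter[j] < old)
                    counter[j]++;             /* age only the "younger" blocks */
            counter[i] = 0;
            return i;
        }

    /* cache miss: use an empty block if there is one, else evict the oldest */
    for (i = 0; i < NBLOCKS; i++)
        if (mmblock[i] < 0) { victim = i; break; }
    if (i == NBLOCKS)
        for (i = 0; i < NBLOCKS; i++)
            if (counter[i] > counter[victim]) victim = i;

    for (i = 0; i < NBLOCKS; i++)
        if (i != victim) counter[i]++;        /* age every other block */
    mmblock[victim] = want;
    counter[victim] = 0;
    return victim;
}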

FIFO
- On a cache hit: do nothing.
- On a cache miss: remove the cache block with the maximum counter value, set the new block's counter to zero, and increase all other blocks' counters by 1.

Miss Penalty
The overhead or penalty caused by a cache miss: the memory block transfer, the table update, and the interrupted program execution (the program stalls on a cache miss and may have to sit waiting for memory management service).

Transferring a block of data (8 words) on a cache miss:

Using a normal memory system:
- main memory access needs 8 cycles per word; the cache needs 1 cycle per word
- total cycles needed = 8 * 8 + 8 = 72 cycles

Using memory interleaving (4 modules):
- after the first address arrives (1 cycle), the data from all modules are available after 8 cycles
- cache update: 1 cycle per word
- total cycles needed = 1 + 8 + 8 ~ 17 cycles

Example: average memory access time with two cache levels.
- Primary cache hit rate = secondary cache hit rate (h1 = h2); cache block = 8 words.
- For instructions the hit rate is 0.95; for data the hit rate is 0.9.
- (Note that this does NOT mean that instructions are in the primary cache and data in the secondary cache; each cache simply has the same average hit rate regardless of its content. When the content happens to be an instruction, there is a 0.95 probability that it is in the cache, and so on.)
- Primary cache access needs 1 cycle = C1.
- Secondary cache access needs 10 cycles = C2.
- Main memory access needs (see above): (1) M = 17 cycles when memory interleaving is used; (2) M = 45 cycles when memory interleaving is not used.

Average access time:
t = h1*C1 + (1-h1)*h2*C2 + (1-h1)*(1-h2)*M
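As a worked evaluation of this formula using the numbers above (instruction accesses, h1 = h2 = 0.95, C1 = 1, C2 = 10, and M = 17 with interleaving):

t = 0.95*1 + 0.05*0.95*10 + 0.05*0.05*17
  = 0.95 + 0.475 + 0.0425
  ~ 1.47 cycles per access

For data accesses (h1 = h2 = 0.9) the same formula gives 0.9 + 0.9 + 0.17 = 1.97 cycles per access.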