
CPE 631 Caches

Electrical and Computer Engineering
University of Alabama in Huntsville

Aleksandar Milenkovic, milenka@ece.uah.edu

http://www.ece.uah.edu/~milenka


Outline

Memory Hierarchy
Four Questions for Memory Hierarchy
Cache Performance:
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.


Processor-DRAM Latency Gap

[Figure: performance (log scale, 1 to 1000) vs. year, 1980-2000. Processor performance doubles every 1.5 years; DRAM (memory) performance doubles every 10 years.]

Processor-Memory Performance Gap grows 50% / year


Solution: The Memory Hierarchy (MH)

Levels in the memory hierarchy, from upper (closest to the processor: control, datapath) to lower:
Speed: fastest -> slowest
Capacity: smallest -> biggest
Cost/bit: highest -> lowest

The user sees as much memory as is available in the cheapest technology and accesses it at the speed offered by the fastest technology


Generations of Microprocessors

Time of a full cache miss in instructions executed:
1st Alpha: 340 ns / 5.0 ns = 68 clks x 2, or 136 instructions
2nd Alpha: 266 ns / 3.3 ns = 80 clks x 4, or 320 instructions
3rd Alpha: 180 ns / 1.7 ns = 108 clks x 6, or 648 instructions
1/2x the latency x 3x the clock rate x 3x the instructions per clock => 5x


Why Does a Hierarchy Work?

Principle of locality

Temporal locality: recently accessed items are likely to be accessed in the near future => keep them close to the processor

Spatial locality: items whose addresses are near one another tend to be referenced close together in time => move blocks consisting of contiguous words to the upper level

Rule of thumb:Programs spend 90% of their execution time in only 10% of code

[Figure: probability of reference vs. address space]


Cache Measures

Hit: data appears in some block in the upper level (Block X)
- Hit Rate: the fraction of memory accesses found in the upper level
- Hit Time: time to access the upper level (RAM access time + time to determine hit/miss)
Miss: data needs to be retrieved from the lower level (Block Y)
- Miss Rate: 1 - (Hit Rate)
- Miss Penalty: time to replace a block in the upper level + time to retrieve the block from the lower level
Average memory access time = Hit time + Miss rate x Miss penalty (ns or clocks)
Hit time << Miss penalty

[Figure: blocks moving between the upper-level memory (Block X) and the lower-level memory (Block Y), to/from the processor]
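A quick worked example with assumed numbers: if hit time = 1 clock cycle, miss rate = 5%, and miss penalty = 100 clock cycles, then AMAT = 1 + 0.05 x 100 = 6 clock cycles.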


Levels of the Memory Hierarchy

Capacity, access time, and cost per level (upper levels are faster; lower levels are larger):

Level        Capacity      Access Time            Cost           Staging Xfer Unit             Managed by
Registers    100s bytes    < 1 ns                                instr. operands (1-8 bytes)   prog./compiler
Cache        10s-100s KB   1-10 ns                $10/MByte      blocks (8-128 bytes)          cache controller
Main memory  MBytes        100-300 ns             $1/MByte       pages (512-4K bytes)          OS
Disk         10s GB        10 ms (10,000,000 ns)  $0.0031/MByte  files (MBytes)                user/operator
Tape         infinite      sec-min                $0.0014/MByte


Four Questions for the Memory Hierarchy

Q#1: Where can a block be placed in the upper level? (Block placement)
- direct-mapped, fully associative, set-associative

Q#2: How is a block found if it is in the upper level? (Block identification)

Q#3: Which block should be replaced on a miss? (Block replacement)
- Random, LRU (Least Recently Used)

Q#4: What happens on a write? (Write strategy)
- Write-through vs. write-back
- Write allocate vs. no-write allocate


Direct-Mapped Cache

In a direct-mapped cache, each memory address is associated with one possible block within the cache; therefore, we only need to look in a single location in the cache for the data, if it exists in the cache.

A block is the unit of transfer between cache and memory.


Q1: Where can a block be placed in the upper level?

Block 12 placed in an 8-block cache:
- Fully associative: block 12 can go anywhere
- Direct mapped: block 12 can go only into block (12 mod 8) = 4
- 2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0
Set-associative mapping: set = block number modulo number of sets

[Figure: an 8-block cache under each placement policy, and a 32-block memory with block 12 highlighted]


Direct-Mapped Cache (cont’d)

[Figure: a 16-byte memory (addresses 0-F) mapping onto a 4-byte direct-mapped cache (cache indices 0-3)]


Direct-Mapped Cache (cont’d)

Since multiple memory addresses map to the same cache index, how do we tell which one is in there? And what if we have a block size > 1 byte? Result: divide the memory address into three fields:

tttttttttttttttttt iiiiiiiiii oooo

- TAG: to check if we have the correct block
- INDEX: to select the block
- OFFSET: to select the byte within the block
The tag and index together form the block address.


Direct-Mapped Cache Terminology

INDEX: specifies the cache index (which “row” of the cache we should look in)

OFFSET: once we have found correct block, specifies which byte within the block we want

TAG: the remaining bits after offset and index are determined; these are used to distinguish between all the memory addresses that map to the same location

BLOCK ADDRESS: TAG + INDEX


Direct-Mapped Cache Example

Conditions:
- 32-bit architecture (word = 32 bits), address unit is the byte
- 8KB direct-mapped cache with 4-word blocks

Determine the size of the Tag, Index, and Offset fields:
- OFFSET (specifies the correct byte within the block): a cache block contains 4 words = 16 (2^4) bytes => 4 bits
- INDEX (specifies the correct row in the cache): cache size is 8KB = 2^13 bytes, a cache block is 2^4 bytes; #rows in the cache (1 block = 1 row): 2^13 / 2^4 = 2^9 => 9 bits
- TAG: memory address length - offset - index = 32 - 4 - 9 = 19 => the tag is the leftmost 19 bits
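As a sketch in C, the field extraction for this cache looks like the following (the function names and example address are illustrative, not from the slides):

/* 8KB direct-mapped cache with 16-byte blocks:
   4 offset bits, 9 index bits, 19 tag bits. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 4
#define INDEX_BITS  9

static uint32_t offset_of(uint32_t a) { return a & ((1u << OFFSET_BITS) - 1); }
static uint32_t index_of(uint32_t a)  { return (a >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
static uint32_t tag_of(uint32_t a)    { return a >> (OFFSET_BITS + INDEX_BITS); }

int main(void) {
    uint32_t a = 0x12345678;  /* arbitrary example address */
    printf("tag=0x%x index=%u offset=%u\n",
           (unsigned)tag_of(a), (unsigned)index_of(a), (unsigned)offset_of(a));
    return 0;
}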


1 KB Direct Mapped Cache, 32B blocks

For a 2**N byte cache: the uppermost (32 - N) bits are always the Cache Tag; the lowest M bits are the Byte Select (Block Size = 2**M)

[Figure: a 1KB direct-mapped cache with 32B blocks. The address splits into Cache Tag (ex: 0x50), Cache Index (ex: 0x01), and Byte Select (ex: 0x00). Each cache entry holds a Valid Bit, a Cache Tag stored as part of the cache “state”, and 32 bytes of Cache Data (Byte 0 ... Byte 1023 across the 32 rows)]


Two-way Set Associative Cache

N-way set associative: N entries for each Cache Index
- N direct-mapped caches operating in parallel (N typically 2 to 4)
Example: two-way set associative cache
- The Cache Index selects a “set” from the cache
- The two tags in the set are compared in parallel
- Data is selected based on the tag comparison result

[Figure: two banks of valid/tag/data arrays indexed by the Cache Index; two tag comparators (Compare vs. Adr Tag) feed an OR gate for Hit and drive a 2:1 mux (Sel1/Sel0) that selects the Cache Block]


Disadvantage of Set Associative Cache

N-way set associative cache vs. direct-mapped cache:
- N comparators vs. 1
- Extra MUX delay for the data
- Data comes AFTER Hit/Miss
In a direct-mapped cache, the cache block is available BEFORE Hit/Miss: it is possible to assume a hit and continue, and recover later on a miss.


Q2: How is a block found if it is in the upper level?

Tag on each block: no need to check the index or block offset
Increasing associativity shrinks the index and expands the tag

Address layout: [ Tag | Index | Block Offset ], where Tag + Index = Block Address


Q3: Which block should be replaced on a miss?

Easy for direct mapped. For set associative or fully associative:
- Random
- LRU (Least Recently Used)

Miss rates, LRU vs. Random:
Assoc:    2-way          4-way          8-way
Size      LRU    Ran     LRU    Ran     LRU    Ran
16 KB     5.2%   5.7%    4.7%   5.3%    4.4%   5.0%
64 KB     1.9%   2.0%    1.5%   1.7%    1.4%   1.5%
256 KB    1.15%  1.17%   1.13%  1.13%   1.12%  1.12%
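For 2-way set associativity, LRU is especially cheap: one bit per set suffices. A minimal C sketch (the array size and names are illustrative assumptions):

/* lru[s] holds the least-recently-used way (0 or 1) of set s.
   On an access to way w, the other way becomes least recently used. */
static unsigned char lru[512];                    /* assumed number of sets */

void touch(unsigned set, unsigned way) { lru[set] = 1u - way; }
unsigned victim(unsigned set)          { return lru[set]; }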


Q4: What happens on a write?

Write through: the information is written to both the block in the cache and the block in the lower-level memory.

Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced. A dirty bit records whether the block is clean or dirty.

Pros and cons of each?
- WT: read misses cannot result in writes
- WB: no repeated writes to the same location
WT is always combined with write buffers so that the CPU doesn’t wait for the lower-level memory.


Write stall in write through caches

When the CPU must wait for writes to complete during write through, the CPU is said to write stall

Common optimization => Write buffer which allows the processor to continue as soon as the data is written to the buffer, thereby overlapping processor execution with memory updating

However, write stalls can occur even with write buffer (when buffer is full)


Write Buffer for Write Through

A write buffer is needed between the cache and memory:
- Processor: writes data into the cache and the write buffer
- Memory controller: writes the contents of the buffer to memory
The write buffer is just a FIFO (typical number of entries: 4)
- Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
- Memory system designer’s nightmare: store frequency (w.r.t. time) -> 1 / DRAM write cycle => write buffer saturation

[Figure: Processor -> Cache, with a Write Buffer draining to DRAM]
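A minimal sketch of such a FIFO write buffer in C (the depth matches the slide; the field and function names are assumptions):

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4

struct wb_entry { uint32_t addr; uint32_t data; };
static struct wb_entry wb[WB_ENTRIES];
static int wb_head, wb_count;

/* Processor side: returns false (write stall) when the buffer is full. */
bool wb_enqueue(uint32_t addr, uint32_t data) {
    if (wb_count == WB_ENTRIES) return false;
    wb[(wb_head + wb_count) % WB_ENTRIES] = (struct wb_entry){ addr, data };
    wb_count++;
    return true;
}

/* Memory-controller side: drains one entry per DRAM write cycle. */
bool wb_dequeue(struct wb_entry *out) {
    if (wb_count == 0) return false;
    *out = wb[wb_head];
    wb_head = (wb_head + 1) % WB_ENTRIES;
    wb_count--;
    return true;
}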


What to do on a write-miss?

Write allocate (or fetch on write): the block is loaded on a write miss, followed by the write-hit actions

No-write allocate (or write around): the block is modified in memory and not loaded into the cache

Although either write-miss policy can be used with write through or write back, write-back caches generally use write allocate and write-through caches often use no-write allocate


An Example: The Alpha 21264 Data Cache (64KB, 64-byte blocks, 2w)

[Figure: the 44-bit physical address splits into Tag <29>, Index <9>, and Offset <6>. Each of the two ways holds Valid <1>, Tag <29>, and Data <512>; a comparator (=?) and an 8:1 mux per way select the word, a 2:1 mux picks between the ways, and a write buffer sits between the cache and the lower-level memory (CPU address, data in, data out)]

Check on the field sizes: 64KB / 64-byte blocks = 1024 blocks; 2 ways => 512 sets => 9 index bits; 64-byte blocks => 6 offset bits; 44 - 9 - 6 = 29 tag bits.


Cache Performance

Hit Time = time to find and retrieve data from current level cache

Miss Penalty = average time to retrieve data on a current level miss (includes the possibility of misses on successive levels of memory hierarchy)

Hit Rate = % of requests that are found in current level cache

Miss Rate = 1 - Hit Rate


Cache Performance (cont’d)

Average memory access time (AMAT)

AMAT = Hit time + Miss rate x Miss penalty
     = %instructions x (Hit time_Inst + Miss rate_Inst x Miss penalty_Inst)
     + %data x (Hit time_Data + Miss rate_Data x Miss penalty_Data)


An Example: Unified vs. Separate I&D

Compare 2 design alternatives (ignore L2 caches):
- 16KB I + 16KB D: instruction misses = 3.82 per 1K instructions, data misses = 40.9 per 1K instructions
- 32KB unified: misses = 43.3 per 1K instructions

Assumptions:
- ld/st frequency is 36% => 74% of accesses are instruction references (1.0/1.36)
- hit time = 1 clock cycle, miss penalty = 100 clock cycles
- a data hit incurs 1 stall cycle for the unified cache (only one port)

[Figure: split L1 I-cache and D-cache vs. a unified L1 cache, each backed by a unified L2 cache]


Unified vs. Separate I&D (cont’d)

Miss rate (L1I) = (# L1I misses) / IC
  # L1I misses = (L1I misses per 1K) x (IC/1000)
  Miss rate (L1I) = 3.82/1000 = 0.0038

Miss rate (L1D) = (# L1D misses) / (# mem. refs)
  # L1D misses = (L1D misses per 1K) x (IC/1000)
  Miss rate (L1D) = 40.9 x (IC/1000) / (0.36 x IC) = 0.1136

Miss rate (L1U) = (# L1U misses) / (IC + # mem. refs)
  # L1U misses = (L1U misses per 1K) x (IC/1000)
  Miss rate (L1U) = 43.3 x (IC/1000) / (1.36 x IC) = 0.0318


Unified vs. Separate I&D (cont’d)

AMAT (split) = (% instr.) x (hit time + L1I miss rate x miss penalty) + (% data) x (hit time + L1D miss rate x miss penalty)
  = 0.74 x (1 + 0.0038 x 100) + 0.26 x (1 + 0.1136 x 100) = 4.2348 clock cycles

AMAT (unified) = (% instr.) x (hit time + L1U miss rate x miss penalty) + (% data) x (1 + hit time + L1U miss rate x miss penalty)
  = 0.74 x (1 + 0.0318 x 100) + 0.26 x (1 + 1 + 0.0318 x 100) = 4.44 clock cycles
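The arithmetic above can be checked with a short C program (a sketch; the constants are the example’s):

#include <stdio.h>

int main(void) {
    const double hit = 1.0, penalty = 100.0;
    const double fi = 0.74, fd = 0.26;          /* instruction/data mix    */
    const double mr_i = 0.0038, mr_d = 0.1136;  /* split-cache miss rates  */
    const double mr_u = 0.0318;                 /* unified-cache miss rate */

    double split = fi * (hit + mr_i * penalty) + fd * (hit + mr_d * penalty);
    double unif  = fi * (hit + mr_u * penalty) + fd * (hit + 1.0 + mr_u * penalty);

    printf("AMAT split   = %.4f cc\n", split);  /* prints 4.2348 */
    printf("AMAT unified = %.4f cc\n", unif);   /* prints 4.4400 */
    return 0;
}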


AMAT and Processor Performance

Miss-oriented approach to memory access (CPI_Exec includes ALU and memory instructions):

CPU time = IC x (CPI_Exec + MemAccess/Inst x Miss rate x Miss penalty) / Clock rate

CPU time = IC x (CPI_Exec + MemMisses/Inst x Miss penalty) / Clock rate
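Worked example (assumed numbers): with CPI_Exec = 1.0, 1.5 memory accesses per instruction, a 2% miss rate, and a 100-cc miss penalty, CPI = 1.0 + 1.5 x 0.02 x 100 = 4.0, so memory stalls triple the base CPI.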


AMAT and Processor Performance (cont’d)

Separating out the memory component entirely (AMAT = average memory access time; CPI_ALUOps does not include memory instructions):

CPU time = IC x (ALUops/Inst x CPI_ALUOps + MemAccess/Inst x AMAT) / Clock rate

AMAT = Hit time + Miss rate x Miss penalty
     = %instructions x (Hit time_Inst + Miss rate_Inst x Miss penalty_Inst)
     + %data x (Hit time_Data + Miss rate_Data x Miss penalty_Data)


Summary: Caches

The Principle of Locality: programs access a relatively small portion of the address space at any instant of time
- Temporal locality: locality in time
- Spatial locality: locality in space

Three major categories of cache misses:
- Compulsory misses: sad facts of life (example: cold-start misses)
- Capacity misses: increase cache size
- Conflict misses: increase cache size and/or associativity

Write policy:
- Write through: needs a write buffer
- Write back: control can be complex

Today CPU time is a function of (ops, cache misses) vs. just f(ops): what does this mean to compilers, data structures, algorithms?


Summary: The Cache Design Space

Several interacting dimensions:
- cache size
- block size
- associativity
- replacement policy
- write-through vs. write-back

The optimal choice is a compromise:
- depends on access characteristics (workload; use as I-cache, D-cache, TLB)
- depends on technology / cost

Simplicity often wins

[Figure: design-space sketch; for a given factor, less or more of cache size, block size, or associativity can be good or bad]


How to Improve Cache Performance?

Cache optimizations:
1. Reduce the miss rate
2. Reduce the miss penalty
3. Reduce the time to hit in the cache

AMAT = Hit time + Miss rate x Miss penalty


Where Do Misses Come From?

Classifying misses: the 3 Cs
- Compulsory: the first access to a block cannot be in the cache, so the block must be brought into the cache. Also called cold-start misses or first-reference misses. (Misses in even an infinite cache)
- Capacity: if the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses in a fully associative cache of size X)
- Conflict: if the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory & capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way associative cache of size X)

More recently, a 4th “C”: Coherence, i.e., misses caused by cache coherence.


3Cs Absolute Miss Rate (SPEC92)

[Figure: 3Cs absolute miss rate (0 to 0.14) vs. cache size (1KB to 128KB) for 1-way, 2-way, 4-way, and 8-way associativity, stacked into compulsory, capacity, and conflict components]

Conflict misses by associativity:
- 8-way: conflict misses due to going from fully associative to 8-way assoc.
- 4-way: conflict misses due to going from 8-way to 4-way assoc.
- 2-way: conflict misses due to going from 4-way to 2-way assoc.
- 1-way: conflict misses due to going from 2-way to 1-way assoc. (direct mapped)


3Cs Relative Miss Rate

[Figure: the same data normalized: relative miss rate per type (0% to 100%) vs. cache size (1KB to 128KB), for 1-way through 8-way associativity]


Cache Organization?

Assume the total cache size is not changed. What happens if we:
- change the block size
- change the cache size
- change the cache internal organization
- change the associativity
- change the compiler
Which of the 3Cs is obviously affected?


1st Miss Rate Reduction Technique: Larger Block Size

[Figure: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K to 256K; larger blocks reduce compulsory misses, but in the small caches the largest blocks increase conflict misses]


1st Miss Rate Reduction Technique: Larger Block Size (cont’d)

Example: the memory system takes 40 clock cycles of overhead and then delivers 16 bytes every 2 clock cycles; hit time is 1 cc. Given miss rate vs. block size (first table), what is the AMAT? AMAT = Hit time + Miss rate x Miss penalty

Miss rate (%) by block size (BS) and cache size:
BS     1K     4K    16K   64K   256K
16     15.05  8.57  3.94  2.04  1.09
32     13.34  7.24  2.87  1.35  0.70
64     13.76  7.00  2.64  1.06  0.51
128    16.64  7.78  2.77  1.02  0.49
256    22.01  9.51  3.29  1.15  0.49

AMAT (cc) by block size; MP = miss penalty (cc):
BS   MP   1K     4K    16K   64K   256K
16   42   7.32   4.60  2.66  1.86  1.46
32   44   6.87   4.19  2.26  1.59  1.31
64   48   7.61   4.36  2.27  1.51  1.25
128  56   10.32  5.36  2.55  1.57  1.27
256  72   16.85  7.85  3.37  1.83  1.35
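Worked cell from the tables above: a 64-byte block costs MP = 40 + (64/16) x 2 = 48 cc, so for the 16K cache AMAT = 1 + 0.0264 x 48 = 2.27 cc, matching the second table.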

The best block size depends on both the latency and the bandwidth of the lower-level memory:
- low latency and bandwidth => decrease block size
- high latency and bandwidth => increase block size


2nd Miss Rate Reduction Technique: Larger Caches

Reduces capacity misses
Drawbacks: higher cost, longer hit time

[Figure: miss rate per type (0 to 0.14) vs. cache size (1KB to 128KB) for 1-way through 8-way associativity; the capacity component shrinks as the cache grows]


3rd Miss Rate Reduction Technique: Higher Associativity

Miss rates improve with higher associativity. Two rules of thumb:
- An 8-way set-associative cache is almost as effective in reducing misses as a fully-associative cache of the same size
- 2:1 Cache Rule: Miss rate (DM cache of size N) = Miss rate (2-way cache of size N/2)
Beware: execution time is the only final measure! Will the clock cycle time increase? Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%


3rd Miss Rate Reduction Technique: Higher Associativity (2:1 Cache Rule)

[Figure: miss rate per type (compulsory, capacity, conflict) vs. cache size (1KB to 128KB) for 1-way through 8-way associativity, illustrating the rule]

Miss rate of a 1-way associative cache of size X = miss rate of a 2-way associative cache of size X/2


3rd Miss Rate Reduction Technique: Higher Associativity (cont’d)

Example: CCT_2-way = 1.10 x CCT_1-way, CCT_4-way = 1.12 x CCT_1-way, CCT_8-way = 1.14 x CCT_1-way; hit time = 1 cc, miss penalty = 50 cc. Find AMAT using the miss rates from Fig. 5.9 (old textbook).

AMAT (cc):
CSize [KB]  1-way  2-way  4-way  8-way
1           7.65   6.60   6.22   5.44
2           5.90   4.90   4.62   4.09
4           4.60   3.95   3.57   3.19
8           3.30   3.00   2.87   2.59
16          2.45   2.20   2.12   2.04
32          2.00   1.80   1.77   1.79
64          1.70   1.60   1.57   1.59
128         1.50   1.45   1.42   1.44
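Reading one cell (back-calculating, so treat the miss rates as implied): AMAT(1KB, 1-way) = 7.65 implies a miss rate of (7.65 - 1)/50 = 13.3%; for 2-way, AMAT = 1.10 + 0.110 x 50 = 6.60 cc. Note that from 32KB up, higher associativity can lose (e.g., 1.79 vs. 1.77 cc at 32KB for 8-way vs. 4-way) because of the slower clock.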


4th Miss Rate Reduction Technique: Way Prediction, “Pseudo-Associativity”

How can we combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache?

Way prediction: extra bits are kept to predict the way (block within the set)
- The mux is set early to select the desired block, and only a single tag comparison is performed
- What if the prediction misses? => check the other blocks in the set
- Used in the Alpha 21264 (1 bit per block in the I-cache): 1 cc if the predictor is correct, 3 cc if not; prediction accuracy is 85%
- Used in the MIPS 4300 embedded processor to lower power
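With those numbers, the average I-cache hit time is 0.85 x 1 + 0.15 x 3 = 1.3 cc.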


4th Miss Rate Reduction Technique: Way Prediction, Pseudo-Associativity

Pseudo-associative cache:
- Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, we have a pseudo-hit (slow hit)
- Accesses proceed just as in the DM cache for a hit; on a miss, check the second entry
- A simple way is to invert the MSB of the INDEX field to find the other block in the “pseudo set”
- What if there are too many hits in the slow part? => swap the contents of the blocks

Access times, fastest to slowest: Hit time < Pseudo hit time < Miss penalty


Example: Pseudo-Associativity

Compare 1-way, 2-way, and pseudo-associative organizations for 2KB and 128KB caches
- Hit time = 1 cc, pseudo hit time = 2 cc; the other parameters are the same as in the previous example
- AMAT_ps = Hit time_ps + Miss rate_ps x Miss penalty
- Miss rate_ps = Miss rate_2-way
- Hit time_ps = Hit time_1-way + Alternate hit rate_ps x 2
- Alternate hit rate_ps = Hit rate_2-way - Hit rate_1-way = Miss rate_1-way - Miss rate_2-way

AMAT (cc):
CSize [KB]  1-way  2-way  Pseudo
2           5.90   4.90   4.844
128         1.50   1.45   1.356
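Worked 2KB row: the implied miss rates are (5.90 - 1)/50 = 0.098 (1-way) and (4.90 - 1.10)/50 = 0.076 (2-way); the alternate hit rate is 0.098 - 0.076 = 0.022, so Hit time_ps = 1 + 0.022 x 2 = 1.044 and AMAT_ps = 1.044 + 0.076 x 50 = 4.844 cc.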


5th Miss Rate Reduction Technique:Compiler Optimizations

Reduction comes from software (no hardware changes)
McFarling [1989] reduced cache misses by 75% (8KB DM cache, 4-byte blocks) in software

Instructions:
- Reorder procedures in memory so as to reduce conflict misses
- Profiling to look at conflicts (using tools they developed)

Data:
- Merging arrays: improve spatial locality by a single array of compound elements vs. 2 arrays
- Loop interchange: change the nesting of loops to access data in the order stored in memory
- Loop fusion: combine 2 independent loops that have the same looping and some variables overlap
- Blocking: improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows


Loop Interchange

Motivation: some programs have nested loops that access data in nonsequential order

Solution: Simply exchanging the nesting of the loops can make the code access the data in the order it is stored => reduce misses by improving spatial locality; reordering maximizes use of data in a cache block before it is discarded


Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
  for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
      x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
  for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
      x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words; improved spatial locality. Reduces misses if the arrays do not fit in the cache.


Blocking

Motivation: multiple arrays, some accessed by rows and some by columns

Storing the arrays row by row (row major order) or column by column (column major order) does not help: both rows and columns are used in every iteration of the loop (Loop Interchange cannot help)

Solution: instead of operating on entire rows and columns of an array, blocked algorithms operate on submatrices or blocks => maximize accesses to the data loaded into the cache before the data is replaced


Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    r = 0;
    for (k = 0; k < N; k = k+1)
      r = r + y[i][k] * z[k][j];
    x[i][j] = r;
  };

Two inner loops:
- read all NxN elements of z[]
- read N elements of 1 row of y[] repeatedly
- write N elements of 1 row of x[]

Capacity misses are a function of N and cache size: up to 2N^3 + N^2 (assuming no conflicts; otherwise more)

Idea: compute on a BxB submatrix that fits in the cache


Blocking Example (cont’d)

B is called the Blocking Factor
Capacity misses drop from 2N^3 + N^2 to N^3/B + 2N^2
Conflict misses, too?

/* After */
for (jj = 0; jj < N; jj = jj+B)
  for (kk = 0; kk < N; kk = kk+B)
    for (i = 0; i < N; i = i+1)
      for (j = jj; j < min(jj+B,N); j = j+1) {
        r = 0;
        for (k = kk; k < min(kk+B,N); k = k+1)
          r = r + y[i][k] * z[k][j];
        x[i][j] = x[i][j] + r;
      };


Merging Arrays

Motivation: some programs reference multiple arrays in the same dimension with the same indices at the same time =>these accesses can interfere with each other,leading to conflict misses

Solution: combine these independent matrices into a single compound array, so that a single cache block can contain the desired elements


Merging Arrays Example

/* Before: 2 sequential arrays */
int val[SIZE];
int key[SIZE];

/* After: 1 array of structures */
struct merge {
  int val;
  int key;
};
struct merge merged_array[SIZE];


Loop Fusion

Some programs have separate sections of code that access the same data with the same loops, performing different computations on the common data

Solution: “fuse” the code into a single loop => the data that are fetched into the cache can be used repeatedly before being swapped out => reduced misses via improved temporal locality


Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1)
    d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
  for (j = 0; j < N; j = j+1) {
    a[i][j] = 1/b[i][j] * c[i][j];
    d[i][j] = a[i][j] + c[i][j];
  }

Before: 2 misses per access to a & c; after: one miss per access => improved temporal locality.


Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

[Figure: performance improvement (1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking, applied by hand to compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7)]


Summary: Miss Rate Reduction

3 Cs: Compulsory, Capacity, Conflict
1. Larger cache => reduces capacity misses
2. Larger block size => reduces compulsory misses
3. Higher associativity => reduces conflict misses
4. Way prediction & pseudo-associativity
5. Compiler optimizations

CPU time = IC x (CPI_Exec + MemAccess/Inst x Miss rate x Miss penalty) / Clock rate


Reducing Miss Penalty

Motivation: AMAT = Hit time + Miss rate x Miss penalty, and technology trends => the relative cost of miss penalties increases over time

Techniques that address miss penalties:
1. Multilevel caches
2. Critical word first and early restart
3. Giving priority to read misses over writes
4. Merging write buffer
5. Victim caches


1st Miss Penalty Reduction Technique: Multilevel Caches

Architect’s dilemma:
- Should I make the cache faster, to keep pace with the speed of CPUs?
- Should I make the cache larger, to overcome the widening gap between CPU and main memory?

L2 equations:
AMAT = Hit time_L1 + Miss rate_L1 x Miss penalty_L1
Miss penalty_L1 = Hit time_L2 + Miss rate_L2 x Miss penalty_L2
AMAT = Hit time_L1 + Miss rate_L1 x (Hit time_L2 + Miss rate_L2 x Miss penalty_L2)

Definitions:
- Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss rate_L2)
- Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss rate_L1 x Miss rate_L2)
The global miss rate is what matters
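Worked example (assumed numbers): if 4% of CPU accesses miss in L1 and 25% of those also miss in L2, the local L2 miss rate is 25%, but the global L2 miss rate is only 0.04 x 0.25 = 1%.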


1st Miss Penalty Reduction Technique: Multilevel Caches

[Figure: global vs. local miss rate, and relative execution time (1.0 = 8MB L2 with a 1-cc hit)]


Reducing Misses: Which apply to L2 Cache?

Reducing miss rate:
1. Reduce capacity misses via a larger cache
2. Reduce compulsory misses via a larger block size
3. Reduce conflict misses via higher associativity
4. Reduce conflict misses via way prediction & pseudo-associativity
5. Reduce conflict/capacity misses via compiler optimizations


L2 cache block size & AMAT (32KB L1, 8-byte path to memory):

Block size (bytes)  16    32    64    128   256   512
Relative CPU time   1.36  1.28  1.27  1.34  1.54  1.95


Multilevel Inclusion: Yes or No?

Inclusion property: L1 data are always present in L2
- Good for I/O & cache consistency (L1 is usually WT, so valid data are in L2)
- Drawback: what if measurements suggest smaller cache blocks for smaller L1 caches and larger blocks for larger L2 caches? E.g., Pentium 4: 64B L1 blocks, 128B L2 blocks. Added complexity: when replacing a block in L2, we should discard 2 blocks in the L1 cache => increases the L1 miss rate

What if the budget for an L2 cache is only slightly bigger than the L1 cache? => L2 would keep a redundant copy of L1
- Multilevel exclusion: L1 data is never found in the L2 cache
- E.g., the AMD Athlon uses this: 64KB L1I$ + 64KB L1D$ vs. 256KB L2U$


2nd Miss Penalty Reduction Technique: Early Restart and Critical Word First

Don’t wait for the full block to be loaded before restarting the CPU:
- Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
- Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
Generally useful only with large blocks
Problem: spatial locality => the CPU tends to want the next sequential word anyway, so it is not clear how much early restart and CWF help


3rd Miss Penalty Reduction Technique: Giving Read Misses Priority over Writes

[Figure: cache datapath (tag compare =?, 2:1 mux, delayed write buffer) with a write buffer between the cache and the lower-level memory; reads check the buffers before going to memory]


3rd Miss Penalty Reduction Technique: Read Priority over Write on Miss (2)

Write-through with write buffers can create RAW conflicts with main memory reads on cache misses:
- If we simply wait for the write buffer to empty, we might increase the read miss penalty (by 50% on the old MIPS 1000)
- Instead, check the write buffer contents before a read; if there are no conflicts, let the memory access continue

Write-back also wants a buffer to hold misplaced blocks:
- Read miss replacing a dirty block
- Normal: write the dirty block to memory, and then do the read
- Instead: copy the dirty block to a write buffer, then do the read, and then do the write
- The CPU stalls less, since it restarts as soon as the read is done

Example (DM, WT; 512 and 1024 map to the same block):
SW 512(R0), R3  ; cache index 0
LW R1, 1024(R0) ; cache index 0
LW R2, 512(R0)  ; cache index 0
If the read of 512 is allowed to bypass the buffered write of 512, LW R2 returns stale data => a RAW hazard through memory.


4th Miss Penalty Reduction Technique: Merging Write Buffer

Write-through caches rely on write buffers: on a write, the data and the full address are written into the buffer, and the write is finished from the CPU’s perspective

Problem: a full write buffer stalls the CPU

Write merging: combine writes to the same block into a single buffer entry; multiword writes are faster than single-word writes => reduced write-buffer stalls

Is this applicable to I/O addresses? (No: writes to memory-mapped I/O must not be merged.)


5th Miss Penalty Reduction Technique: Victim Caches

How can we keep the fast hit time of direct mapped and still avoid conflict misses?

Idea: add a small buffer to hold data discarded from the cache, in case it is needed again
- Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4KB direct-mapped data cache
- Used in Alpha and HP machines, and the AMD Athlon (8 entries)

[Figure: a small fully-associative victim cache (a few cache lines of data, each with its own tag and comparator) between the cache and the next lower level in the hierarchy]


Summary of Miss Penalty Reducing Techniques

1. Multilevel Caches 2. Critical Word First and Early Restart 3. Giving Priority to Read Misses over Writes 4. Merging Write Buffer 5. Victim Caches


Reducing Cache Miss Penalty or Miss Rate via Parallelism

Idea: overlap the execution of instructions with activity in the memory hierarchy

Miss rate/penalty reduction techniques:
1. Nonblocking caches: reduce stalls on cache misses in CPUs with out-of-order completion
2. Hardware prefetching of instructions and data: reduce the miss penalty
3. Compiler-controlled prefetching


Reduce Misses/Penalty: Non-blocking Caches to reduce stalls on misses

A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss
- requires F/E bits on registers or out-of-order execution
- “hit under miss” reduces the effective miss penalty by working during the miss instead of ignoring CPU requests
- “hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
- significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
- requires multiple memory banks (otherwise the misses cannot be overlapped)
- the Pentium Pro allows 4 outstanding memory misses


Value of Hit Under Miss for SPEC


Reducing Misses/Penalty by Hardware Prefetching of Instructions & Data

E.g., instruction prefetching:
- The Alpha 21064 fetches 2 blocks on a miss; the extra block is placed in a “stream buffer”; on a miss, check the stream buffer

Works with data blocks too:
- Jouppi [1990]: 1 data stream buffer got 25% of the misses from a 4KB cache; 4 streams got 43%
- Palacharla & Kessler [1994]: for scientific programs, 8 streams got 50% to 70% of the misses from two 64KB, 4-way set-associative caches

Prefetching relies on having extra memory bandwidth that can be used without penalty


Reducing Misses/Penalty by Software Prefetching Data

Data prefetch:
- Load data into a register (HP PA-RISC loads)
- Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9)
- Special prefetching instructions cannot cause faults: a form of speculative execution

Prefetching comes in two flavors:
- Binding prefetch: requests load directly into a register; must be the correct address and register!
- Non-binding prefetch: load into the cache; can be incorrect, and cannot fault

Issuing prefetch instructions takes time:
- Is the cost of prefetch issues < the savings in reduced misses?
- Wider superscalar machines reduce the difficulty of issue bandwidth
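A minimal C sketch of compiler-style (non-binding) prefetching, using GCC/Clang’s __builtin_prefetch; the prefetch distance of 16 elements is an assumed tuning choice:

#include <stddef.h>

/* Sum an array while prefetching ahead of the current element.
   The second argument of __builtin_prefetch (0 = read) and the third
   (temporal-locality hint, 0-3) are optional. */
double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 0);  /* non-binding: no fault */
        s += a[i];
    }
    return s;
}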


Review: Improving Cache Performance

1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

AMAT = Hit time + Miss rate x Miss penalty


1st Hit Time Reduction Technique: Small and Simple Caches

Smaller hardware is faster => a small cache helps the hit time

Keep the cache small enough to fit on the same chip as the processor (avoid the time penalty of going off-chip)

Keep the cache simple: use a direct-mapped cache, which overlaps the tag check with the transmission of the data


2nd Hit Time Reduction Technique: Avoiding Address Translation

[Figure: three organizations.
(1) Conventional organization: CPU -> TB -> $ -> MEM; the cache is accessed with the physical address (PA).
(2) Virtually addressed cache: CPU -> $ (VA) -> TB -> MEM; translate only on a miss; synonym problem.
(3) Overlapped access: the cache (with VA tags, or PA tags plus an L2 $) is accessed in parallel with translation; requires the $ index to remain invariant across translation.]


2nd Hit Time Reduction Technique: Avoiding Address Translation (cont’d)

Send the virtual address to the cache? Called a virtually addressed cache, or just virtual cache, vs. a physical cache
- Every time the process is switched, the cache must logically be flushed; otherwise we get false hits. Cost: time to flush + “compulsory” misses from an empty cache
- Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address => multiple copies of the same data in a virtual cache
- I/O typically uses physical addresses; if I/O must interact with the cache, mapping to virtual addresses is needed

Solution to aliases: HW solutions guarantee every cache block a unique physical address
Solution to cache flush: add a process-identifier tag that identifies the process as well as the address within the process: we can’t get a hit if the process is wrong


Cache Optimization Summary

Technique                              MR  MP  HT  Complexity
Larger block size                      +   -       0
Higher associativity                   +       -   1
Victim caches                          +           2
Pseudo-associative caches              +           2
HW prefetching of instr/data           +   +       2
Compiler-controlled prefetching        +   +       3
Compiler techniques to reduce misses   +           0
Priority to read misses                    +       1
Early restart & critical word first        +       2
Non-blocking caches                        +       3
Second-level caches                        +       2
Better memory system                       +       3
Small & simple caches                  -       +   0
Avoiding address translation                   +   2
Pipelining caches                              +   2

(MR = miss rate, MP = miss penalty, HT = hit time; + improves, - hurts)