
Shared Memory Multiprocessors

Avinash Karanth Kodi
Department of Electrical and Computer Engineering
University of Arizona, Tucson, AZ – 85721
E-mail: [email protected]

ECE 568: Introduction to Parallel Processing

What is a Multiprocessor

• A collection of communicating processors
  – goals: balance load, reduce inherent communication, reduce extra work
• A multi-cache, multi-memory system
  – the role of these components is essential regardless of the programming model
  – the programming model and communication abstraction affect the specific performance tradeoffs



Natural Extensions of Memory System

[Figure: three natural extensions of the memory system. (a) Shared cache: processors P1 ... Pn reach a shared first-level cache through a switch, backed by interleaved main memory. (b) Centralized memory (dance hall, UMA): each processor has its own cache, connected through an interconnection network to shared memory modules. (c) Distributed memory (NUMA): each processor-cache pair has a local memory, and the nodes are connected by an interconnection network.]


Bus-based Symmetric Multiprocessors (SMPs)

[Figure: four processors, each with a private cache, connected by a shared bus to four memory modules; the bus carries shared and owned/dirty line indications.]

• SMPs dominate the server market (a $60 billion market)
• Attractive as throughput servers and for parallel programs
  – fine-grain resource sharing
  – uniform access via loads/stores
  – automatic data movement and coherent replication in caches
  – a cheap and powerful extension
• Normal uniprocessor mechanisms are used to access data
  – the key is extending the memory hierarchy to support multiple processors


Caches are critical for performance

• Reduce average latency
  – automatic replication closer to the processor
• Reduce average bandwidth demand
• Data is logically transferred from producer to consumer through memory
  – store reg --> mem
  – load reg <-- mem
• What happens when a store and a load are executed on different processors?
• Many processors can share data efficiently

Cache Coherence Problem in SMPs

[Figure: processors P1, P2, P3 with private caches on a bus to main memory, which holds u = 5. (1) P1 reads u and caches 5. (2) P3 reads u and caches 5. (3) P3 writes u = 7; with a write-back cache only P3's copy changes. (4) P1 reads u and sees its stale cached 5. (5) P2 reads u: which value does it get?]

Replicas in the caches of multiple processors in an SMP have to be updated or kept coherent

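To make the failure concrete, here is a minimal Python sketch of the scenario in the figure: three private write-back caches with no coherence mechanism. The Cache class and its read/write methods are invented for illustration, not taken from the slides.

    # Stale-value problem: private write-back caches, no coherence.
    class Cache:
        def __init__(self, memory):
            self.memory = memory   # shared backing store
            self.lines = {}        # addr -> privately cached value

        def read(self, addr):
            if addr not in self.lines:           # miss: fetch from memory
                self.lines[addr] = self.memory[addr]
            return self.lines[addr]              # hit: possibly stale!

        def write(self, addr, value):
            self.lines[addr] = value             # write-back: memory untouched

    memory = {"u": 5}
    p1, p2, p3 = (Cache(memory) for _ in range(3))

    p1.read("u")         # (1) P1 caches u = 5
    p3.read("u")         # (2) P3 caches u = 5
    p3.write("u", 7)     # (3) P3 writes u = 7 into its own cache only

    print(p1.read("u"))  # (4) P1 still sees the stale 5
    print(p2.read("u"))  # (5) P2 misses and loads memory's stale 5

P1 keeps its stale copy, and even a fresh reader gets the old value, because P3's write never propagated.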


Cache Coherence Problem

• Caches play a key role in all cases
  – reduce average data access time
  – reduce bandwidth demands placed on the shared interconnect
• Private processor caches create a problem
  – copies of a variable can be present in multiple caches
  – a write by one processor may not become visible to others: they'll keep accessing the stale value in their caches
  – this is the cache coherence problem
  – it arises with data sharing, I/O operations, and process migration
• What do we do about it?
  – organize the memory hierarchy to make it go away
  – detect the problem and take actions to eliminate it


Intuitive Memory Model & Coherence Protocols

• Reading an address should return the last value written to that address
• Easy in uniprocessors
  – except for I/O
• The cache coherence problem in multiprocessors is more pervasive and more performance-critical

Two ways of maintaining cache coherence:
• Invalidate-based protocols: invalidate replicas if a processor wants to write to a location
• Write-update protocols: update replicas with the written value


Definition of a Cache Coherent System

• A multiprocessor system is coherent if the results of any execution of a program are such that, for each location, it is possible to construct a hypothetical total order of all memory accesses that is consistent with the results of the execution, in which:
• a read by a processor P to a location X that follows a write by P to X, with no writes to X by another processor between the write and the read, always returns the value written by P;
• a read by a processor to location X that follows a write by another processor to X returns the written value if the read and the write are sufficiently separated in time and no other writes to X occur between the two accesses;
• writes to the same location are "serialized": two writes to the same memory location by any two processors are seen in the same order by ALL processors (see the sketch below).
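The write-serialization condition lends itself to a direct check. A sketch (the function name and trace format are assumptions): for one location, record the order in which each processor saw writes become visible; coherence requires all of these observed orders to agree.

    def writes_serialized(observed_orders):
        """observed_orders: one list per processor, giving the order in
        which that processor saw writes to a single location become
        visible (writes named by IDs)."""
        return all(order == observed_orders[0] for order in observed_orders)

    # Two writes w1, w2 to the same location:
    print(writes_serialized([["w1", "w2"], ["w1", "w2"]]))  # True: serialized
    print(writes_serialized([["w1", "w2"], ["w2", "w1"]]))  # False: seen reversed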


Cache Coherence Properties

Key properties:
• Write propagation: writes by any processor must become visible to all other processors
• Write serialization: all writes to a location (from the same or different processors) are seen in the same order by all processors

Two classes of protocols:
• Snoopy protocols: for bus-based systems (SMPs)
• Directory-based protocols: for large-scale multiprocessors (point-to-point interconnects)

Definition of a Snoopy Protocol

A snooping protocol is a distributed algorithm represented by a collection of cooperating finite state machines. It is specified by the following components:

• the set of states associated with memory blocks in the local caches
• the state-transition diagram, whose input symbols are processor requests and bus transactions
• the actions associated with each state transition

The states in the different caches are coordinated by the bus transactions.

Bus-based Snoopy Protocol

• The bus is a broadcast medium, and caches know what they have
• The cache controller "snoops" all transactions on the shared bus
  – a transaction is relevant if it is for a block the cache contains
  – take action to ensure coherence: invalidate, update, or supply the value
  – the action depends on the state of the block and the protocol

[Figure: processors P1 ... Pn, each with a cache holding State/Address/Data per block and a bus-side snoop port, on a shared bus with memory and I/O devices; the processor side performs cache-memory transactions.]

Example: Write-Through Invalidate

[Figure: the same u example as page 6, now with an invalidation protocol. (1) P1 reads u: 5. (2) P3 reads u: 5. (3) P3 writes u = 7; the write goes through to memory and invalidates the other cached copies. (4), (5) Subsequent reads of u by P1 and P2 miss and fetch the new value.]

• Cache controllers can snoop on the bus
• All bus transactions are visible to all cache controllers
• All controllers see the transactions in the same order
• Controllers can take action if the bus transaction is relevant, i.e. it involves a memory block in their cache
• Coherence is maintained at the granularity of a cache block

Architectural Building Blocks

• Invalidation protocols: invalidate replicas if a processor writes a location
• Update protocols: update replicas with the written value

Based on:
• Bus transactions with three phases
  – bus arbitration
  – command and address transmission
  – data transfer
• FSM state transitions for a cache block
  – state information (e.g. invalid, valid, dirty) is available for blocks in the cache
  – state information for uncached blocks is implicitly defined (e.g. invalid or not present)

Design Choices

• The controller updates the state of blocks in response to processor and snoop events, and generates bus transactions
• Snoopy protocol
  – set of states
  – state-transition diagram
  – actions
• Basic choices
  – write-through vs. write-back
  – invalidate vs. update

[Figure: the cache controller sits between the processor's loads/stores and the bus snoop port, maintaining State/Tag/Data for each cached block.]

Write-Through Invalidate Protocol

• Two states per block in each cache
  – as in a uniprocessor
  – the state of a block is a p-vector of per-cache states
  – hardware state bits are associated with blocks that are in the cache; other blocks can be seen as being in the invalid (not-present) state in that cache
• Writes invalidate all other caches
  – there can be multiple simultaneous readers of a block, but a write invalidates them

[State diagram: two states per block, Valid (V) and Invalid (I).
  V: PrRd / -- (hit); PrWr / BusWr (write through); snooped BusWr / -- goes to I
  I: PrRd / BusRd goes to V; PrWr / BusWr (write through, no allocate)]

[Figure: P1 ... Pn, each with a State/Tag/Data cache, on a shared bus with memory and I/O devices.]
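A minimal Python sketch of this two-state protocol, assuming an atomic bus and no write-allocate; the names (WTCache, Bus, pr_rd, pr_wr) are invented for illustration.

    V, I = "V", "I"   # Valid, Invalid

    class Bus:
        def __init__(self):
            self.caches = []
        def bus_wr(self, writer, addr):
            for c in self.caches:
                if c is not writer:
                    c.state[addr] = I          # snooped BusWr: V -> I

    class WTCache:
        def __init__(self, memory, bus):
            self.memory, self.bus = memory, bus
            self.state = {}                    # addr -> V/I (absent = I)
            bus.caches.append(self)

        def pr_rd(self, addr):
            if self.state.get(addr, I) == I:   # PrRd / BusRd: fetch, go to V
                self.state[addr] = V
            return self.memory[addr]           # write-through: memory is current

        def pr_wr(self, addr, value):
            self.memory[addr] = value          # PrWr / BusWr: write through
            self.bus.bus_wr(self, addr)        # invalidate all other copies
            # no write-allocate: the writer's own state is unchanged

    memory, bus = {"u": 5}, Bus()
    p1, p2, p3 = (WTCache(memory, bus) for _ in range(3))

    p1.pr_rd("u"); p3.pr_rd("u")   # both cache u in state V
    p3.pr_wr("u", 7)               # BusWr invalidates P1's copy
    print(p1.state.get("u", I))    # 'I' -- the stale copy is gone
    print(p1.pr_rd("u"))           # read miss refetches the current value: 7

Because every write goes to memory immediately, correctness only requires killing stale copies; the cost is bus traffic on every store, which motivates the write-back MSI protocol that follows.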

MSI Protocol (1/3)

• State machine for CPU requests, one per cache block. States: Invalid, Shared (read only), Modified (read/write).

[State diagram:
  Invalid -> Shared: CPU read / place read miss on bus
  Invalid -> Modified: CPU write / place write miss on bus
  Shared -> Shared: CPU read hit; CPU read miss / place read miss on bus
  Shared -> Modified: CPU write / place write miss on bus
  Modified -> Modified: CPU read hit; CPU write hit; CPU write miss / write back cache block, place write miss on bus
  Modified -> Shared: CPU read miss / write back block, place read miss on bus]

MSI Protocol (2/3)

• State machine for bus requests, one per cache block.

[State diagram:
  Shared -> Invalid: write miss for this block
  Modified -> Invalid: write miss for this block / write back block (abort memory access)
  Modified -> Shared: read miss for this block / write back block (abort memory access)]

MSI Protocol (3/3)

• Combined state machine for CPU requests and for bus requests, one per cache block: the superposition of the two preceding diagrams. The CPU-side transitions (read/write hits and misses, with read and write misses placed on the bus and write-backs from Modified) and the bus-side transitions (snooped read and write misses, with write-backs and aborted memory accesses from Modified) act on the same Invalid / Shared / Modified states.

Example:

The following pages step through this trace on the MSI state machine, recording for each step the state/address/value in P1 and P2, the bus action (action, processor, address, value), and the memory contents:

P1: Write 10 to A1
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2

Assumes the initial cache state is invalid, and that A1 and A2 map to the same cache block, but A1 != A2. (Each step page repeats the MSI state diagram alongside the growing trace table.)

Example: Step 1

step                 | P1           | P2           | Bus         | Memory
P1: Write 10 to A1   | Excl. A1 10  |              | WrMs P1 A1  |

P1's write misses: a write miss (WrMs P1 A1) goes on the bus, and the block is loaded Exclusive with value 10.

Assumes the initial cache state is invalid and A1 and A2 map to the same cache block, but A1 != A2.

Example: Step 2

step                 | P1           | P2           | Bus         | Memory
P1: Write 10 to A1   | Excl. A1 10  |              | WrMs P1 A1  |
P1: Read A1          | Excl. A1 10  |              |             |

P1's read hits in the Exclusive block; no bus transaction is needed.

Assumes the initial cache state is invalid and A1 and A2 map to the same cache block, but A1 != A2.

Example: Step 3

step                 | P1           | P2           | Bus            | Memory
P1: Write 10 to A1   | Excl. A1 10  |              | WrMs P1 A1     |
P1: Read A1          | Excl. A1 10  |              |                |
P2: Read A1          | Shar. A1 10  | Shar. A1     | RdMs P2 A1     |
                     |              |              | WrBk P1 A1 10  | A1 = 10
                     |              | Shar. A1 10  | RdDa P2 A1 10  | A1 = 10

P2's read miss (RdMs P2 A1) forces P1 to write back the dirty block (WrBk P1 A1 10) and drop to Shared; memory picks up the value, and P2 receives the data (RdDa P2 A1 10) in Shared.

Assumes the initial cache state is invalid and A1 and A2 map to the same cache block, but A1 != A2.

Example: Step 4

step                 | P1           | P2           | Bus            | Memory
P1: Write 10 to A1   | Excl. A1 10  |              | WrMs P1 A1     |
P1: Read A1          | Excl. A1 10  |              |                |
P2: Read A1          | Shar. A1 10  | Shar. A1     | RdMs P2 A1     |
                     |              |              | WrBk P1 A1 10  | A1 = 10
                     |              | Shar. A1 10  | RdDa P2 A1 10  | A1 = 10
P2: Write 20 to A1   | Inv.         | Excl. A1 20  | WrMs P2 A1     | A1 = 10

P2's write miss (WrMs P2 A1) invalidates P1's copy; P2 becomes Exclusive with 20 while memory still holds the stale 10.

Assumes the initial cache state is invalid and A1 and A2 map to the same cache block, but A1 != A2.

Example: Step 5

step                 | P1           | P2           | Bus            | Memory
P1: Write 10 to A1   | Excl. A1 10  |              | WrMs P1 A1     |
P1: Read A1          | Excl. A1 10  |              |                |
P2: Read A1          | Shar. A1 10  | Shar. A1     | RdMs P2 A1     |
                     |              |              | WrBk P1 A1 10  | A1 = 10
                     |              | Shar. A1 10  | RdDa P2 A1 10  | A1 = 10
P2: Write 20 to A1   | Inv.         | Excl. A1 20  | WrMs P2 A1     | A1 = 10
P2: Write 40 to A2   |              | Excl. A2 40  | WrMs P2 A2     |
                     |              |              | WrBk P2 A1 20  | A1 = 20

Writing A2 misses and replaces the dirty A1 block, which is written back (WrBk P2 A1 20), finally bringing memory's copy of A1 up to date. The sketch below replays this trace.

Assumes the initial cache state is invalid and A1 and A2 map to the same cache block, but A1 != A2.
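The five-step trace above can be reproduced by a small Python model of the MSI machine from the preceding pages. This is a sketch under simplifying assumptions: an atomic bus, and one cache line per cache (A1 and A2 map to the same block). All class and method names are invented.

    INV, SHAR, EXCL = "Inv.", "Shar.", "Excl."

    class MSICache:
        def __init__(self, name, memory, bus):
            self.name, self.memory, self.bus = name, memory, bus
            self.state, self.addr, self.value = INV, None, None
            bus.append(self)

        # --- bus side: react to other processors' misses ---
        def snoop_write_miss(self, addr):
            if self.addr == addr and self.state != INV:
                if self.state == EXCL:               # owner writes back first
                    self.memory[addr] = self.value
                    print(f"    WrBk {self.name} {addr} {self.value}")
                self.state = INV                     # invalidate our copy

        def snoop_read_miss(self, addr):
            if self.addr == addr and self.state == EXCL:
                self.memory[addr] = self.value       # write back block
                print(f"    WrBk {self.name} {addr} {self.value}")
                self.state = SHAR                    # keep a shared copy

        # --- processor side ---
        def _replace(self, addr):
            if self.addr not in (None, addr):
                if self.state == EXCL:               # write back dirty victim
                    self.memory[self.addr] = self.value
                    print(f"    WrBk {self.name} {self.addr} {self.value}")
                self.state = INV
            self.addr = addr

        def read(self, addr):
            self._replace(addr)
            if self.state == INV:                    # CPU read miss
                print(f"    RdMs {self.name} {addr}")
                for c in self.bus:
                    if c is not self:
                        c.snoop_read_miss(addr)
                self.value, self.state = self.memory[addr], SHAR
                print(f"    RdDa {self.name} {addr} {self.value}")
            return self.value

        def write(self, addr, value):
            self._replace(addr)
            if self.state != EXCL:                   # CPU write miss/upgrade
                print(f"    WrMs {self.name} {addr}")
                for c in self.bus:
                    if c is not self:
                        c.snoop_write_miss(addr)
                self.state = EXCL
            self.value = value

    memory, bus = {"A1": 0, "A2": 0}, []
    p1, p2 = MSICache("P1", memory, bus), MSICache("P2", memory, bus)

    print("P1: Write 10 to A1"); p1.write("A1", 10)
    print("P1: Read A1 ->", p1.read("A1"))
    print("P2: Read A1 ->", p2.read("A1"))
    print("P2: Write 20 to A1"); p2.write("A1", 20)
    print("P2: Write 40 to A2"); p2.write("A2", 40)
    print("memory:", memory)    # A1 ends at 20, as in step 5

One simplification: in step 5 this model writes the dirty A1 block back before issuing WrMs P2 A2, whereas the slide shows the write miss on the bus first; the final states and memory contents match the table either way.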

MESI Writeback Invalidation Protocol

• States
  – Invalid (I)
  – Shared (S): one or more copies
  – Exclusive (E): one copy only (clean)
  – Dirty or Modified (M): one copy only
• Processor events
  – PrRd (read)
  – PrWr (write)
• Bus transactions
  – BusRd: asks for a copy with no intent to modify
  – BusRdX: asks for a copy with intent to modify
  – BusWB: updates memory
• Actions
  – update state, perform bus transaction, flush value onto bus

4-State MESI Protocol

• Invalid (I)
• Exclusive (E)
• Shared (S)
• Modified (M)

[State diagram, transitions written state -> state: event / action ("-" = no action); BusRd(S) means the shared signal was asserted on the read miss, BusRd(S~) means it was not:
  I -> E: PrRd / BusRd(S~)  (no other cache holds the block)
  I -> S: PrRd / BusRd(S)   (some other cache holds the block)
  I -> M: PrWr / BusRdX
  E -> E: PrRd / -
  E -> M: PrWr / -          (no bus transaction needed)
  E -> S: BusRd / Flush
  E -> I: BusRdX / Flush
  S -> S: PrRd / -; BusRd / Flush
  S -> M: PrWr / BusRdX
  S -> I: BusRdX / Flush
  M -> M: PrRd / -; PrWr / -
  M -> S: BusRd / Flush
  M -> I: BusRdX / Flush]
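The same diagram can be written down as the next-state table a controller would implement. A sketch, assuming the Illinois-style shared signal described above; the event names are invented labels.

    # (state, event) -> (next state, bus action); "-" = no bus action
    MESI = {
        ("I", "PrRd_shared"): ("S", "BusRd(S)"),    # another cache has a copy
        ("I", "PrRd_alone"):  ("E", "BusRd(S~)"),   # no other copy exists
        ("I", "PrWr"):        ("M", "BusRdX"),
        ("E", "PrRd"):        ("E", "-"),
        ("E", "PrWr"):        ("M", "-"),           # silent upgrade: no bus traffic
        ("E", "BusRd"):       ("S", "Flush"),
        ("E", "BusRdX"):      ("I", "Flush"),
        ("S", "PrRd"):        ("S", "-"),
        ("S", "PrWr"):        ("M", "BusRdX"),
        ("S", "BusRd"):       ("S", "Flush"),
        ("S", "BusRdX"):      ("I", "Flush"),
        ("M", "PrRd"):        ("M", "-"),
        ("M", "PrWr"):        ("M", "-"),
        ("M", "BusRd"):       ("S", "Flush"),       # supply data, demote to S
        ("M", "BusRdX"):      ("I", "Flush"),
    }

    state = "I"
    for event in ["PrRd_alone", "PrWr", "BusRd", "PrWr"]:
        state, action = MESI[(state, event)]
        print(f"{event:>12} -> state {state}, bus action {action}")

The E state is the payoff: a block read by only one processor can later be written with no bus transaction at all, which the three-state MSI protocol cannot do.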

Setup for Memory Consistency

• Coherence => writes to a location become visible to all processors in the same order
• But when does a write become visible?
• How do we establish order between a write and a read by different processors?
  – use event synchronization
  – typically involves more than one location!


Requirements for Memory Consistency (1/3)

P1                      P2
A = 1;                  while (Flag == 0);  /* spin idly */
Flag = 1;               print A;

/* Assume the initial values of A and Flag are 0 */

P1                      P2
A = 1;                  print B;
B = 2;                  print A;

/* Assume the initial values of A and B are 0 */

Clearly, we need something more than coherence to give a shared address space a clear semantics: an ordering model that programmers can use to reason about the possible results, and hence the correctness, of their programs.

Requirements for Memory Consistency (2/3)

The hypothetical total order must be such that:
• it gives the same result as the actual execution,
• operations by any particular process occur in the order in which they were issued, and
• the value returned by each read operation is the value written by the last write operation to that location in the total order.

• The coherence protocol defines such properties only for accesses to a single location
• Programs need, in addition, guaranteed properties for accesses to multiple locations

Requirements for Memory Consistency (3/3)

A memory consistency model for a shared address space specifies constraints on the order in which memory operations must appear to be performed (i.e. to become visible to the processors) with respect to one another.

• It includes operations to the same location or to different locations.
• Therefore, it subsumes coherence.

Sequential Consistency (1/3)

Definition (Lamport, 1979): A multiprocessor is sequentially consistent if the result of any execution is the same as if
• the operations of all the processors were executed in some sequential order, and
• the operations of each individual processor occur in this sequence in the order specified by its program.

Two constraints: program order and atomicity of memory operations.

Sequential Consistency (2/3)

• Program order: memory operations of a process must appear to become visible, to itself and to others, in program order
• Write atomicity: maintain a single sequential order among all operations to all memory locations

[Figure: processors P0, P1, ..., Pn connected to a single shared memory.]


Sequential Consistency (3/3)

Result: (A, B) = (1, 0) is allowed under SC; (A, B) = (0, 2) is NOT allowed under SC.

P1                      P2
A = 1;                  print B;
B = 2;                  print A;

/* Assume the initial values of A and B are 0 */

• SC does not ensure mutual exclusion; synchronization primitives are required
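Sequential consistency makes this claim mechanically checkable: enumerate every interleaving that preserves each processor's program order, and collect the possible outcomes. A small Python sketch (all names invented):

    from itertools import permutations

    P1 = [("write", "A", 1), ("write", "B", 2)]
    P2 = [("read", "B"), ("read", "A")]
    ops = [("P1", 0), ("P1", 1), ("P2", 0), ("P2", 1)]

    results = set()
    for order in permutations(ops):
        # an SC execution must keep each processor's ops in program order
        if order.index(("P1", 0)) > order.index(("P1", 1)):
            continue
        if order.index(("P2", 0)) > order.index(("P2", 1)):
            continue
        mem, seen = {"A": 0, "B": 0}, {}
        for proc, i in order:
            op = (P1 if proc == "P1" else P2)[i]
            if op[0] == "write":
                mem[op[1]] = op[2]
            else:
                seen[op[1]] = mem[op[1]]
        results.add((seen["A"], seen["B"]))   # (printed A, printed B)

    print(sorted(results))   # [(0, 0), (1, 0), (1, 2)] -- (0, 2) never occurs

(0, 2) would require P2 to read B = 2 (so A = 1 was already written) and then read A = 0, which no order-preserving interleaving allows.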


Base Cache Coherence Design

• Single-level write-back cache
• Invalidation protocol
• One outstanding memory request per processor
• Atomic memory bus transactions
  – for BusRd and BusRdX, no intervening transactions allowed on the bus between issuing the address and receiving the data
  – BusWB: address and data simultaneous, and sunk by the memory system before any new bus request
• Atomic operations within a process
  – one finishes before the next in program order starts

Cache Controller and Tags

• The cache controller is responsible for parts of a memory operation
• Uniprocessor, on a miss:
  – assert request for the bus
  – wait for bus grant
  – drive address and command lines
  – wait for the command to be accepted by the relevant device
  – transfer data
• In a snoop-based multiprocessor, the cache controller must monitor both the bus and the processor
  – can view it as two controllers: bus-side and processor-side
  – with a single-level cache: dual tags or dual-ported tag RAM
  – responds to bus transactions when necessary


Reporting Snoop Results: How?

• The collective response from the caches must appear on the bus
• Example: in the MESI protocol, on a read miss we need to know
  – Is the block dirty, i.e. should memory respond or not?
  – Is the block shared, i.e. should the requester transition to E or S?
• Three wired-OR signals (sketched below)
  – Shared: asserted if any cache has a copy
  – Dirty: asserted if some cache has a dirty copy (needn't know which, since that cache will do what's necessary)
  – Snoop-valid: asserted when it is OK to check the other two signals (actually inhibits checking until it is OK)
• Illinois MESI requires a priority scheme for cache-to-cache transfers: which cache should supply the data when the block is in shared state?
• Commercial implementations allow memory to provide the data
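A sketch of how the wired-OR combination is used, with invented function and variable names: each cache contributes its (shared, dirty) snoop result, and the OR of each column tells memory whether it must respond and tells the requester which state to load.

    def combine_snoop_results(per_cache):
        """per_cache: one (shared, dirty) pair of booleans per cache."""
        shared = any(s for s, _ in per_cache)   # wired-OR Shared line
        dirty = any(d for _, d in per_cache)    # wired-OR Dirty line
        memory_responds = not dirty             # a dirty owner supplies the data
        load_state = "S" if shared else "E"     # MESI fill state on a read miss
        return shared, dirty, memory_responds, load_state

    # One cache holds the block clean, none holds it dirty:
    print(combine_snoop_results([(True, False), (False, False), (False, False)]))
    # -> (True, False, True, 'S'): memory supplies the data, requester loads in S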


Reporting Snoop Results: When?

Memory needs to know as soon as possible what to do: if none of the caches has a dirty copy, memory has to fetch the data. Three options:

• Fixed number of clocks after the address appears on the bus
  – dual tags required to reduce contention with the processor
  – still must be conservative: the processor blocks access to tag memory on an E -> M transition
• Variable delay
  – memory assumes a cache will supply the data until all caches say "sorry"
  – less conservative, more flexible, more complex
  – memory can fetch the data and hold it just in case (SGI Challenge)
• Immediately: a bit per block in memory
  – main memory maintains a bit per block that indicates whether the block is modified in one of the caches
  – extra hardware complexity in a commodity main-memory system

Multi-level Cache Hierarchies

• How to snoop with multi-level caches?
  – independent bus snooping at every level (additional hardware: snooper, pins, duplication of tags)
  – maintain cache inclusion
• Requirements for inclusion (see the sketch below)
  – data in the higher-level cache is a subset of the data in the lower-level cache
  – modified in the higher level => marked modified in the lower level
• Then we need to snoop only the lowest-level cache
  – if L2 says a block is not present (or not modified), the same holds in L1
  – if a BusRd is seen to a block that is modified in L1, L2 itself knows this
• Inclusion is not always automatically preserved
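The two requirements can be phrased as a predicate over the contents of the two levels. A sketch with invented names, modeling each cache as a map from block address to state, and using the modified-but-stale marking discussed two pages ahead:

    def inclusion_holds(l1, l2):
        for block, l1_state in l1.items():
            if block not in l2:
                return False      # L1 contents must be a subset of L2
            if l1_state == "M" and l2[block] not in ("M", "modified-but-stale"):
                return False      # modified in L1 must be marked modified in L2
        return True

    l1 = {"X": "M", "Y": "S"}
    print(inclusion_holds(l1, {"X": "modified-but-stale", "Y": "S", "Z": "S"}))  # True
    print(inclusion_holds(l1, {"Y": "S", "Z": "S"}))  # False: X is missing from L2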


Violations in Inclusion

The two caches (L1, L2) may choose to replace different blocks:
• Differences in reference history
  – set-associative first-level cache with LRU replacement
• Split higher-level caches
  – instruction and data blocks go into different caches at L1, but may collide in L2
• Differences in block size

But a common case works automatically:
• L1 direct-mapped, with fewer sets than L2, and the same block size


Enhancements required to Cache Protocol

• Propagate bus transactions from L2 to L1
• Propagate flushes and invalidations
• Propagate modified state from L1 to L2 on writes
• The L2 cache must be updated before a flush due to a bus transaction
  – write-through L1, or a modified-but-stale bit per block in L2
• Dual cache tags are less important: each cache acts as a filter for the other
• Explicitly maintain the inclusion property

Further Enhancements

Split bus transactions into request and response sub-transactions:
• Separate arbitration for each phase
• Other transactions may intervene
• Improves bandwidth dramatically
• Each response is matched to its request
• Buffering between the bus and the cache controllers

Use multiple buses (address and data separately):
• To separate the address and data portions of a transaction

Split Transaction Buses: Example

• Split-transaction buses separate the address and data portions of a transaction.

[Timing diagram, cycles 1-6: two transactions (a) and (b) pipelined across three buses. The address bus carries arbitration and then the address request for (a), then for (b), then for the next request; the snoop line carries the snoop response for (a) and later for (b), each a fixed delay after its address; the data bus carries arbitration and the data transfer for (a), then the data transfer for (b).]

• Multiple buses: every bus snoops a different portion of the memory. As the equation shows, one can increase the snoop bandwidth, at a cost:

    Snoop Rate = (Number of Buses × Bus Clock) / (Clocks per Snoop)
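Plugging illustrative numbers into the equation: the parameters below are assumptions chosen to resemble the Starfire system on the next page (4 address buses, an 83.33-MHz bus clock, one snoop per bus every 2 clocks, 64-byte lines); they are not taken from the slides.

    num_buses = 4
    bus_clock_hz = 83.33e6
    clocks_per_snoop = 2              # assumed: one address every other cycle

    snoop_rate = num_buses * bus_clock_hz / clocks_per_snoop
    print(f"{snoop_rate / 1e6:.1f} M snoops/s")          # ~166.7 M snoops/s

    line_bytes = 64                   # assumed coherence granularity
    print(f"{snoop_rate * line_bytes / 1e6:,.0f} MBps")  # ~10,666 MBps

With these assumed parameters the snoop bandwidth comes out near the flat 10,667-MBps snooping-capacity line in the Starfire chart that follows.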

Problems in scaling SMPs: Starfire

Sun StarFire uses 4 address buses. For 13 or fewer system boards, the maximum data capacity is limited by the crossbar; beyond 13, it is limited by the snoop bandwidth.

[Chart: bandwidth at an 83.3-MHz clock (MBps; bytes per clock on the second axis) versus the number of system boards (1-16). Three curves: memory bandwidth, snooping capacity (flat at 10,667 MBps), and data-crossbar capacity with random addresses. Up to about 13 boards the system is data-crossbar limited; beyond that it is snoop limited.]

Courtesy of Alan Charlesworth, "STARFIRE: Extending the SMP Envelope", IEEE Micro, Volume 18, Issue 1, Jan.-Feb. 1998, pp. 39-49.


Bandwidth Scaling: Sun Interconnects


Distributed Shared Memory Multiprocessors

[Figure: distributed memory (NUMA): processors P1 ... Pn, each with a cache and a local memory, connected by an interconnection network.]

Distributed memory (NUMA):
• Separate memory per processor
• Local or remote access via the memory controller
• One cache-coherence solution: non-cached pages
• Alternative: a directory that tracks the state of every block in every cache
  – which caches have copies of the block, dirty vs. clean, ...
• Info per memory block vs. per cache block?
  – PLUS: in memory => simpler protocol (centralized, one location)
  – MINUS: in memory => directory size is f(memory size) rather than f(cache size)
• Prevent the directory from becoming a bottleneck: distribute directory entries with the memory, each keeping track of which processors have copies of its blocks


Directory Protocol

• Similar to the snoopy protocol: three states
  – Shared: ≥ 1 processors have the data; memory is up to date
  – Uncached: no processor has it; not valid in any cache
  – Exclusive: 1 processor (the owner) has the data; memory is out of date
• In addition to the cache state, must track which processors have the data when it is in the shared state (usually a bit vector: 1 if the processor has a copy; see the sketch below)
• Keep it simple(r):
  – writes to non-exclusive data => write miss
  – the processor blocks until the access completes
  – assume messages are received and acted upon in the order sent
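A sketch of the bookkeeping this implies, with invented names: one directory entry per memory block, holding a state plus the sharer set (the bit vector of the slide).

    class DirectoryEntry:
        def __init__(self):
            self.state = "Uncached"    # Uncached / Shared / Exclusive
            self.sharers = set()       # processors holding a copy

        def read_miss(self, proc):
            if self.state == "Exclusive":
                owner = next(iter(self.sharers))
                print(f"fetch block from owner P{owner}; write back to memory")
            self.state = "Shared"      # memory is now up to date
            self.sharers.add(proc)

        def write_miss(self, proc):
            for p in self.sharers - {proc}:
                print(f"invalidate copy at P{p}")
            self.state, self.sharers = "Exclusive", {proc}

    entry = DirectoryEntry()
    entry.read_miss(0); entry.read_miss(2)   # block Shared by {P0, P2}
    entry.write_miss(1)                      # invalidates P0 and P2; P1 owns it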


Directory Protocol

• No bus, and we don't want to broadcast:
  – the interconnect is no longer a single arbitration point
  – all messages have explicit responses
• Terms: typically three processors are involved
  – Local node: where the request originates
  – Home node: where the memory location of the address resides
  – Remote node: has a copy of the cache block, whether exclusive or shared