
Page 1: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Dynamic Representation and Prefetching of Linked Data Structures
(or Introspection for Low-Level Data Prefetching)

Mark Whitney & John Kubiatowicz

ROC Retreat 2002

Page 2: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Memory Prefetching Problem

[Figure: on-chip data cache, prefetcher, and main memory. Cache-to-execution latency: 1-3 compute cycles; main-memory-to-cache latency: ~50 compute cycles.]

• Want data in cache before it is requested by the execution core

• But all the data will not fit

• How to prefetch data in a timely manner w/o wasting cache space?

Page 3: Mark Whitney & John Kubiatowicz ROC Retreat 2002

struct node_t {
    int datum1;
    char *datum2;
    struct node_t *next;
};

struct node_t *node;
...
for (node = head; node != NULL; node = node->next) {
    if (key1 == node->datum1 &&
        strcmp(key2, node->datum2) == 0)
        break;
}

[Figure: three list nodes in memory]
node at 0x1000:  datum1 = 1,  datum2 = "ford",      next = 0x2500
node at 0x2500:  datum1 = 2,  datum2 = "trillian",  next = 0x3000
node at 0x3000:  datum1 = 3,  datum2 = "marvin"

Example linked list and its traversal

• Address accesses: 1000, 1004, 1008, 2500, 2504, 2508, 3000, ... The structure that is obvious in the source code is not apparent in this simple list of memory locations.
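As a purely illustrative sketch (not from the original slides), the following self-contained C program builds the three-node list from the figure and prints the addresses the loop body actually touches. The real addresses come from malloc rather than being 0x1000, 0x2500, and 0x3000, but the point is the same: the list structure is not visible in the flat address stream.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node_t {
    int datum1;
    char *datum2;
    struct node_t *next;
};

static struct node_t *make_node(int d1, char *d2, struct node_t *next)
{
    struct node_t *n = malloc(sizeof *n);   /* allocation check omitted in this sketch */
    n->datum1 = d1;
    n->datum2 = d2;
    n->next   = next;
    return n;
}

int main(void)
{
    /* Build the list from the figure: (1,"ford") -> (2,"trillian") -> (3,"marvin") */
    struct node_t *head =
        make_node(1, "ford",
            make_node(2, "trillian",
                make_node(3, "marvin", NULL)));

    int key1 = 3;
    const char *key2 = "marvin";

    for (struct node_t *node = head; node != NULL; node = node->next) {
        /* Print the addresses the loop body reads, in order. */
        printf("%p %p %p\n",
               (void *)&node->datum1,
               (void *)&node->datum2,
               (void *)&node->next);
        if (key1 == node->datum1 && strcmp(key2, node->datum2) == 0)
            break;
    }
    return 0;
}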

Page 4: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Prefetching Linked Data Structures

• In the Olden benchmark suite, pointer loads contributed 50-90% of data cache misses

• Linked data structure elements can occupy non-contiguous portions of memory

• Prefetching sequential blocks of memory is therefore not effective

• Solution: prefetch the elements of the data structure sequentially
  – Use information from the element just fetched to find the next element to prefetch

• Split the prefetching task into two parts:
  – Pick out the structure of the linked data elements and build a representation of the data structure
  – Prefetch the data structure effectively using this representation

Page 5: Mark Whitney & John Kubiatowicz ROC Retreat 2002

for (node = head; node != NULL; node = node->next) {
    if (key1 == node->datum1 && strcmp(key2, node->datum2) == 0)
        break;
}

loop:    lw  $10,0($16)
         lw  $4,4($16)
         bne $10,$6,endif
         jal strcmp
         bne $7,$0,endif
         j   outloop
endif:   lw  $16,8($16)
         bne $16,$0,loop
outloop:

lw $16,8($16)
lw $10,0($16)
lw $4,4($16)
lw $16,8($16)
lw $10,0($16)
lw $4,4($16)
lw $16,8($16)
lw $10,0($16)
lw $4,4($16)

[Figure panels: "Loop in C", "Compiled loop", "Dynamic memory accesses", alongside the linked-node diagram (root pointer, datum1, datum2, field offsets) that we would like to recover. How do we get here?]

Linked Data Structure Manifestation

Page 6: Mark Whitney & John Kubiatowicz ROC Retreat 2002

• Problem: there can be up to 50 unrelated load instructions for every load instruction relevant to a particular data structure.

•Solution: extract load dependencies through register use analysis and analysis of store memory addresses.

Register use analysis: keep a table mapping each register to the address of the load instruction that produced its current value.

0x118  lw $16,8($16)    table now holds: $16 -> 0x118
0x100  lw $3,0($16)     base register $16 hits the table (producer 0x118):
                        found dependency between loads 0x118 & 0x100;
                        table now holds: $16 -> 0x118, $3 -> 0x100

Other loads in the stream (e.g. 0x124 lw $5,64($10), 0x12c lw $2,0($11)) update the table in the same way.
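An illustrative software sketch of the register use analysis (not the authors' hardware design): a table maps each register to the PC of the load that last wrote it, and each new load looks up its base register in that table to discover the producing load, which yields the dependency arc and the offset.

#include <stdio.h>

#define NUM_REGS 32

/* One dynamic load: program counter, destination register,
 * base register, and immediate offset (e.g. lw $16,8($16)). */
struct load {
    unsigned pc;
    int dest_reg;
    int base_reg;
    int offset;
};

/* producer[r] = PC of the load that last produced register r (0 if none). */
static unsigned producer[NUM_REGS];

/* Process one load from the dynamic instruction stream. */
static void observe_load(struct load ld)
{
    unsigned dep = producer[ld.base_reg];
    if (dep != 0)
        printf("dependency: load 0x%x -> load 0x%x (offset %d)\n",
               dep, ld.pc, ld.offset);
    /* The load's result now lives in its destination register. */
    producer[ld.dest_reg] = ld.pc;
}

int main(void)
{
    /* The two loads from the slide's example:
     * 0x118: lw $16,8($16)   and   0x100: lw $3,0($16) */
    struct load stream[] = {
        { 0x118, 16, 16, 8 },
        { 0x100,  3, 16, 0 },
    };
    for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
        observe_load(stream[i]);
    return 0;
}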

Page 7: Mark Whitney & John Kubiatowicz ROC Retreat 2002

•Spill table: same idea, keeps track of loads that produced values which are temporarily saved to memory

• Keeping another table of load instruction addresses and the values they produce gives us a chain of dependent loads:

lw $16,8($16)
lw $10,0($16)
lw $4,4($16)
lw $16,8($16)
lw $10,0($16)
lw $4,4($16)
lw $16,8($16)
lw $10,0($16)
lw $4,4($16)
...

offsets: 8, 0, 4, 8, ...

[Figure: data flow graph of dependent loads, with dependency arcs linking each load to the earlier load that produced its base address]
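The spill table can be sketched in the same illustrative spirit, assuming a simple table indexed by store address (the PC 0x140 and the stack address below are made up for the example): when a register whose value came from a load is stored to memory, the store address is recorded together with that load's PC, so a later reload from the same address re-establishes the dependency.

#include <stdio.h>

#define NUM_REGS 32
#define TABLE    64   /* tiny direct-mapped spill table for the sketch */

static unsigned producer[NUM_REGS];   /* register -> PC of producing load */
static unsigned spill_addr[TABLE];    /* spilled memory address           */
static unsigned spill_pc[TABLE];      /* PC of load that produced value   */

static unsigned slot(unsigned addr) { return (addr / 4) % TABLE; }

/* sw src_reg, addr : the value being spilled was produced by some load. */
static void observe_store(unsigned addr, int src_reg)
{
    spill_addr[slot(addr)] = addr;
    spill_pc[slot(addr)]   = producer[src_reg];
}

/* lw dest_reg, addr : a reload from a spill slot re-establishes the link. */
static void observe_load(unsigned pc, int dest_reg, unsigned addr)
{
    unsigned s = slot(addr);
    if (spill_addr[s] == addr && spill_pc[s] != 0)
        printf("dependency recovered across spill: load 0x%x -> load 0x%x\n",
               spill_pc[s], pc);
    producer[dest_reg] = pc;   /* one possible policy: the reload becomes the producer */
}

int main(void)
{
    producer[16] = 0x118;                 /* $16 was produced by load 0x118      */
    observe_store(0x7fff0010, 16);        /* $16 spilled to a (made-up) stack slot */
    observe_load(0x140, 16, 0x7fff0010);  /* later reload from the same slot      */
    return 0;
}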

Page 8: Mark Whitney & John Kubiatowicz ROC Retreat 2002

• Chain of dependencies is still linear in the number of memory accesses
  – Apply compression to the dependency chain

• SEQUITUR compression algorithm
  – Produces a hierarchical grammar:

offsets: 8, 0, 4, 8, ...

rule1 (repeated 3 times)
rule1 = offset:8 lds back:3   offset:0 lds back:1   offset:4 lds back:2

[Figure: compression of the data flow graph; the nine dynamic loads (lw $16,8($16), lw $10,0($16), lw $4,4($16), three times over) compress to three applications of rule1]
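SEQUITUR itself builds the grammar incrementally by replacing repeated digrams with rules; the sketch below is a much simpler stand-in that only looks for a single repeating unit in the chain, which is enough to reproduce the rule1 result for this example.

#include <stdio.h>

/* One entry in the dependent-load chain: the immediate offset of the load
 * and how many loads back its base-address producer is. */
struct entry { int offset; int back; };

static int same(struct entry a, struct entry b)
{
    return a.offset == b.offset && a.back == b.back;
}

/* Find the shortest period p such that the whole chain is the first p
 * entries repeated; returns n if no repetition is found. */
static int shortest_period(const struct entry *c, int n)
{
    for (int p = 1; p <= n; p++) {
        if (n % p) continue;
        int ok = 1;
        for (int i = p; i < n && ok; i++)
            ok = same(c[i], c[i - p]);
        if (ok) return p;
    }
    return n;
}

int main(void)
{
    /* The chain from the example: (offset 8, back 3)(offset 0, back 1)
     * (offset 4, back 2), three times over. */
    struct entry chain[] = {
        {8,3},{0,1},{4,2}, {8,3},{0,1},{4,2}, {8,3},{0,1},{4,2},
    };
    int n = sizeof chain / sizeof chain[0];
    int p = shortest_period(chain, n);

    printf("rule1 repeated %d times\nrule1 =", n / p);
    for (int i = 0; i < p; i++)
        printf(" offset:%d lds back:%d", chain[i].offset, chain[i].back);
    printf("\n");
    return 0;
}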

Page 9: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Data Structure found in Benchmark

while (list != NULL) {
    prefcount++;
    i = village->hosp.free_personnel;
    p = list->patient;              /* This is a bad load */
    if (i > 0) {
        village->hosp.free_personnel--;
        p->time_left = 3;
        p->time += p->time_left;
        removeList(&(village->hosp.waiting), p);
        addList(&(village->hosp.assess), p);
    }
    else {
        p->time++;                  /* so is this */
    }
    list = list->forward;
}

Loop from the “health” benchmark in the Olden suite

Representation generated:

rule1 (repeated 10 times)
rule1 = offset:8 lds back:3   offset:0 lds back:1   offset:4 lds back:1

Page 10: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Prefetching

• Dependent load chains with compression provide compact representations of linked data structures.

• Representations that compress well can be cached and later used to prefetch the data structure.

• To prefetch, unroll the compressed representation, grabbing the root pointer value from the stream of incoming load instructions:

rule1: offset:8 lds back:3   initiating load instruction produces 1000
       offset:0 lds back:1   fetch data at memory location 0x1000 (gives 1)
       offset:4 lds back:2   fetch data at memory location 0x1004 (gives "ford")
rule1: offset:8 lds back:3   fetch data at memory location 0x1008 (gives 2500)
       offset:0 lds back:1   fetch data at memory location 0x2500 (gives 2)
       offset:4 lds back:2   fetch data at memory location 0x2504 (gives "trillian")
...
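To make the unrolling concrete, here is an illustrative sketch (not the hardware) that walks the rule1 entries against a tiny simulated memory holding the example list. The first rule entry corresponds to the initiating load itself (value 1000), and every later entry computes its address from the value produced "back" positions earlier in the chain. The values 0x5000, 0x5100, 0x5200 stand in for the (unknown) addresses of the datum2 strings.

#include <stdio.h>

struct entry { int offset; int back; };   /* one dependent-load descriptor */

/* Tiny simulated memory holding the example list from the earlier slide. */
struct cell { unsigned addr; unsigned value; };
static struct cell memory[] = {
    { 0x1000, 1 }, { 0x1004, 0x5000 }, { 0x1008, 0x2500 },
    { 0x2500, 2 }, { 0x2504, 0x5100 }, { 0x2508, 0x3000 },
    { 0x3000, 3 }, { 0x3004, 0x5200 }, { 0x3008, 0      },
};

static unsigned read_mem(unsigned addr)
{
    for (unsigned i = 0; i < sizeof memory / sizeof memory[0]; i++)
        if (memory[i].addr == addr) return memory[i].value;
    return 0;
}

int main(void)
{
    /* rule1 = offset:8 back:3, offset:0 back:1, offset:4 back:2, unrolled 3 times. */
    struct entry rule1[] = { {8,3}, {0,1}, {4,2} };
    unsigned values[64];
    int pos = 0;

    values[pos++] = 0x1000;   /* value produced by the initiating load */

    for (int rep = 0; rep < 3; rep++) {
        for (int i = 0; i < 3; i++) {
            if (rep == 0 && i == 0) continue;  /* that entry was the initiating load */
            struct entry e = rule1[i];
            unsigned addr  = values[pos - e.back] + e.offset;
            unsigned value = read_mem(addr);    /* issue the prefetch */
            printf("prefetch 0x%x (gives 0x%x)\n", addr, value);
            values[pos++] = value;
        }
    }
    return 0;
}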

Page 11: Mark Whitney & John Kubiatowicz ROC Retreat 2002

0x118  lw $16,8($16)
0x100  lw $10,0($16)
0x104  lw $4,4($16)
0x118  lw $16,8($16)
0x100  lw $10,0($16)
0x104  lw $4,4($16)
0x118  lw $16,8($16)
0x100  lw $10,0($16)
0x104  lw $4,4($16)

Representation cache (indexed by the initiating load address 0x118):
rule1 (repeated 3 times)
rule1 = offset:8 lds back:3   offset:0 lds back:1   offset:4 lds back:2

0x118  lw $16,8($16)  returns value 4000

prefetch mem location 0x4000
prefetch mem location 0x4004
prefetch mem location 0x4008
prefetch mem location 0x4100
...

When the load instruction at address 0x118 is executed again, initiate a prefetch, using the result of that initiating load as the first memory location to prefetch.

Representations are cached on (indexed by) the address of the first load instruction in the chain.

Prefetch flow
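An illustrative sketch of the representation cache lookup (the structure layout and the on_load_commit hook are assumptions, not the actual hardware interface): the cache is indexed by the PC of the initiating load, and whenever that load commits again its freshly produced value becomes the root pointer for a new prefetch. The start_prefetch function stands in for the unrolling shown above.

#include <stdio.h>

struct entry { int offset; int back; };

/* A cached representation: the PC of the initiating load, the rule body,
 * and how many times the rule repeats. */
struct representation {
    unsigned initiating_pc;
    struct entry rule[8];
    int rule_len;
    int repeats;
};

/* One-entry "cache" for the sketch; real hardware would hold several. */
static struct representation rep_cache = {
    .initiating_pc = 0x118,
    .rule = { {8,3}, {0,1}, {4,2} },
    .rule_len = 3,
    .repeats = 3,
};

/* Stand-in for the unrolling logic sketched earlier. */
static void start_prefetch(const struct representation *r, unsigned root)
{
    printf("begin prefetch: rule of %d entries, %d repeats, root 0x%x\n",
           r->rule_len, r->repeats, root);
}

/* Called for every committed load: its PC and the value it returned. */
static void on_load_commit(unsigned pc, unsigned value)
{
    if (pc == rep_cache.initiating_pc)
        start_prefetch(&rep_cache, value);
}

int main(void)
{
    /* 0x118 lw $16,8($16) executes again and returns 4000. */
    on_load_commit(0x118, 0x4000);
    return 0;
}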

Page 12: Mark Whitney & John Kubiatowicz ROC Retreat 2002

“Introspective” System Architecture for Prefetcher

Representation building and prefetching are rather modular tasks, so they could be split up across different processors.

[Figure: compute, build, and prefetch processors. The compute processor's load stream (0x118 lw $16,8($16), 0x100 lw $3,0($16), ...) feeds the build processor; the build processor passes the representation (rule1 repeated 3 times, rule1 = offset:8 lds back:3  offset:0 lds back:1  offset:4 lds back:2, start value = 1000) to the prefetch processor; the prefetch processor issues the prefetches (prefetch 0x1000, prefetch 0x1004, ...) and the prefetch results (1, "ford", ...) come back.]

Page 13: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Preliminary results: correct, timely prefetches?

Most prefetches are later accessed by regular instructions. Great!

Unfortunately, prefetched data is usually already there. Too bad!

Accurate vs. inaccurate prefetches to caches in health benchmark

[Chart: number of prefetches (log scale, 1 to 1,000,000) to the L1 and L2 caches; series: accurate prefetches vs. inaccurate prefetches]

Late vs. On-time prefetches to caches in health benchmark

[Chart: number of prefetches (log scale, 1 to 1,000,000) to the L1 and L2 caches; series: on-time prefetches vs. late prefetches]

Page 14: Mark Whitney & John Kubiatowicz ROC Retreat 2002

Representations work, prefetches late...

• Representation phase of the algorithm seems to work reasonably well for linked lists.

• Works less well on trees & arrays

• Provides no performance gains, due to not enough prefetching and to late prefetches
  – More work is being done to start prefetching farther along in the data structure