Distributed Computations: MapReduce/Dryad (M/R slides adapted from those of Jeff Dean; Dryad slides adapted from those of Michael Isard)



Page 1:

Distributed Computations: MapReduce/Dryad

M/R slides adapted from those of Jeff Dean

Dryad slides adapted from those of Michael Isard

Page 2:

What we’ve learnt so far

• Basic distributed systems concepts
  – Consistency (sequential, eventual)
  – Concurrency
  – Fault tolerance (recoverability, availability)

• What are distributed systems good for?
  – Better fault tolerance
    • Better security?
  – Increased storage/serving capacity
    • Storage systems, email clusters
  – Parallel (distributed) computation (Today’s topic)

Page 3:

Why distributed computations?

• How long to sort 1 TB on one computer?
  – One computer can read ~60 MB/s from disk
  – Takes about a day!

• Google indexes 100 billion+ web pages
  – 100 * 10^9 pages * 20 KB/page = 2 PB

• The Large Hadron Collider is expected to produce 15 PB every year!

Page 4:

Solution: use many nodes!

• Cluster computing
  – Hundreds or thousands of PCs connected by high-speed LANs

• Grid computing
  – Hundreds of supercomputers connected by high-speed networks

• 1000 nodes potentially give a 1000x speedup

Page 5:

Distributed computations are difficult to program

• Sending data to/from nodes

• Coordinating among nodes

• Recovering from node failure

• Optimizing for locality

• Debugging

These challenges are the same for every problem.

Page 6:

MapReduce

• A programming model for large-scale computations
  – Process large amounts of input, produce output
  – No side-effects or persistent state (unlike a file system)

• MapReduce is implemented as a runtime library:
  – automatic parallelization
  – load balancing
  – locality optimization
  – handling of machine failures

Page 7:

MapReduce design

• Input data is partitioned into M splits
• Map: extract information from each split
  – Each Map task produces R partitions
• Shuffle and sort
  – Bring the M partitions with the same index to the same reducer
• Reduce: aggregate, summarize, filter or transform
• Output is in R result files

Page 8:

More specifically…

• Programmer specifies two methods:
  – map(k, v) → <k', v'>*
  – reduce(k', <v'>*) → <k', v'>*

• All v' with same k' are reduced together, in order.

• Usually also specify:
  – partition(k', total partitions) → partition for k'
    • often a simple hash of the key
    • allows reduce operations for different k' to be parallelized

Page 9:

Example: Count word frequencies in web pages

• Input is files with one doc per record

• Map parses documents into words
  – key = document URL
  – value = document contents

• Example input record:
  “doc1”, “to be or not to be”

• Output of map:
  “to”, “1”
  “be”, “1”
  “or”, “1”
  …

Page 10:

Example: word frequencies

• Reduce: computes sum for a key
  key = “be”,  values = “1”, “1”  →  “2”
  key = “not”, values = “1”       →  “1”
  key = “or”,  values = “1”       →  “1”
  key = “to”,  values = “1”, “1”  →  “2”

• Output of reduce saved:
  “be”, “2”
  “not”, “1”
  “or”, “1”
  “to”, “2”

Page 11:

Example: Pseudo-code

Map(String input_key, String input_value):
    // input_key: document name
    // input_value: document contents
    for each word w in input_value:
        EmitIntermediate(w, "1");

Reduce(String key, Iterator intermediate_values):
    // key: a word, same for input and output
    // intermediate_values: a list of counts
    int result = 0;
    for each v in intermediate_values:
        result += ParseInt(v);
    Emit(AsString(result));
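To make the control flow concrete, here is a minimal single-machine sketch in Python that mimics the map → shuffle → reduce flow of this word-count job. It is illustrative only: the names (map_fn, reduce_fn, run_job) and the in-memory "partitions" are stand-ins, not the MapReduce library's API.

from collections import defaultdict

def map_fn(doc_name, contents):
    # Emit ("word", "1") for every word, mirroring EmitIntermediate(w, "1").
    for w in contents.split():
        yield (w, "1")

def reduce_fn(key, values):
    # Sum the counts for one key, mirroring Emit(AsString(result)).
    return str(sum(int(v) for v in values))

def run_job(docs, R=2):
    # Shuffle: hash each intermediate key into one of R partitions,
    # grouping all values for the same key together.
    partitions = [defaultdict(list) for _ in range(R)]
    for doc_name, contents in docs.items():
        for k, v in map_fn(doc_name, contents):
            partitions[hash(k) % R][k].append(v)
    # Reduce: each partition is processed independently (in parallel in
    # real MapReduce); keys within a partition are handled in sorted order.
    return {k: reduce_fn(k, vs)
            for part in partitions
            for k, vs in sorted(part.items())}

print(run_job({"doc1": "to be or not to be", "doc234": "do not be silly"}))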

Page 12:

MapReduce is widely applicable

• Distributed grep

• Document clustering

• Web link graph reversal

• Detecting duplicate web pages

• …

Page 13:

MapReduce implementation

• Input data is partitioned into M splits
• Map: extract information from each split
  – Each Map task produces R partitions
• Shuffle and sort
  – Bring the M partitions with the same index to the same reducer
• Reduce: aggregate, summarize, filter or transform
• Output is in R result files, stored in a replicated, distributed file system (GFS).

Page 14:

MapReduce scheduling

• One master, many workers
  – Input data is split into M map tasks (e.g. 64 MB)
  – R reduce tasks
  – Tasks are assigned to workers dynamically
  – E.g. M=200,000; R=4,000; workers=2,000

Page 15:

MapReduce scheduling

• Master assigns a map task to a free worker
  – Prefers “close-by” workers when assigning a task (see the sketch below)
  – Worker reads task input (often from local disk!)
  – Worker produces R local files containing intermediate k/v pairs

• Master assigns a reduce task to a free worker
  – Worker reads intermediate k/v pairs from map workers
  – Worker sorts & applies user’s Reduce op to produce the output
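As a rough illustration of the locality preference above, here is a hedged sketch of a greedy assignment policy. The data structures (a map from each split to the workers that store it locally) are hypothetical and far simpler than the real scheduler.

def assign_map_tasks(splits, free_workers):
    # splits: split_id -> list of workers holding that split locally (hypothetical).
    # Greedy policy: prefer a "close-by" worker, otherwise any free worker.
    assignments = {}
    free = list(free_workers)
    for split_id, local_workers in splits.items():
        if not free:
            break  # remaining splits wait until a worker frees up
        candidates = [w for w in local_workers if w in free] or free
        worker = candidates[0]
        assignments[split_id] = worker
        free.remove(worker)
    return assignments

print(assign_map_tasks({"split0": ["w1"], "split1": ["w3"], "split2": ["w9"]},
                       ["w1", "w2", "w3"]))
# -> {'split0': 'w1', 'split1': 'w3', 'split2': 'w2'}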

Page 16:

Parallel MapReduce

[Figure: parallel MapReduce data flow. A master coordinates the job; input data is split across Map workers; each Map’s output is shuffled to the Reduce workers, which write the partitioned output.]

Page 17:

WordCount Internals

• Input data is split into M map jobs
• Each map job generates R local partitions

[Figure: “doc1”, “to be or not to be” is mapped to (“to”, “1”), (“be”, “1”), (“or”, “1”), (“not”, “1”), (“to”, “1”); Hash(key) % R assigns each pair to one of the R local partitions, e.g. {“to”: “1”, “1”}, {“be”: “1”}, {“not”: “1”; “or”: “1”}. “doc234”, “do not be silly” is mapped to (“do”, “1”), (“not”, “1”), (“be”, “1”), (“silly”, “1”) and partitioned the same way.]

Page 18:

WordCount Internals

• Shuffle brings same partitions to same reducer

[Figure: the local partitions with the same index from each map task meet at one reducer, e.g. one reducer receives “to”: “1”, “1” and “do”: “1”; another receives “be”: “1”, “1”; another receives “not”: “1”, “1” and “or”: “1”.]

Page 19:

WordCount Internals

• Reduce aggregates sorted key/value pairs

[Figure: each reducer sums the values for its keys: “to”: “1”, “1” → “to”, “2”; “do”: “1” → “do”, “1”; “be”: “1”, “1” → “be”, “2”; “not”: “1”, “1” → “not”, “2”; “or”: “1” → “or”, “1”.]

Page 20:

The importance of the partition function

• partition(k’, total partitions) → partition for k’
  – e.g. hash(k’) % R

• What is the partition function for sort? (see the sketch below)
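A small sketch of both partitioners: the default hash partitioner from the slide, and, as one plausible answer to the quiz question, a range partitioner of the kind a distributed sort needs so that concatenating the R output files yields globally sorted data. The split points below are made up for illustration.

def hash_partition(key, R):
    # Default partitioner: spread keys evenly, ignoring their order.
    return hash(key) % R

def range_partition(key, split_points):
    # Sort partitioner: reducer i receives every key below split_points[i],
    # so the reducer outputs are already in global key order. In practice
    # the split points are chosen by sampling the input keys.
    for i, boundary in enumerate(split_points):
        if key < boundary:
            return i
    return len(split_points)

print(range_partition("melon", ["d", "m", "t"]))  # -> 2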

Page 21:

Load Balance and Pipelining

• Fine-granularity tasks: many more map tasks than machines
  – Minimizes time for fault recovery
  – Can pipeline shuffling with map execution
  – Better dynamic load balancing

• Often use 200,000 map / 5,000 reduce tasks with 2,000 machines

Page 22:

Fault tolerance via re-execution

On worker failure (see the sketch below):
• Re-execute completed and in-progress map tasks
• Re-execute in-progress reduce tasks
• Task completion is committed through the master

On master failure:
• State is checkpointed to GFS: a new master recovers & continues

Page 23:

Avoid stragglers using backup tasks

• Slow workers drastically increase completion time
  – Other jobs consuming resources on the machine
  – Bad disks with soft errors transfer data very slowly
  – Weird things: processor caches disabled (!!)
  – An unusually large reduce partition

• Solution: near the end of the phase, spawn backup copies of the remaining tasks
  – Whichever copy finishes first “wins” (see the sketch below)

• Effect: dramatically shortens job completion time
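A sketch of the backup-task idea using Python threads: launch a duplicate of a task near the end of the phase and take whichever copy finishes first. The straggling worker is simulated with a random sleep; nothing here reflects Google's actual implementation.

import concurrent.futures
import random
import time

def run_with_backup(task, *args):
    # Submit the task twice; the first copy to finish "wins". Duplicates are
    # harmless because tasks are deterministic and have no side effects.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(task, *args) for _ in range(2)]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # don't wait for the slower duplicate
    return next(iter(done)).result()

def straggling_sort(data):
    time.sleep(random.choice([0.01, 1.0]))  # sometimes this "worker" is slow
    return sorted(data)

print(run_with_backup(straggling_sort, [3, 1, 2]))  # -> [1, 2, 3]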

Page 24:

MapReduce Sort Performance

• 1 TB of data (100-byte records) to be sorted

• 1,700 machines

• M = 15,000, R = 4,000

Page 25:

MapReduce Sort Performance

When can shuffle start?

When can reduce start?

Page 26:

Dryad

Slides adapted from those of Yuan Yu and Michael Isard

Page 27:

Dryad

• Similar goals as MapReduce
  – focus on throughput, not latency
  – automatic management of scheduling, distribution, fault tolerance

• Computations expressed as a graph
  – Vertices are computations
  – Edges are communication channels
  – Each vertex has several input and output edges

Page 28:

WordCount in Dryad

[Figure: the word-count graph is built from Count(Word:n), Distribute(Word:n), MergeSort(Word:n), and Count(Word:n) vertices: per-partition counts are hash-distributed by word, merge-sorted, and counted again to produce the totals.]

Page 29:

Why use a dataflow graph?

• Many programs can be represented as a distributed dataflow graph
  – The programmer may not have to know this

• “SQL-like” queries: LINQ

• Dryad will run them for you

Page 30:

Job = Directed Acyclic Graph

[Figure: a Dryad job is a DAG of processing vertices connected by channels (file, pipe, or shared memory); the graph reads from its inputs and writes to its outputs. See the sketch below.]
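A minimal sketch of the "job = DAG" idea: named vertices are plain Python functions, and edges say whose output feeds whom. The in-memory dictionary stands in for Dryad's file/pipe/shared-memory channels, and a vertex runs once all of its inputs are ready, matching the scheduling rule on the next slide.

from graphlib import TopologicalSorter  # Python 3.9+

def run_dag(vertices, edges, inputs):
    # vertices: name -> function(list_of_input_values) -> output value
    # edges:    (src, dst) pairs; inputs: name -> value of a graph input
    preds = {v: [] for v in vertices}
    for src, dst in edges:
        preds[dst].append(src)
    outputs = dict(inputs)
    # Run vertices in dependency order: a vertex runs once its inputs exist.
    for v in TopologicalSorter({v: set(p) for v, p in preds.items()}).static_order():
        if v in outputs:        # a graph input, nothing to compute
            continue
        outputs[v] = vertices[v]([outputs[p] for p in preds[v]])
    return outputs

vertices = {
    "parse": lambda ins: [w for text in ins for w in text.split()],
    "count": lambda ins: {w: ins[0].count(w) for w in set(ins[0])},
}
edges = [("in0", "parse"), ("in1", "parse"), ("parse", "count")]
print(run_dag(vertices, edges, {"in0": "to be or not", "in1": "to be"})["count"])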

Page 31:

Scheduling at the JM (job manager)

• General scheduling rules:
  – A vertex can run anywhere once all of its inputs are ready
    • Prefer executing a vertex near its inputs
  – Fault tolerance (see the sketch below):
    • If A fails, run it again
    • If A’s inputs are gone, run the upstream vertices again (recursively)
    • If A is slow, run another copy elsewhere and use the output from whichever finishes first
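The fault-tolerance rules can likewise be sketched as a recursive walk backwards through the graph: re-run the failed vertex, and re-run any upstream vertex whose output is no longer available. The preds and available maps are hypothetical bookkeeping, not Dryad's data structures.

def vertices_to_rerun(failed, available, preds):
    # available: vertex -> True if its output (channel data) still exists.
    # preds:     vertex -> list of upstream vertices feeding it.
    rerun, stack = set(), [failed]
    while stack:
        v = stack.pop()
        if v in rerun:
            continue
        rerun.add(v)
        for p in preds.get(v, []):
            if not available.get(p, False):  # input gone: redo its producer too
                stack.append(p)
    return rerun

print(vertices_to_rerun("A", {"B": False, "C": True}, {"A": ["B", "C"], "B": []}))
# B's output was lost, so both A and B are re-run: {'A', 'B'}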

Page 32:

Advantages of DAG over MapReduce

• Big jobs are more efficient with Dryad
  – MapReduce: a big job runs as one or more MR stages
    • the reducers of each stage write to replicated storage
    • output of a reduce stage: 2 network copies, 3 disks
  – Dryad: each job is represented as a single DAG
    • intermediate vertices write to local files

Page 33:

Advantages of DAG over MapReduce

• Dryad provides explicit join
  – MapReduce: a mapper (or reducer) needs to read from shared table(s) as a substitute for join
  – Dryad: an explicit join combines inputs of different types
  – E.g. the most expensive product bought by each customer, or a PageRank computation

Page 34:

DAG optimizations: merge tree

Page 35:

DAG optimizations: merge tree

Page 36:

Dryad Optimizations: data-dependent re-partitioning

[Figure: starting from randomly partitioned inputs, a sampling stage estimates a histogram of the keys, which is then used to distribute the records into equal-sized ranges. See the sketch below.]
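A sketch of the data-dependent re-partitioning step: sample the randomly partitioned inputs to estimate the key distribution, then pick split points that give roughly equal-sized ranges. The sample size and the integer keys are made up for illustration.

import random

def choose_range_boundaries(partitions, R, sample_size=1000):
    # Sample each input partition to estimate the key histogram, then take
    # R-1 evenly spaced quantiles of the sample as range boundaries.
    sample = []
    for part in partitions:
        sample.extend(random.sample(part, min(sample_size, len(part))))
    sample.sort()
    step = len(sample) / R
    return [sample[int(i * step)] for i in range(1, R)]

def range_of(key, boundaries):
    # Destination range = number of boundaries the key is not below.
    return sum(1 for b in boundaries if key >= b)

parts = [[random.randint(0, 10**6) for _ in range(5000)] for _ in range(4)]
bounds = choose_range_boundaries(parts, R=3)
print(bounds, range_of(417, bounds))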

Page 37:

Dryad example: the usefulness of join

• SkyServer query: a 3-way join to find the gravitational lens effect

• Table U: (objId, color), 11.8 GB
• Table N: (objId, neighborId), 41.8 GB

• Find neighboring stars with similar colors (see the sketch below):
  – Join U and N to find T = (U.color, N.neighborID) where U.objID = N.objID
  – Join U and T to find U.objID where U.objID = T.neighborID and U.color ≈ T.color
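To make the two joins concrete, here is a tiny in-memory rendering in Python with made-up rows (the real U and N tables are 11.8 GB and 41.8 GB and are joined by the Dryad plan on the next slides). Treating color as a single number is a simplification for illustration.

def skyserver(u_rows, n_rows, d=0.1):
    # U: (objId, color); N: (objId, neighborId).
    color = {objid: c for objid, c in u_rows}
    # Join 1: T = (U.color, N.neighborId) where U.objId = N.objId
    t_rows = [(color[objid], nbr) for objid, nbr in n_rows if objid in color]
    # Join 2: keep U.objId where U.objId = T.neighborId and the colors are close
    return [nbr for c, nbr in t_rows if nbr in color and abs(color[nbr] - c) < d]

u = [(1, 0.30), (2, 0.32), (3, 0.90)]
n = [(1, 2), (1, 3), (2, 1)]
print(skyserver(u, n))  # -> [2, 1]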

Page 38:

SkyServer query

[Figure: the query-plan DAG for the first join, built from the input tables U and N and vertex stages X (n), D, M (4n), S (4n), Y, H.]

u: objid, color
n: objid, neighborobjid

[partition by objid]

select u.color, n.neighborobjid
from u join n
where u.objid = n.objid

Page 39:

[Figure: the same plan annotated for the second join.]

(u.color, n.neighborobjid)

[re-partition by n.neighborobjid]
[order by n.neighborobjid]
[distinct]
[merge outputs]

select u.objid
from u join <temp>
where u.objid = <temp>.neighborobjid
  and |u.color - <temp>.color| < d

Page 40:

Page 41:

Another example: how Dryad optimizes the DAG automatically

• Example application: compute a query histogram

• Input: log file (n partitions)

• Extract queries from the log partitions

• Re-partition by hash of the query (k buckets)

• Compute the histogram within each bucket (see the sketch below)
Page 42:

Naïve histogram topology

[Figure: n Q vertices feed k R vertices. Each Q is a pipeline of P, S, C, D vertices; each R is MS followed by C.]

P  parse lines
D  hash distribute
S  quicksort
C  count occurrences
MS merge sort

Page 43:

Efficient histogram topology

[Figure: n Q' vertices feed T vertices, which feed the k R vertices. Each Q' is M►P►S►C; each T is MS►C►D; each R is MS►C.]

P  parse lines
D  hash distribute
S  quicksort
C  count occurrences
MS merge sort
M  non-deterministic merge

Page 44:

[Figure: the run-time refinement of the histogram graph. Pipelines: Q' = M►P►S►C, T = MS►C►D, R = MS►C. Legend: P parse lines, D hash distribute, S quicksort, C count occurrences, MS merge sort, M non-deterministic merge.]

Page 45:

[Figure: further run-time refinement of the same graph (same pipelines and legend as above).]

Page 46:

[Figure: further run-time refinement of the same graph (same pipelines and legend as above).]

Page 47:

[Figure: further run-time refinement of the same graph (same pipelines and legend as above).]

Page 48:

[Figure: further run-time refinement of the same graph (same pipelines and legend as above).]

Page 49:

[Figure: further run-time refinement of the same graph (same pipelines and legend as above).]

Page 50:

Final histogram refinement

[Figure: the final topology with a Q' stage, 450 R vertices, and 217 T vertices; other labeled counts are 450, 10,405, and 99,713, and the labeled data volumes are 10.2 TB, 154 GB, 118 GB, and 33.4 GB.]

• 1,800 computers
• 43,171 vertices
• 11,072 processes
• 11.5 minutes