Large-scale Graph Analysis in Data-Centric Models like MapReduce
Ananth Grama
Center for Science of Information and Dept. of Computer Science,
Purdue University
Outline
Motivating Application(s)
Analyses of web-scale graph-structured datasets
Graph algorithms as (series of) matrix-vector products
mat-vec implementations in MapReduce
Performance Considerations
Asynchronous algorithms through Relaxed Synchronization
Speculative parallelism through TransMR (transactional MapReduce)
Locking techniques for efficient distributed transactions
Future Work
Large-scale graph analysis
Dataset characteristics
Massive distributed graph-structured datasets
Graphs with billions of nodes, running into petabytes of storage (e.g., web graphs and social networks)
Application characteristics
Most graph algorithms can be modeled as a series of matrix-vector products (mat-vecs), e.g., PageRank, shortest-path problems, etc.
Each mat-vec requires distributed execution
Algorithmic efficiency achieved through asynchrony and amorphous data-parallelism
MapReduce for scalable, distributed execution of each mat-vec
Single Source Shortest Path problem
Input: Adjacency matrix – A; and the source vertex – u.
Let x be the distance (from u) vector.
A =
    ∞  2  8
    2  ∞  3
    8  3  ∞

x = ( 0, ∞, ∞ )^T
Single Source Shortest Path can be computed as:
Iterations of mat-vecs until the resultant vector converges
In each mat-vec, each element x_i of the distance vector is updated as:
x_i = min( x_i, min over all j of (a_ij + x_j) )
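To make the iteration concrete, here is a minimal single-machine Python sketch (not from the original slides; names are illustrative) of SSSP as repeated (min, +) matrix-vector products:

    import math

    # Sketch: SSSP as repeated (min, +) matrix-vector products.
    def minplus_matvec(A, x):
        n = len(x)
        # x_i = min(x_i, min_j(a_ij + x_j))
        return [min([x[i]] + [A[i][j] + x[j] for j in range(n)]) for i in range(n)]

    def sssp(A, source):
        x = [math.inf] * len(A)
        x[source] = 0.0
        while True:
            x_new = minplus_matvec(A, x)
            if x_new == x:        # converged: no distance improved in this mat-vec
                return x_new
            x = x_new

    INF = math.inf
    A = [[INF, 2, 8],
         [2, INF, 3],
         [8, 3, INF]]
    print(sssp(A, 0))             # [0, 2, 5] for the example above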
MapReduce
Executes user-defined functions on individual data-items distributed across several machines in parallel.
Simple programming model — map and reduce functions
Scalable, distributed execution of these functions on massive amounts of data on commodity hardware
[Figure: the MapReduce programming model.
Map: input <K1, V1>; output <K'1, V'1>, ..., <K'n, V'n>
Intermediate data: list of <key, value list>
Reduce: input <K'1, V'1 list>; output <K''1, V''1>, ..., <K''n, V''n>
Overall input: list of <key, value>; overall output: list of values]
MapReduce: DataFlow
[Figure: MapReduce dataflow. The input is divided into splits; each split is processed by a map task; map output is sorted, copied to the reducers, and merged; each reduce task writes one output partition (part).]
Shortest Path (Iterative mat-vec) in MapReduce
map(node, adjList) {
    // relax every outgoing arc: propose a new distance to each neighbor
    for each arc in adjList {
        output(arc.dst, node.distance + arc.weight)
    }
}

reduce(node, newDistList) {
    // keep the shortest of the proposed distances for this node
    node.distance = min(newDistList)
}

while (not converged) {
    runMapReduceJob(map, reduce, Ax)
}
We call this the Naive mat-vec — map takes an adjacency list as input
Pegasus (CMU) implementation reads one edge per map
Naive mat-vec: Resource Utilization
[Figure: resource utilization over time for the naive mat-vec: CPU usage (% CPU, user vs. I/O wait), network I/O (KB/s, in and out), and disk I/O (KB/s, block in and block out).]
Optimizing the mat-vec
[Figure: percentage of total time spent in each stage: map, spill, merge, shuffle, reduce.]
Stages overlap; hence, sum of percentages > 100
I/O time > Computation time
For performance:
Batch read data
Each map processes more data (as much as can fit in memory)
Partitioned mat-vec
Each map operates on a graph partition — a bunch of adjacency lists
Partition size is constrained by the heap size
On our setup, maximum partition size was 2^19 nodes
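As a rough illustration (not from the original slides; the emit callback and driver are assumed as before), a partitioned map might look like:

    # Sketch: one map call processes a whole sub-graph (many adjacency lists)
    # read in a single batch, instead of a single adjacency list.
    def map_partition(adj_lists, distance, emit):
        # adj_lists: {node: [(dst, weight), ...]} for every node in this partition
        # distance:  current best-known distance for each node in the partition
        for node, arcs in adj_lists.items():
            for dst, weight in arcs:
                emit(dst, distance[node] + weight)   # relax each outgoing arc

    def reduce_node(node, new_dists):
        return min(new_dists)                        # keep the shortest proposed distance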
[Figure: execution time (seconds) vs. sub-graph (partition) size (powers of 2).]
Pegasus vs Naive vs Partitioned – 1
Partition size = 2^18
4 Amazon EC2 nodes
[Figure: execution time (seconds, log scale) vs. graph size (powers of 2) for naive, partitioned, and Pegasus mat-vecs.]
8 Amazon EC2 nodes
[Figure: execution time (seconds, log scale) vs. graph size (powers of 2) for naive, partitioned, and Pegasus mat-vecs.]
Pegasus vs Naive vs Partitioned – 2
16 Amazon EC2 nodes
[Figure: execution time (seconds, log scale) vs. graph size (powers of 2) for naive, partitioned, and Pegasus mat-vecs.]
32 Amazon EC2 nodes
[Figure: execution time (seconds, log scale) vs. graph size (powers of 2) for naive and partitioned mat-vecs.]
Optimizations across iterations
Algorithmic optimizations:
Asynchronous algorithms: Algorithms that allow asynchrony — ordering of updates doesn't affect the correctness of the algorithm
e.g., Single Source Shortest Path, PageRank, Network Alignment
Amorphous data-parallelism: Algorithms where concurrent computations can have potential conflicts; the conflicts are rare and can only be detected at runtime
e.g., Boruvka's MST, Single Source Shortest Path
Asynchronous (Iterative) Algorithms
Improve performance in parallel environments.
Infrequent synchronization reduces communication
Examples
Graph algorithms, Numerical methods, Classification, etc.
More pronounced gains in distributed environments
Higher communication and data-movement costs
Read once, process multiple times allows more computation for the same I/O cost
Proposal: Relaxed Synchronization
Goal: Synchronize once every few iterations
Approach: Two levels of MapReduce
Global MapReduce: The regular MapReduce
Requires global synchronization
Local MapReduce: MapReduce within a global map
Each global map runs a few iterations of local MapReduce
Partial synchronization of data of a single global map task
Input data partitioning for fewer dependencies across partitions
Shortest Path using traditional MapReduce
[Figure: general (fully synchronized) execution: the maps and reduces of every iteration are separated by a global synchronization barrier.]
Shortest Path with Relaxed Synchronization
[Figure: relaxed execution: several locally synchronized iterations (local barriers only) run between consecutive global synchronization barriers.]
Relaxed Synchronization: PL Semantics
Semantics: Iterative MapReduce
Realizing the semantics
Code gmap, greduce, lmap, lreduce
lmap, lreduce use EmitLocalIntermediate() and EmitLocal()
Synchronized hashtables for local storage
gmap(xs : X list) {
    while (no-local-convergence-intimated) {
        for each element x in xs {
            lmap(x);      // emits (lkey, lval) into the local store
        }
        lreduce();        // operates on the output of the lmap functions
    }
    // after local convergence, expose the result to the global reduce
    for each value in lreduce-output {
        EmitIntermediate(key, value);
    }
}
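For concreteness, a hypothetical Python sketch of the corresponding lmap/lreduce for shortest paths, using a synchronized local hashtable (the names and the emit callback are illustrative, not the actual implementation):

    import math
    from collections import defaultdict

    local_store = defaultdict(lambda: math.inf)   # stands in for the synchronized local hashtable;
                                                  # distances are assumed initialized elsewhere (source = 0)

    def lmap(node, adj_list, emit_local_intermediate):
        # propose a new distance to every neighbor, using only locally known distances
        for dst, weight in adj_list:
            emit_local_intermediate(dst, local_store[node] + weight)

    def lreduce(node, proposed_dists):
        best = min(proposed_dists)
        if best < local_store[node]:
            local_store[node] = best              # partial synchronization: local state only
            return True                           # another local iteration may still improve things
        return False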
Evaluation — Relaxed Synchronization
3 applications
Single Source Shortest Path (MST, transitive closure, etc.)
PageRank (mat-vec: eigenvalue and linear system solvers)
Test bed
8 Amazon EC2 Large Instances
64-bit compute units with 15 GB RAM, 4x 420 GB storage
Hadoop 0.20.1; 4 GB heap space per slave
Single Source Shortest Path
[Figure: number of iterations and execution time (seconds) vs. number of partitions (100 to 6400) for General SSSP vs. Relaxed SSSP.]
PageRank
Input: Partitioned using METIS (less than 10 seconds)
Damping factor = 0.85
        GraphA      GraphB
Nodes   280,000     100,000
Edges   3 million   3 million
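As a point of reference (not from the original slides), a minimal single-machine power-iteration sketch of PageRank with damping factor d = 0.85:

    # Sketch: PageRank as repeated mat-vecs (power iteration); assumes no dangling nodes.
    def pagerank(adj, d=0.85, tol=1e-8):
        n = len(adj)
        rank = [1.0 / n] * n
        while True:
            new = [(1.0 - d) / n] * n
            for u, neighbors in enumerate(adj):
                share = d * rank[u] / len(neighbors)
                for v in neighbors:
                    new[v] += share
            if sum(abs(a - b) for a, b in zip(new, rank)) < tol:
                return new
            rank = new

    print(pagerank([[1], [2], [3], [0]]))   # 4-node cycle: every rank converges to 0.25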
PageRank Performance: GraphA
[Figure: number of iterations and execution time (seconds) vs. number of partitions (100 to 6400) for General PageRank vs. Relaxed PageRank on Graph A.]
PageRank Performance: GraphB
[Figure: number of iterations and execution time (seconds) vs. number of partitions (100 to 6400) for General PageRank vs. Relaxed PageRank on Graph B.]
Beyond Data-Parallelism
Amorphous data-parallelism
Most of the data can be operated on in parallel.
Some operations conflict, and the conflicts can only be detected at runtime.
“The Tao of Parallelism”, Pingali et al., PLDI ’11
The Galois System
Online algorithms/ Pipelined workflows
MapReduce Online [Condie’10] is an approach needing heavy checkpointing.
Software Transactional Memory (STM)
TransMR: Transactional MapReduce
Goal: Exploit amorphous data-parallelism
Support data-sharing across concurrent computations to detect and resolve conflicts at runtime
Solution:
Use distributed key-value stores as a shared address space across computations
Address inconsistencies arising due to the disparate fault-tolerance mechanisms
Transactional execution of map and reduce functions
TransMR: System Architecture
Distributed key-value store provides a shared-memory abstraction to the distributed execution-layer.
[Figure: system architecture. On each node N1 ... Nn, computation units (CU) with local stores (LS) run in the distributed execution layer on top of a global store (GS) provided by the distributed key-value store.]
Semantics of the API
A data-centric function scope (Map/Reduce/Merge, etc.), termed a Computation Unit (CU), is executed as a transaction.
Optimistic reads and write-buffering: the Local Store (LS) forms the write-buffer of a CU (a sketch follows below).
Put (K, V): Write to LS, which is later atomically committed to GS.
Get (K, V): Return from LS, if already present; otherwise, fetch from GS and store in LS.
Other Op: Any thread-local operation.
The output of a CU is always committed to the GS before being visible to other CUs of the same or different type.
Eliminates the costly shuffle phase of MapReduce.
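A minimal Python sketch of these semantics (the GS read/try_commit calls are assumed placeholders, not the actual store API):

    # Sketch of a computation unit (CU) with optimistic reads and a local-store
    # write-buffer that is committed atomically to the global store (GS).
    class ComputationUnit:
        def __init__(self, global_store):
            self.gs = global_store       # shared key-value store (GS)
            self.ls = {}                 # local store (LS): the CU's write-buffer
            self.read_set = {}           # versions read, validated at commit time

        def get(self, key):
            if key in self.ls:           # return from LS if already present
                return self.ls[key]
            value, version = self.gs.read(key)     # assumed GS API
            self.read_set[key] = version
            return value

        def put(self, key, value):
            self.ls[key] = value         # buffered; visible to others only after commit

        def commit(self):
            # assumed GS API: atomically validate the read-set and apply the write-buffer;
            # on conflict the CU is aborted and re-executed
            return self.gs.try_commit(self.read_set, self.ls)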
Design Principles
Optimistic concurrency control over pessimistic locking
Locks are acquired at the end of the transaction. The write-buffer and read-set are validated against those of concurrent transactions, assuring serializability.
The client is potentially executing on the slowest node in the system; in this case, pessimistic locking hinders parallel transaction execution.
Consistency (C) and Tolerance to Network Partitions (P) over Availability (A) in the CAP theorem for distributed transactions.
Application correctness mandates strict consistency of execution. Relaxed consistency models are application-specific optimizations.
Intermittent non-availability is not too costly for batch-processing applications, where the client is fault-prone in itself.
Evaluation
We show performance gains on two applications which are hitherto implemented sequentially, without transactional support; both exhibit optimistic data-parallelism.
Boruvka's MST
Each iteration is coded as a Map function with a node as input. Reduce is an identity function. Conflicting maps are serialized while others are executed in parallel.
After n iterations of coalescing, we get the MST of an n-node graph.
A graph of 100 thousand nodes, with an average degree of 50, generated based on the forest-fire model.
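As an illustration only (assuming the get/put/emit CU operations sketched earlier; this is not the actual implementation), one coalescing map might look like:

    # Sketch: one Boruvka coalescing step as a transactional map over a (super-)node.
    # Conflicting maps touching the same nodes would abort and re-execute.
    def boruvka_map(node_id, get, put, emit):
        node = get(node_id)                       # {"edges": [(dst, weight), ...]}
        if not node["edges"]:
            return
        dst, weight = min(node["edges"], key=lambda e: e[1])   # lightest outgoing edge
        emit(("mst-edge", node_id, dst), weight)  # record the chosen MST edge
        neighbor = get(dst)
        # coalesce: absorb the neighbor's edges, dropping edges internal to the merged pair
        node["edges"] = [e for e in node["edges"] + neighbor["edges"]
                         if e[0] not in (node_id, dst)]
        put(node_id, node)
        put(dst, {"edges": [], "merged_into": node_id})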
Boruvka’s MST
Speedup of 3.73 on 16 nodes, with less than 0.5% re-executions due to aborts.
[Figure: execution time (minutes) and number of aborts vs. number of computing nodes (1 to 16).]
Maximum flow using Push-Relabel algorithm
Each Map function executes a Push or a Relabel operation on the input node, depending on the constraints on its neighbors.
A Push operation increases the flow to a neighboring node and changes their “Excess”.
A Relabel operation increases the height of the input node if it is the lowest among its neighbors.
Conflicting Maps – operating on neighboring nodes – get serialized due to their transactional nature.
Only a sequential implementation is possible without support for runtime conflict detection.
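A rough Python sketch of such a map (again assuming the transactional get/put operations from before; source/sink handling and other details are omitted):

    # Sketch: one push-relabel step as a transactional map over a single node.
    def push_relabel_map(node_id, get, put):
        u = get(node_id)        # {"excess", "height", "arcs": [(v, cap)], "flow": {v: f}}
        if u["excess"] <= 0:
            return
        residual = [(v_id, cap - u["flow"].get(v_id, 0))
                    for v_id, cap in u["arcs"]
                    if cap - u["flow"].get(v_id, 0) > 0]
        pushed = False
        for v_id, r in residual:
            v = get(v_id)
            if u["height"] == v["height"] + 1:    # admissible edge: push flow downhill
                delta = min(u["excess"], r)
                u["flow"][v_id] = u["flow"].get(v_id, 0) + delta
                u["excess"] -= delta
                v["excess"] += delta
                put(v_id, v)
                pushed = True
                if u["excess"] == 0:
                    break
        if not pushed and residual:
            # relabel: lift the node just above its lowest neighbor reachable via residual capacity
            u["height"] = 1 + min(get(v_id)["height"] for v_id, _ in residual)
        put(node_id, u)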
Maximum flow using Push-Relabel algorithm
Speedup of 4.5 is observed on 16 nodes, with 4% re-executions on a window of 40 iterations.
[Figure: execution time (minutes) and number of aborts vs. number of computing nodes (1 to 16).]
TransMR: Conclusions
TransMR programming model enables data-sharing in data-centric programming models for enhanced applicability.
Similar to other data-centric programming models, the programmer only specifies the operation on an individual data-element, without being concerned about its interaction with other operations.
Prototype implementation shows that many important applications can be expressed in this model while extracting significant performance gains through increased parallelism.
Distributed Transactions on Key-Value Stores
Transactions are costly in large-scale distributed settings
two-phase locking (concurrency control)
two-phase commit (atomicity)
Careful examination of the protocols and optimizations crucial to the performance of TransMR-like systems
These optimizations also useful for general-purpose transactions on databases using a key-value store as the underlying storage
Lock management in distributed transactions
Lock management is the major bottleneck affecting the latency of distributed transactions.
Consider Strong Strict two-phase locking (SS2PL) – waiting case: The lock-acquiring stage is the only sequential stage. The other stages can be parallelized to finish in a single round-trip.
Holds true even in optimistic-concurrency techniques where the locks are acquired at the end.
[Figure: transaction timeline. The locking stage acquires the ordered locks l1, l2, ..., ln sequentially, busy-waiting on each and moving to the next once it is acquired; once all locks are acquired, the transaction continues through the read/execute, update, and unlock/commit/abort stages.]
Workload aware lock management
Contention-based lock ordering: Order the locks so as to decrease the total amount of waiting time.
For the waiting case, the lock with the least contention should be acquired first. This increases pipelining while decreasing lock-holding times.
Contention order is a runtime characteristic, and is updated consistently. All clients should adhere to the same order to avoid deadlocks.
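As a rough sketch of this idea (try_lock and the contention statistics are assumed placeholders, not the actual system API):

    import time

    # Sketch: acquire a transaction's locks one by one, in a globally agreed order
    # of increasing contention, busy-waiting on each lock until it is granted.
    def acquire_in_contention_order(tx_keys, contention, try_lock, retry_delay=0.001):
        ordered = sorted(tx_keys, key=lambda k: contention.get(k, 0))   # least contended first
        for key in ordered:
            while not try_lock(key):       # busy wait; move to the next lock once acquired
                time.sleep(retry_delay)
        return ordered                     # the caller then reads/executes, updates, commits, unlocks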
[Figure: locks l1 ... l6 ordered by increasing or decreasing contention and grouped into Group 1, Group 2, Group 3.]
Constrained k-way graph partitioning
Graph partitioning algorithm to split the locks into k non-overlapping partitions, minimizing the sum of weights on cut-edges while approximately balancing the total weight (sum of node-weights) of individual partitions.
The result of the partitioning algorithm is the load-balanced partitioning of locks among k storage nodes.
[Figure: example lock graph over keys k1 ... k6, with edge weights giving co-occurrence counts and node weights giving occurrence counts, split into Partition a and Partition b.]
Evaluation
A cluster of 20 machines was used for all evaluations. Each machine had a quad-core Xeon processor with 8 GB of RAM. HBase is the underlying key-value store.
The YCSB benchmark was extended with the atomic multi-put operation. A client transaction involves an atomic Read-Modify-Write operation on a set of keys.
The keys for the atomic operation are generated using a Zipfian generator with a variable Zipfian parameter. Each transaction updates 15 keys out of 50K keys.
Two phase locking: waiting version
[Figure: throughput vs. number of clients (2 to 20) for Zipfian parameters z = 0.7, 0.8, 0.9, with locks acquired in increasing vs. decreasing contention order.]
Figure: Performance of Lock Ordering
Ordering the keys in increasing order of their contention is significantly better than the decreasing order.
The increasing order reduces lock-holding time for the most contended locks, reducing waiting time for other transactions.
Two phase locking: waiting version
[Figure: throughput vs. number of clients (2 to 20) for z = 0.7, 0.8, 0.9 with partitioned locks.]
Figure: Performance of Lock Partitioning
Partitioning is done using METIS and the partitions are placed at separate nodes. Lock partitioning improves the throughput by reducing the number of network round-trips needed for sequential locking by the client.
Optimistic concurrency control – no-waiting version
[Figure: throughput vs. number of clients (2 to 20) for z = 0.7, 0.8, 0.9, with locks acquired in increasing vs. decreasing contention order, under optimistic concurrency control.]
Figure: Performance of Lock Ordering
Smaller improvement for OCC, mainly due to the shorter duration of locking.
At similar contention levels, the throughput of optimistic concurrency control is significantly lower.
Optimistic concurrency control – no-waiting version
[Figure: average number of locks acquired per transaction vs. number of clients (2 to 20) for z = 0.7, 0.8, 0.9, increasing vs. decreasing contention order.]
Figure: Lock wastage due to restarts
Optimistic techniques not suitable at high contention levels, as the time spent in reading and local updating gets wasted due to conflicts during commit.
Lock Optimization: Conclusions
The waiting version of SS2PL with increasing contention order and partitioning outperforms the other protocols significantly.
Workload-aware ordering and placement of locks is an important step towards increasing performance.
Understanding the workload - even simple statistics on contention - is enough to achieve significant gains (up to 200%).
Lock partitioning through graph clustering and partitioning techniques can be performed dynamically to achieve performance gains.