Part 1: Systems Infrastructure for Big-Data Analytics
Ananth Grama
Center for Science of Information, Dept. of Computer Science, Purdue University
With help from Giorgos Kollias, Naresh Rapolu, Karthik Kambatla, and Adnan Hassan
Thanks to NSF, DoE, the Center for Science of Information, Microsoft, and Intel.
Ananth Grama (Purdue University) Big Data Analytics Infrastructure 1 / 53
Outline
Motivating applications – PageRank, Graph Alignment
Case study: single mat-vec in MapReduce
Asynchrony and speculation: Optimizations across iterations
Asynchronous algorithms through Relaxed Synchronization
Speculative parallelism through TransMR (transactional MapReduce)
Locking techniques for efficient distributed transactions
Online Analytics
Motivating applications
Streams (Storm) and Machine Learning (Vowpal Wabbit)
Runtime and optimizations
Motivating Example: Functional PageRank (PR)
Computing PageRank (PR)
PageRank as a random-surfer process: start surfing from a random node and keep following links with probability µ, restarting with probability 1 − µ; the restart node is selected according to a personalization vector v. The ranking value x_i of node i is the probability of visiting that node during surfing.
PR can also be cast in power-series form as x = (1 − µ) ∑_{j=0}^{k} µ^j S^j v, where S encodes the column-stochastic adjacencies.
Functional rankings
A general method to assign ranking values to graph nodes as x = ∑_{j=0}^{k} ζ_j S^j v. PR is a functional ranking with ζ_j = (1 − µ) µ^j.
Terms are attenuated by the outdegrees in S and the damping coefficients ζ_j.
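The truncated power series above can be sketched numerically. A minimal illustration (the 3-node graph, µ, and all names here are illustrative, not from the talk):

```python
# Truncated power-series PageRank: x = (1 - mu) * sum_{j=0..k} mu^j S^j v,
# with S a column-stochastic adjacency matrix (toy 3-node cycle).

def pagerank_series(S, v, mu=0.85, k=200):
    n = len(v)
    x = [0.0] * n
    term = v[:]                       # holds S^j v for the current j
    for j in range(k + 1):
        for i in range(n):
            x[i] += (1 - mu) * (mu ** j) * term[i]
        # term <- S * term (dense mat-vec in plain Python)
        term = [sum(S[i][c] * term[c] for c in range(n)) for i in range(n)]
    return x

# Column-stochastic S for a 3-cycle; uniform personalization vector.
S = [[0, 0, 1.0],
     [1.0, 0, 0],
     [0, 1.0, 0]]
v = [1/3, 1/3, 1/3]
x = pagerank_series(S, v)
assert abs(sum(x) - 1.0) < 1e-6     # ranks form a probability vector
```

On this symmetric cycle the series converges to the uniform ranking, as expected.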
Motivating Example: Graph Matching
Node similarity: two nodes are similar if they are linked by other similar node pairs. By pairing similar nodes, the two graphs become aligned.
Let A and B be the normalized adjacency matrices of the graphs (normalized by columns), H_ij be the independently known similarity scores (preference matrix) of nodes i ∈ V_B and j ∈ V_A, and µ be the fractional contribution of topological similarity.
To compute X , IsoRank iterates:
X ← µ B X Aᵀ + (1 − µ) H
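The IsoRank iteration above can be sketched on toy matrices (a minimal illustration; the 2×2 graphs and names are assumptions, not the paper's data):

```python
# IsoRank-style iteration X <- mu * B X A^T + (1 - mu) * H,
# using plain-Python matrix helpers on tiny 2x2 inputs.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def isorank(A, B, H, mu=0.6, iters=100):
    X = [row[:] for row in H]
    for _ in range(iters):
        BXAt = matmul(matmul(B, X), transpose(A))
        X = [[mu * BXAt[i][j] + (1 - mu) * H[i][j]
              for j in range(len(X[0]))] for i in range(len(X))]
    return X

# Column-normalized adjacencies of two 2-node graphs; uniform prior H.
A = [[0, 1.0], [1.0, 0]]
B = [[0, 1.0], [1.0, 0]]
H = [[0.25, 0.25], [0.25, 0.25]]
X = isorank(A, B, H)
assert abs(sum(map(sum, X)) - 1.0) < 1e-9   # scores stay normalized
```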
MapReduce: Basics
Execute user-defined functions in parallel on individual data items distributed across machines.
Simple programming model — map and reduce functions
Scalable, distributed execution of these functions on massive amounts of data on commodity hardware
[Figure: MapReduce structure. INPUT: <key, value> list. Each Map takes <K1, V1> and outputs <K'1, V'1> … <K'n, V'n>. INTERMEDIATE: <key, value-list> list. Each Reduce takes <K'1, V'1-list> and outputs <K''1, V''1> … <K''n, V''n>. OUTPUT: value list.]
MapReduce: DataFlow
[Figure: MapReduce dataflow. input → split 0/1/2 → map → sort → copy → merge → reduce → part 0/1 → output.]
Example: Shortest Path (mat-vec) in MapReduce
map(node, adjList) {
  for each arc in adjList {
    output(arc.dst, node.distance + arc.weight)
  }
}

reduce(node, newDistList) {
  node.distance = min(newDistList)
}

while (not converged) {
  runMapReduceJob(map, reduce, Ax)
}
We call this the naive mat-vec — map takes an adjacency list as input
Pegasus (CMU) implementation reads one edge per map
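The relaxation round above can be simulated in memory. A minimal sketch (names and the toy graph are illustrative, not the Hadoop implementation):

```python
from collections import defaultdict

def map_fn(node, dist, adj):
    # Emit a tentative distance to every neighbor.
    for dst, w in adj:
        yield dst, dist + w

def reduce_fn(node, new_dists, dist):
    # Keep the best distance seen so far.
    return min([dist] + new_dists)

def run_round(graph, dist):
    shuffle = defaultdict(list)            # intermediate <key, value-list>
    for node, adj in graph.items():
        if dist[node] < float("inf"):      # only reached nodes emit
            for k, v in map_fn(node, dist[node], adj):
                shuffle[k].append(v)
    return {n: reduce_fn(n, shuffle.get(n, []), d) for n, d in dist.items()}

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": []}
dist = {"s": 0, "a": float("inf"), "b": float("inf")}
for _ in range(len(graph)):                # iterate until convergence
    dist = run_round(graph, dist)
assert dist == {"s": 0, "a": 1, "b": 2}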
Naive mat-vec: Resource Utilization
[Figure: resource utilization of the naive mat-vec over a ~1200-second run. Panels: CPU usage (% CPU, user vs. waiting), network I/O (KB/s, in vs. out), disk I/O (KB/s, block in vs. block out).]
Optimizing the mat-vec further
[Figure: percentage of total time spent in each stage: map, spill, merge, shuffle, reduce.]
Stages overlap; hence, sum of percentages > 100
I/O time > Computation time
For performance:
Batch read data
Each map processes more data (as much as can fit in memory)
Partitioned mat-vec
Each map operates on a graph partition — a set of adjacency lists
Partition size is constrained by the heap size
On our setup, the maximum partition size was 2^19 nodes
[Figure: running time (seconds) vs. sub-graph size (powers of 2).]
Performance: Pegasus vs Naive vs Partitioned
16 Amazon EC2 nodes
[Figure: time (seconds, log scale) vs. graph size (powers of 2) for naive, partitioned, and pegasus.]
32 Amazon EC2 nodes
[Figure: time (seconds, log scale) vs. graph size (powers of 2) for naive and partitioned.]
Optimizations Across Iterations
Algorithmic optimizations:
Asynchronous algorithms: algorithms that tolerate asynchrony — the ordering of updates does not affect the correctness of the algorithm (e.g., PageRank, graph alignment)
Speculative parallelism: algorithms where concurrent computations can conflict; the conflicts are rare and can only be detected at runtime (e.g., Boruvka's MST, Single Source Shortest Path)
Asynchronous Iterations
Improve performance in parallel environments.
Infrequent synchronization reduces communication
Examples
Graph algorithms, Numerical methods, ML kernels, etc.
More pronounced gains in distributed environments
Higher communication and data-movement costs
Read once, process multiple times allows more computation for the same I/O cost
Relaxed Synchronization
Synchronize once every few iterations
Approach: two levels of MapReduce
Global MapReduce: the regular MapReduce
Requires global synchronization
Local MapReduce: MapReduce within a global map
Each global map runs a few iterations of local MapReduce
Partial synchronization of data within a single global map task
Input data partitioning for fewer dependencies across partitions
PageRank Using Traditional MapReduce
General PageRank
IEEE CLUSTER, Heraklion, Greece, 23rd September 2010
[Figure: maps and reduces in each iteration separated by a global synchronization barrier.]
PageRank with Relaxed Synchronization
Relaxed PageRank
[Figure: several local map/reduce iterations run between global synchronization barriers; local barriers replace most global ones.]
Realizing Relaxed Synchronization Semantics
Code gmap, greduce, lmap, lreduce
lmap, lreduce use EmitLocalIntermediate() and EmitLocal()
Synchronized hashtables for local storage
gmap(xs: X list) {
  while (no-local-convergence-intimated) {
    for each element x in xs {
      lmap(x);    // emits lkey, lval
    }
    lreduce();    // operates on the output of lmap functions
  }
  for each value in lreduce-output {
    EmitIntermediate(key, value);
  }
}
Evaluation — Relaxed Synchronization
Sample applications
Single Source Shortest Path (MST, transitive closure, etc.)
PageRank (mat-vec: eigenvalue and linear-system solvers)
Experimental Testbed
8 Amazon EC2 Large Instances
64-bit compute units with 15 GB RAM, 4x 420 GB storage
Hadoop 0.20.1; 4 GB heap space per slave
Single Source Shortest Path
[Figure: number of iterations and time (seconds) vs. number of partitions (100–6400) for General SSSP and Relaxed SSSP.]
PageRank
Input: partitioned using METIS (less than 10 seconds)
Damping factor = 0.85
        GraphA     GraphB
Nodes   280,000    100,000
Edges   3 million  3 million
PageRank Performance: GraphA
[Figure: number of iterations and time (seconds) vs. number of partitions (100–6400) for General PageRank and Relaxed PageRank on Graph A.]
PageRank Performance: GraphB
[Figure: number of iterations and time (seconds) vs. number of partitions (100–6400) for General PageRank and Relaxed PageRank on Graph B.]
Beyond Data-Parallelism: Speculation
Speculative parallelism
Most of the data can be operated on in parallel.
Some executions conflict; these can only be detected at runtime [Pingali et al., PLDI'11]
Online algorithms/ Pipelined workflows
MapReduce Online [Condie'10] is an approach that needs heavy checkpointing.
Software Transactional Memory (STM)
TransMR: Transactional MapReduce
Goal: Exploit speculative data-parallelism
Support data-sharing across concurrent computations to detectand resolve conflicts at runtime
Solution:
Use distributed key-value stores as a shared address space across computations
Address inconsistencies arising from the disparate fault-tolerance mechanisms
Transactional execution of map and reduce functions
TransMR: System Architecture
Distributed key-value store provides a shared-memory abstraction to the distributed execution layer.
[Figure: system architecture. Nodes N1 … Nn each run computation units (CU) with local stores (LS) in the distributed execution layer, backed by a global store (GS) in the distributed key-value store.]
Semantics of the API
A data-centric function scope — Map/Reduce/Merge, etc. — termed a Computation Unit (CU), is executed as a transaction.
Optimistic reads and write-buffering; the Local Store (LS) forms the write-buffer of a CU.
Put (K, V): write to LS, which is later atomically committed to GS.
Get (K, V): return from LS if already present; otherwise fetch from GS and store in LS.
Other ops: any thread-local operation.
The output of a CU is always committed to the GS before becoming visible to other CUs of the same or a different type.
Eliminates the costly shuffle phase of MapReduce.
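The Put/Get semantics above can be sketched as a toy computation unit (a minimal illustration of the described behavior; all class and method names are hypothetical, not the TransMR API):

```python
# A computation unit (CU) buffers writes in a local store (LS) and
# atomically publishes them to the global store (GS) at commit, after
# optimistic validation of everything it read.

class ComputationUnit:
    def __init__(self, global_store):
        self.gs = global_store        # shared key-value store
        self.ls = {}                  # local store: the write-buffer
        self.read_set = {}            # values read, validated at commit

    def put(self, key, value):
        self.ls[key] = value          # writes go to LS only

    def get(self, key):
        # Serve from LS if present; otherwise fetch from GS and cache.
        if key not in self.ls:
            self.read_set[key] = self.gs.get(key)
            self.ls[key] = self.read_set[key]
        return self.ls[key]

    def commit(self):
        # Optimistic validation: abort if anything we read has changed.
        for key, seen in self.read_set.items():
            if self.gs.get(key) != seen:
                return False          # conflict: caller re-executes the CU
        self.gs.update(self.ls)       # atomic publish of the write-buffer
        return True

gs = {"x": 1}
cu = ComputationUnit(gs)
cu.put("y", cu.get("x") + 1)
assert cu.commit() and gs["y"] == 2
```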
Design Principles
Optimistic concurrency control over pessimistic locking
Locks are acquired at the end of the transaction. The write-buffer and read-set are validated against those of concurrent transactions, assuring serializability.
The client may be executing on the slowest node in the system; in that case, pessimistic locking hinders parallel transaction execution.
Consistency (C) and tolerance to network partitions (P) over availability (A) in the CAP theorem for distributed transactions.
Application correctness mandates strict consistency of execution; relaxed consistency models are application-specific optimizations.
Intermittent non-availability is not too costly for batch-processing applications, where the client is fault-prone in itself.
Evaluation
We show performance gains on two applications, hitherto implemented sequentially without transactional support; both exhibit optimistic data-parallelism.
Boruvka's MST
Each iteration is coded as a Map function with a node as input; Reduce is an identity function. Conflicting maps are serialized while others execute in parallel.
After n iterations of coalescing, we get the MST of an n-node graph.
A graph of 100 thousand nodes with average degree 50, generated using the forest-fire model.
Boruvka’s MST
Speedup of 3.73 on 16 nodes, with less than 0.5% re-executions due to aborts.
[Figure: execution time (mins) and number of aborts vs. number of computing nodes (1–16).]
Maximum Flow Using Push-Relabel Algorithm
Each Map function executes a Push or a Relabel operation onthe input node, depending on the constraints on its neighbors.
Push operation increases the flow to a neighboring node andchanges their “Excess”.
Relabel operation increases the height of the input node if it isthe lowest among its neighbors.
Conflicting Maps – operating on neighboring nodes – get serialized due to their transactional nature.
Only a sequential implementation is possible without support for runtime conflict detection.
Maximum flow using Push-Relabel algorithm
Speedup of 4.5 is observed on 16 nodes, with 4% re-executions over a window of 40 iterations.
[Figure: execution time (mins) and number of aborts vs. number of computing nodes (1–16).]
TransMR: Intermediate Lessons
The TransMR programming model enables data-sharing in data-centric programming models for enhanced applicability.
As in other data-centric programming models, the programmer only specifies the operation on an individual data element, without concern for its interaction with other operations.
The prototype implementation shows that many important applications can be expressed in this model while extracting significant performance gains through increased parallelism.
BUT: What about the locking operations!
Distributed Transactions on Key-Value Stores
Transactions are costly in large-scale distributed settings
Two-phase locking (concurrency control)
Two-phase commit (atomicity)
Careful examination of the protocols and optimizations is crucial to the performance of TransMR-like systems
These optimizations are also useful for general-purpose transactions on databases using a key-value store as the underlying storage
Lock Management in Distributed Transactions
Lock management is the major bottleneck affecting the latency of distributed transactions.
Consider Strong Strict Two-Phase Locking (SS2PL), waiting case: the lock-acquiring stage is the only sequential stage; the other stages can be parallelized to finish in a single round-trip.
This holds true even in optimistic-concurrency techniques, where the locks are acquired at the end.
[Figure: transaction timeline. The locking stage acquires ordered locks l1 … ln sequentially, busy-waiting on each and moving to the next once a lock is acquired; when all locks are held, the transaction continues with read/execute and update, then unlock and commit/abort.]
Workload Aware Lock Management
Contention-based lock ordering: order the locks so as to decrease the total waiting time.
For the waiting case, the lock with the least contention should be acquired first. This increases pipelining while decreasing lock-holding times.
The contention order is a runtime characteristic and is updated consistently. All clients should adhere to the same order to avoid deadlocks.
[Figure: locks l1 … l6 grouped into Groups 1–3 and ordered by increasing or decreasing contention.]
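The contention-based ordering above reduces to a deterministic sort shared by all clients. A minimal sketch (the contention counts and names are hypothetical):

```python
# Given observed contention counts, every client sorts its lock set by
# increasing contention, so the hottest locks are held the shortest time.

contention = {"l1": 3, "l2": 40, "l3": 7, "l4": 120}  # runtime statistics

def lock_order(needed, stats):
    # Least-contended first; tie-break on the name so every client
    # derives exactly the same total order (required to avoid deadlock).
    return sorted(needed, key=lambda l: (stats[l], l))

assert lock_order({"l2", "l4", "l1"}, contention) == ["l1", "l2", "l4"]
```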
Constrained k-way Graph Partitioning
A graph-partitioning algorithm splits the locks into k non-overlapping partitions, minimizing the sum of weights on cut edges while approximately balancing the total weight (sum of node weights) of individual partitions.
The result of the partitioning algorithm is a load-balanced partitioning of locks among k storage nodes.
[Figure: constrained k-way partitioning example. Locks k1 … k6 carry occurrence counts; edges carry co-occurrence counts; the locks are split into Partition a and Partition b.]
Evaluation
A cluster of 20 machines was used for all evaluations. Each machine had a quad-core Xeon processor with 8 GB of RAM. HBase is the underlying key-value store.
The YCSB benchmark was extended with an atomic multi-put operation. A client transaction involves an atomic read-modify-write operation on a set of keys.
The keys for the atomic operation are generated using a Zipfian generator with a variable Zipfian parameter. Each transaction updates 15 keys out of 50K keys.
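The workload above can be sketched with a simple Zipfian key generator (an illustrative stand-in for YCSB's generator; the sampling method and names are assumptions):

```python
import random

# Draw the 15 distinct keys of one transaction from 50K keys with a
# Zipfian skew (parameter z) via weighted sampling over rank^-z.

def zipf_keys(n_keys, z, count, rng):
    weights = [1.0 / (rank ** z) for rank in range(1, n_keys + 1)]
    keys = set()
    while len(keys) < count:                  # reject duplicates
        keys.add(rng.choices(range(n_keys), weights=weights)[0])
    return sorted(keys)

rng = random.Random(42)
txn = zipf_keys(50_000, 0.9, 15, rng)
assert len(txn) == 15 and all(0 <= k < 50_000 for k in txn)
```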
Two Phase Locking: Waiting Version
[Figure: performance of lock ordering – throughput vs. number of clients (2–20) for z = 0.7, 0.8, 0.9, each with increasing and decreasing contention order.]
Ordering keys in increasing order of their contention is significantly better than the decreasing order.
The increasing order reduces lock-holding time for the most contended locks, reducing waiting time for other transactions.
Two Phase Locking: Waiting Version
[Figure: performance of lock partitioning – throughput vs. number of clients (2–20) for z = 0.7, 0.8, 0.9.]
Partitioning is done using METIS, and partitions are placed on separate nodes. Lock partitioning improves throughput by reducing the number of network round-trips needed for sequential locking by the client.
Optimistic Concurrency Control – No-Waiting Version
[Figure: performance of lock ordering – throughput vs. number of clients (2–20) for z = 0.7, 0.8, 0.9 with increasing and decreasing order.]
The improvement is smaller for OCC, mainly due to the shorter duration of locking.
At similar contention levels, the throughput of optimistic concurrency control is significantly lower.
Optimistic Concurrency Control – No-Waiting Version
[Figure: lock wastage due to restarts – average locks acquired per transaction vs. number of clients (2–20) for z = 0.7, 0.8, 0.9 with increasing and decreasing order.]
Optimistic techniques are not suitable at high contention levels, as the time spent reading and locally updating is wasted on conflicts during commit.
Lock Optimization: Conclusions
The waiting version of SS2PL with increasing-contention order and partitioning significantly outperforms the other protocols.
Restarts due to conflicts constitute a major overhead in distributed transactions. Reducing restarts by busy-waiting for locks is an important step towards increasing performance.
Understanding the workload - even simple statistics on contention - is enough to achieve significant gains (up to 200%).
Lock partitioning through graph clustering and partitioning techniques can be performed dynamically to achieve performance gains.
Towards Online Learning
Motivating Applications of Online Learning
Ad servers (learning from click-streams, spatial parameters)
Real-time sentiment classification (learning from Twitter feeds, blogs, etc.)
Real-time content recommendation (correlating tweets, hashtags, etc.)
Case Study: Stochastic Gradient Descent
Randomly shuffle training samples
repeat {
  for i := 1, 2, ..., m {
    θ_j := θ_j − α (h_θ(x^(i)) − y^(i)) x_j^(i)   (for every j = 0, 1, ..., n)
  }
}
Can be used in applications like Supervised Semantic Indexing (e.g., minimize the loss function F = 1 − qᵀW d⁺ + qᵀW d⁻, where q is the query document, W is a weight matrix, d⁺ corresponds to a positive document and d⁻ to a negative document).
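The per-sample update above can be made concrete with a minimal runnable example for linear regression (an illustration of the update rule only, not the SSI setting; all names and data are assumed):

```python
import random

# SGD for linear regression, h_theta(x) = theta . x, applying the
# per-sample update theta_j -= alpha * (h - y) * x_j from the pseudocode.

def sgd(samples, n_features, alpha=0.1, epochs=200, seed=0):
    rng = random.Random(seed)
    theta = [0.0] * (n_features + 1)       # theta_0 is the bias term
    for _ in range(epochs):
        rng.shuffle(samples)               # randomly shuffle training samples
        for x, y in samples:
            xi = [1.0] + list(x)           # prepend x_0 = 1
            h = sum(t * v for t, v in zip(theta, xi))
            for j in range(len(theta)):
                theta[j] -= alpha * (h - y) * xi[j]
    return theta

# Noise-free data on the line y = 2x + 1.
data = [((x,), 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
theta = sgd(data, n_features=1)
assert abs(theta[0] - 1) < 0.05 and abs(theta[1] - 2) < 0.05
```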
Stream Processing Solutions
Storm is a stream-processing engine built at Twitter. It executes a DAG of operators consisting of Spouts and Bolts.
Building Online Applications: Storm and Vowpal Wabbit
Limitations of Simple Integration
Long reduction times, network-bandwidth bottlenecks, static topologies, and dynamic system state.
Limitations of Simple Integration
Impedance mismatch between synchronous reductions and an asynchronous execution engine (LMAX Disruptor).
A dynamically orchestrated online learning framework
The controller dynamically schedules reductions and enforces rate control and routing on input data streams
Controller Design: Asynchronous Reductions
Schedules asynchronous reductions (staggered butterfly) while enforcing rate control (blacklisting reducers)
Controller Design: Dynamic Mapping of Virtual Overlay to Physical Operators
The controller uses operator metrics to decide on dynamic mapping and to schedule reductions. Metrics: model sparsity triggers a new reduction; latency and throughput of individual operators (scale-in/scale-out)
Continuing Efforts
TransMR and concurrency-control modules are available on request. The online analytics framework is currently being validated and is available in limited release.