Parallel DB 101 - LASER Foundation
laser.inf.ethz.ch/2013/material/barga/BargaL3 Big Data.pdf

Page 1
Page 2

Google|Bing “Daytona Microsoft Research”

Page 3
Page 4

Raise your hand…

Great, you can help answer questions ;-)

Sit with these people during lunch...

Page 5
Page 6

An increased number and variety of data sources that generate large quantities of data:
Sensors (e.g. location, acoustical, …)

Web 2.0 (e.g. twitter, wikis, … )

Web clicks

Realization that data was “too valuable” to delete

Dramatic decline in the cost of hardware, especially storage. If storage were still $100/GB, there would be no big data revolution underway.


Page 7

Old guard: use a parallel database system
eBay – 10PB on 256 nodes

Young turks: use a NoSQL system
Facebook – 20PB on 2700 nodes
Bing – 150PB on 40K nodes

Just a subset of the data these companies manage…

Page 8

NOSQL: What's in the name...

Page 9

NO to SQL: It's not about saying that SQL should never be used, or that SQL is dead...

Page 10

NOT Only SQL: It's about recognizing that for some problems other storage solutions are better suited!

Would have been more intuitive if named NO JOINS

Page 11

More data model flexibility: JSON as a data model, simple grammar, maps to data structures, …

No “schema first” requirement

Relaxed consistency models such as eventual consistency: willing to trade consistency for availability and speed

See Sebastian Burckhardt’s talk(s)

Low upfront software costs

Never learned anything but C/Java in school; hate declarative languages like SQL

Faster time to insight from data acquisition

Page 12

SQL (RDBMS): 1. Data arrives → 2. Derive a schema → 3. Cleanse the data → 4. Transform the data → 5. Load the data → 6. SQL queries

NoSQL system: 1. Data arrives → 2. Application program

No cleansing! No ETL! No load! Analyze data where it lands!

Page 13

Key/Value Stores
Examples: MongoDB, CouchBase, Cassandra, Windows Azure, …
Flexible data model such as JSON
Records "sharded" across nodes in a cluster by hashing on key
Single record retrievals based on key

Hadoop
Scalable fault tolerant framework for storing and processing MASSIVE data sets
Typically no data model
Records stored in distributed file system
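To make "sharded across nodes by hashing on key" concrete, here is a minimal local sketch (not the routing logic of MongoDB, Cassandra, or any other particular product): the client hashes the record key to pick the owning node, so a single-record put or get touches exactly one shard. The ShardedKVClient class and the in-memory maps standing in for remote nodes are invented for the example.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of hash-based sharding in a key/value store client.
// Real systems layer consistent hashing and replication on top of this,
// but the routing idea is the same: key -> hash -> owning node.
class ShardedKVClient {
    private final List<Map<String, String>> nodes; // stand-ins for remote nodes

    ShardedKVClient(int numNodes) {
        nodes = new ArrayList<>();
        for (int i = 0; i < numNodes; i++) nodes.add(new HashMap<>());
    }

    private int ownerOf(String key) {
        // Non-negative hash modulo the cluster size picks the shard.
        return Math.floorMod(key.hashCode(), nodes.size());
    }

    void put(String key, String value) {
        nodes.get(ownerOf(key)).put(key, value);   // single-node write
    }

    String get(String key) {
        return nodes.get(ownerOf(key)).get(key);   // single-node read by key
    }

    public static void main(String[] args) {
        ShardedKVClient kv = new ShardedKVClient(4);
        kv.put("user:42", "{\"name\":\"Ada\",\"zip\":\"53705\"}"); // JSON-ish value
        System.out.println(kv.get("user:42"));
    }
}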

Page 14


Relational Databases vs. Hadoop? (What's the future?)

Relational databases and Hadoop

are designed to meet different needs

Page 15

Structured & Unstructured

Relational DB Systems

Structured data w/ known schema

ACID

Transactions

SQL

Rigid Consistency Model

ETL

Longer time to insight

Maturity, stability, efficiency

NoSQL Systems

(Un)(Semi)structured data w/o schema

No ACID

No transactions

No SQL

Eventual consistency

No ETL

Faster time to insight

Flexibility

Page 16

Massive amounts of click stream data that had to be stored and analyzed.

Requirements: scalable to PBs and 1000s of nodes, highly fault tolerant, simple to program against.

2003: GFS + MapReduce – distributed, fault-tolerant store (GFS) and process (MapReduce), a "new" programming paradigm.

2006: Hadoop = HDFS (store) + MapReduce (process).

Page 17

Think data warehousing for Big Data:

Scalability and a high degree of fault tolerance

Ability to quickly analyze massive collections of records without forcing data to first be modeled, cleansed, and loaded

Easy to use programming paradigm for writing and executing analysis programs that scale to 1000s of nodes and PBs of data

Low, up front software and hardware costs

Page 18

HDFS – distributed, fault tolerant file system

MapReduce – framework for writing/executing distributed, fault tolerant algorithms

Hive & Pig – SQL-like declarative languages

Sqoop – package for moving data between HDFS and relational DB systems

+ Others…

[Ecosystem diagram: HDFS, Map/Reduce, Hive & Pig, Sqoop, ZooKeeper, Avro (serialization), HBase, plus ETL tools, BI reporting, and RDBMS around the stack]

Page 19

Underpinnings of the entire Hadoop ecosystem

HDFS design goals: Scalable to 1000s of nodes

Assume failures (hardware and software) are common

Targeted towards small numbers of very large files

Write once, read multiple times

Traditional hierarchical file organization of directories and files

Highly portable


Page 20

e.g., Block Size = 64MB: a large file of 6,440MB is split into 101 blocks; blocks 1–100 are 64MB each and block 101 holds the remaining 40MB.

Files are composed of a set of blocks:
• Typically 64MB in size
• Each block is stored as a separate file in the local file system (e.g. NTFS)
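As a quick check on the block arithmetic above, a small sketch (not HDFS code, just the splitting rule) that reproduces the slide's numbers:

// Sketch: how a file of a given size is divided into fixed-size blocks,
// mirroring the slide's example (6,440 MB file, 64 MB blocks).
public class BlockLayout {
    public static void main(String[] args) {
        long fileMB = 6440;
        long blockMB = 64;

        long fullBlocks = fileMB / blockMB;        // 100 blocks of 64 MB
        long lastBlockMB = fileMB % blockMB;       // 40 MB remainder
        long totalBlocks = fullBlocks + (lastBlockMB > 0 ? 1 : 0);

        System.out.printf("%d blocks: %d x %d MB + 1 x %d MB%n",
                totalBlocks, fullBlocks, blockMB, lastBlockMB);
        // prints: 101 blocks: 100 x 64 MB + 1 x 40 MB
    }
}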

Page 21

Default placement policy:
First copy is written to the node creating the file (write affinity)
Second copy is written to a data node within the same rack (to minimize cross-rack network traffic)
Third copy is written to a data node in a different rack (to tolerate switch failures)

[Diagram: Blocks 1, 2, and 3 each stored on three of Nodes 1–5; e.g., Replication factor = 3]
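A minimal sketch of that default placement rule in code; this is illustrative only, not HDFS's actual placement-policy implementation, and the Node record and rack layout are invented for the example (edge cases such as a rack with a single node are ignored):

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the default 3-replica placement rule:
// 1st replica on the writer's node, 2nd on another node in the same rack,
// 3rd on a node in a different rack.
public class PlacementSketch {
    record Node(String name, String rack) {}

    static List<Node> chooseReplicas(Node writer, List<Node> cluster) {
        List<Node> replicas = new ArrayList<>();
        replicas.add(writer);                                            // copy 1: write affinity
        for (Node n : cluster) {                                         // copy 2: same rack, different node
            if (!n.equals(writer) && n.rack().equals(writer.rack())) { replicas.add(n); break; }
        }
        for (Node n : cluster) {                                         // copy 3: different rack
            if (!n.rack().equals(writer.rack())) { replicas.add(n); break; }
        }
        return replicas;
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(
                new Node("node1", "rackA"), new Node("node2", "rackA"),
                new Node("node3", "rackB"), new Node("node4", "rackB"),
                new Node("node5", "rackC"));
        // Chooses node1 and node2 (rackA) plus node3 (rackB) for the writer on node1.
        System.out.println(chooseReplicas(cluster.get(0), cluster));
    }
}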

Page 22

NameNode – one instance per cluster (master): responsible for filesystem metadata operations on the cluster, replication, and locations of file blocks

Backup Node (BackupNode or CheckpointNode) – responsible for backup of the NameNode

DataNodes – one instance on each node of the cluster (slaves): responsible for storage of file blocks and for serving actual file data to clients

Page 23

[Diagram: the NameNode coordinates the DataNodes (heartbeat, balancing, replication, etc.); DataNodes write blocks to local disk; the BackupNode receives namespace backups]

Page 24

Writing a giant file: the NameNode tells the HDFS client where to store each block of the file, e.g. {node1, node2, node3}, {node2, node4, node5}, {node1, node3, node5}, {node2, node3, node4}, and so on, based on the replication factor (3 by default). The client then transfers each block directly to its assigned data nodes.
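From an application's point of view all of this is hidden behind an ordinary stream API. A minimal sketch of writing a file with the standard Hadoop FileSystem client, assuming the hadoop-client library is on the classpath; the cluster address and path are placeholders and error handling is omitted:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: writing a file into HDFS. The client library talks to the NameNode
// for block placement and streams the bytes to the chosen DataNodes.
public class HdfsWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");     // placeholder cluster address

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/data/giant-file.bin"))) {
            out.write("110010101001...".getBytes());          // block placement is handled for us
        }
    }
}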

Page 25

Reading a giant file: the HDFS client asks the NameNode, which returns the locations of the blocks of the file; the client then streams the blocks directly from the data nodes.
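The read side is symmetric. Again a minimal sketch with the standard FileSystem API, placeholder cluster address and path, and no error handling:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: reading a file from HDFS. The client asks the NameNode for the
// block locations, then streams the bytes from the DataNodes that hold them.
public class HdfsRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");     // placeholder cluster address

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/data/giant-file.bin"))) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                System.out.write(buf, 0, n);                   // consume the streamed blocks
            }
        }
    }
}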

Page 26

Failure types:

Disk errors and failures

DataNode failures

Switch/Rack failures

NameNode failures

Datacenter failures

HDFS was designed with the expectation that failures (both hardware and software) would occur frequently


Page 27

[Diagram: one DataNode in the cluster fails]

The NameNode detects the DataNode loss; blocks are auto-replicated on the remaining nodes to satisfy the replication factor.

Page 28

[Diagram: a new DataNode is added to the cluster]

The NameNode detects that a new DataNode has been added; blocks are re-balanced and re-distributed.

Page 29

Highly scalable: 1000s of nodes and massive (100s of TB) files

Large block sizes to maximize sequential I/O performance

No use of mirroring or RAID, to reduce cost

Use one mechanism (triply replicated blocks) to deal with a wide variety of failure types rather than multiple different mechanisms

Negatives: block locations and record placement are invisible to higher-level software, which makes it impossible to employ many optimizations successfully employed by parallel DB systems

Why?

Page 30

HDFS – distributed, fault tolerant file system

MapReduce – framework for writing/executing distributed, fault tolerant algorithms

Hive & Pig – SQL-like declarative languages

Sqoop – package for moving data between HDFS and relational DB systems

+ Others…


Page 31

Programming framework (library and runtime) for analyzing data sets stored in HDFS

MapReduce jobs are composed of two functions:

User only writes the Map and Reduce functions

MR framework provides all the "glue" and coordinates the execution of the Map and Reduce jobs on the cluster; the framework is fault tolerant and scalable.

map(): sub-divide & conquer
reduce(): combine & reduce cardinality

Page 32

Essentially, it’s…

1. Take a large problem and divide it into sub-problems

2. Perform the same function on all sub-problems

3. Combine the output from all sub-problems

DoWork() DoWork() DoWork()…

Output

MAPREDUCE

Page 33

<keyA, valuea>

<keyB, valueb>

<keyC, valuec>

<keyA, valuea>

<keyB, valueb>

<keyC, valuec>

<keyA, valuea>

<keyB, valueb>

<keyC, valuec>

<keyA, valuea>

<keyB, valueb>

<keyC, valuec>

Output

Reducer

<keyA, list(valuea, valueb, valuec, …)>

Reducer

<keyB, list(valuea, valueb, valuec, …)>

Reducer

<keyC, list(valuea, valueb, valuec, …)>

Sort and group by key

DataNode

DataNode

DataNode

Mapper<keyi, valuei>

Mapper<keyi, valuei>

Mapper<keyi, valuei>

Mapper<keyi, valuei>
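The shuffle in the middle of this picture (sort and group by key) is what connects mappers to reducers. A purely local, single-process sketch of the same map → group-by-key → reduce dataflow, using Java streams rather than the Hadoop API and a made-up word-count input:

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Local, single-process sketch of the map -> shuffle (group by key) -> reduce
// dataflow. Hadoop runs the same three stages, but distributed across nodes.
public class MiniMapReduce {
    public static void main(String[] args) {
        List<String> lines = List.of("a b a", "b c", "a c c");   // made-up input

        Map<String, Long> counts = lines.stream()
                // Map phase: emit one key per word.
                .flatMap(line -> Arrays.stream(line.split(" ")))
                // Shuffle + Reduce: group the keys and count each group.
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

        System.out.println(counts); // a=3, b=2, c=3 (map ordering unspecified)
    }
}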

Page 34

The MapReduce layer and the HDFS layer run on the same machines: the JobTracker and NameNode on the master (hadoop-namenode), a TaskTracker and DataNode on each slave (hadoop-datanode1 … hadoop-datanode4).

JobTracker (master):
- Coordinates all M/R tasks & events
- Manages job queues and scheduling
- Maintains and controls TaskTrackers
- Moves/restarts map/reduce tasks if needed
- Uses "checkpointing" to combat single points of failure
- Controls and heartbeats the TaskTracker nodes

TaskTrackers (slaves): execute individual map and reduce tasks as assigned by the JobTracker (in a separate JVM); temporary data is stored to the local file system.

Page 35

The MR client submits jobs to the JobTracker, where they get queued. map()'s are assigned to TaskTrackers (HDFS DataNode locality aware); mappers are spawned in separate JVMs, execute, and store temporary results to the local file system. Then the reduce phase begins and reducers run on the TaskTrackers.

Page 36

Get sum of sales grouped by zipCode.

Blocks of the Sales file, with records of the form (custId, zipCode, amount), are spread across DataNode 1, DataNode 2, and DataNode 3 in HDFS.

Map tasks: each Mapper reads its local block and emits (zipCode, amount) pairs, e.g. (53705, $65), (53705, $30), (53705, $15), (54235, $75), (54235, $22), (02115, $15), (02115, $15), (44313, $10), (44313, $25), (44313, $55), (10025, $95), (10025, $60).

Each Mapper then groups its output by zipCode, producing one output bucket per reduce task.

Page 37

Reduce tasks: the mapper outputs are shuffled so that all pairs for a given zipCode arrive at the same Reducer. Each Reducer sorts its input by zipCode and applies SUM:

53705: $65 + $30 + $15 = $110
44313: $10 + $25 + $55 = $90
10025: $95 + $60 = $155
54235: $75 + $22 = $97
02115: $15 + $15 = $30
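Written against the Hadoop Java API, the whole query above boils down to a Mapper that emits (zipCode, amount) and a Reducer that sums each group. A minimal sketch, assuming comma-separated custId,zipCode,amount input lines; the class names and input/output paths are placeholders:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch of the "sum of sales grouped by zipCode" job from the slides,
// assuming input lines of the form: custId,zipCode,amount
public class SalesByZip {

    public static class SalesMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] f = line.toString().split(",");
            String zip = f[1].trim();
            double amount = Double.parseDouble(f[2].trim().replace("$", ""));
            context.write(new Text(zip), new DoubleWritable(amount));    // emit (zipCode, amount)
        }
    }

    public static class SumReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text zip, Iterable<DoubleWritable> amounts, Context context)
                throws IOException, InterruptedException {
            double sum = 0;
            for (DoubleWritable a : amounts) sum += a.get();              // SUM per zipCode
            context.write(zip, new DoubleWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "sales by zipCode");
        job.setJarByClass(SalesByZip.class);
        job.setMapperClass(SalesMapper.class);
        job.setCombinerClass(SumReducer.class);    // pre-aggregate on the map side
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The map side plus the optional combiner correspond to the mapper grouping on the previous slide; the shuffle, sort, and SUM correspond to the Reducer.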

Page 38

Actual number of Map tasks M is generally made much larger than the number of nodes used.

Why? Helps deal with data skew and failure

Example: Say M = 10,000 and

W = 100 (W is number of Map workers)

Each worker does (10,000 / 100) = 100 Map tasks

If it suffers from skew or fails the uncompleted work can

easily be shifted to another worker

Skew with Reducers is still a problem. Example: in a query like "get sales by zipCode", some zipCodes (e.g. 10025) may have many more sales records than others.

Page 39

Like HDFS, MapReduce framework designed to

be highly fault tolerant

Worker (Map or Reduce) failures

Detected by periodic Master pings

Map or Reduce jobs that fail are reset and then given

to a different node

If a node failure occurs after the Map job has

completed, the job is redone and all Reduce jobs are

notified

Master failure (Hadoop 2.1.0-beta 8/15/2013)

If the master fails there is now failover

Page 40

Highly fault tolerant

Relatively easy to write “arbitrary” distributed computations over very large amounts of data

MR framework removes burden of dealing with failures from programmer

Schema embedded in application code

A lack of shared schema makes sharing data between applications difficult

Makes lots of DBMS "goodies" such as indices, integrity constraints, views, … impossible

No declarative query language

Page 41

The pace of cloud-scale data processing available in the open source community is accelerating.

[Timeline, 2002–2012, grouped into Data Analytics, Workflow Management, Storage and File Systems, and Data Processing: GFS, MapReduce, BigTable, Chubby, Protocol Buffers, Dremel & Pregel, FlumeJava, Colossus, Spanner, Pig, ZooKeeper, Voldemort, Azkaban, Giraph, Crunch, Big Top.]

Page 42

HBase saw a sharp rise in its code base due, in large part, to contributions from Cloudera, Twitter, and Facebook.

[Chart, 2007–2012: lines of code grew from roughly 150K to 570K; committer counts of 3, 4, 4, 16, 19, 20 across the years; adopters grew from 5+ to 100+. Note: number of adopters is non-exhaustive.]

Page 43

Increased integration with Hadoop helped adoption of Cassandra grow from 20+ to 168+ companies between 2011 and 2012.

[Chart, 2009–2012: lines of code grew from roughly 50K to 210K; committer counts of 6, 6, 6, 12 across the years. Note: number of adopters is non-exhaustive.]

Page 44

Apache Mahout is driving a new era of machine learning analytics.

[Chart, 2008–2012: lines of code on a 0K–300K scale; committer counts of 3, 6, 11, 13, 14 across the years; 26+ adopters. Note: number of adopters is non-exhaustive.]

Page 45

At LinkedIn, in product development and operations: People You May Know, Who's Viewed My Profile, Career Explorer.

What it takes: 120B new daily relationships, 82 Hadoop jobs, 16TB of intermediate data, ~5 test algorithms weekly.

Page 46
