What's new with Apache Spark? Amsterdam Apache Spark Meetup 2014-11-24 meetup.com/Amsterdam-Spark/events/207220772/ Paco Nathan @pacoid


Page 1: What's new with Apache Spark?

What's new with Apache Spark?


Amsterdam Apache Spark Meetup 2014-11-24 meetup.com/Amsterdam-Spark/events/207220772/

Paco Nathan @pacoid

Page 2: What's new with Apache Spark?

What is Spark?

2

Page 3: What's new with Apache Spark?

Developed in 2009 at UC Berkeley AMPLab, then open sourced in 2010, Spark has since become one of the largest OSS communities in big data, with over 200 contributors in 50+ organizations

What is Spark?

spark.apache.org

“Organizations that are looking at big data challenges – including collection, ETL, storage, exploration and analytics – should consider Spark for its in-memory performance and the breadth of its model. It supports advanced analytics solutions on Hadoop clusters, including the iterative model required for machine learning and graph analysis.”

Gartner, Advanced Analytics and Data Science (2014)

3

Page 4: What's new with Apache Spark?

What is Spark?

4

Page 5: What's new with Apache Spark?

Spark Core is the general execution engine for the Spark platform, which all other functionality is built atop:

• in-memory computing capabilities deliver speed

• general execution model supports wide variety of use cases

• ease of development – native APIs in Java, Scala, Python (+ SQL, Clojure, R)

What is Spark?

5

Page 6: What's new with Apache Spark?

What is Spark?

WordCount in 3 lines of Spark

WordCount in 50+ lines of Java MR
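For reference, a minimal sketch of the 3-line version, assuming an existing SparkContext `sc` and placeholder input/output paths:

val lines = sc.textFile("hdfs://...")
val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("hdfs://...")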

6

Page 7: What's new with Apache Spark?

Sustained exponential growth, as one of the most active Apache projects ohloh.net/orgs/apache

What is Spark?

7

Page 8: What's new with Apache Spark?

databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html

TL;DR: Smashing The Previous Petabyte Sort Record

8

Page 9: What's new with Apache Spark?

A Brief History

9

Page 10: What's new with Apache Spark?

A Brief History: Functional Programming for Big Data

Theory, Eight Decades Ago: what can be computed?

Haskell Curry haskell.org
Alonzo Church wikipedia.org

Praxis, Four Decades Ago: algebra for applicative systems

John Backus acm.org
David Turner wikipedia.org

Reality, Two Decades Ago: machine data from web apps

Pattie Maes MIT Media Lab

10

Page 11: What's new with Apache Spark?

A Brief History: Functional Programming for Big Data

circa late 1990s: the explosive growth of e-commerce and machine data meant that workloads could no longer fit on a single computer…

notable firms led the shift to horizontal scale-out on clusters of commodity hardware, especially for machine learning use cases at scale

11

Page 12: What's new with Apache Spark?

[Diagram: a typical e-commerce architecture of the era. Web apps and middleware (servlets, models) write customer transactions to an RDBMS, queried via SQL for result sets; logs capture event history; DW/ETL aggregation feeds dashboards; algorithmic modeling drives recommenders and classifiers; the audiences are Product, Engineering, UX, stakeholders, and customers]

12

Page 13: What's new with Apache Spark?

Amazon
“Early Amazon: Splitting the website” – Greg Linden
glinden.blogspot.com/2006/02/early-amazon-splitting-website.html

eBay
“The eBay Architecture” – Randy Shoup, Dan Pritchett
addsimplicity.com/adding_simplicity_an_engi/2006/11/you_scaled_your.html
addsimplicity.com.nyud.net:8080/downloads/eBaySDForum2006-11-29.pdf

Inktomi (YHOO Search)
“Inktomi’s Wild Ride” – Eric Brewer (0:05:31 ff)
youtu.be/E91oEn1bnXM

Google
“Underneath the Covers at Google” – Jeff Dean (0:06:54 ff)
youtu.be/qsan-GQaeyk
perspectives.mvdirona.com/2008/06/11/JeffDeanOnGoogleInfrastructure.aspx

MIT Media Lab
“Social Information Filtering for Music Recommendation” – Pattie Maes
pubs.media.mit.edu/pubs/papers/32paper.ps
ted.com/speakers/pattie_maes.html

13

Page 14: What's new with Apache Spark?

A Brief History: Functional Programming for Big Data

circa 2002: mitigate risk of large distributed workloads lost due to disk failures on commodity hardware…

Google File System
Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung
research.google.com/archive/gfs.html

MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean, Sanjay Ghemawat
research.google.com/archive/mapreduce.html

14

Page 15: What's new with Apache Spark?

TL;DR: Generational trade-offs for handling Big Compute

Photo from John Wilkes’ keynote talk @ #MesosCon 2014

15

Page 16: What's new with Apache Spark?

TL;DR: Generational trade-offs for handling Big Compute

Cheap Storage ⇒ replicate (DFS)

Cheap Memory ⇒ recompute (RDD)

Cheap Network ⇒ reference (URI)

16

Page 17: What's new with Apache Spark?

A Brief History: Functional Programming for Big Data

2002: MapReduce @ Google

2004: MapReduce paper

2006: Hadoop @ Yahoo!

2008: Hadoop Summit

2010: Spark paper

2014: Apache Spark top-level

17

Page 18: What's new with Apache Spark?

A Brief History: Functional Programming for Big Data

MR doesn’t compose well for large applications, and so specialized systems emerged as workarounds

MapReduce: general batch processing

Specialized systems (iterative, interactive, streaming, graph, etc.): Pregel, Giraph, Dremel, Drill, Tez, Impala, GraphLab, Storm, S4, F1, MillWheel

18

Page 19: What's new with Apache Spark?

Spark: Cluster Computing with Working Sets
Matei Zaharia, Mosharaf Chowdhury, Michael Franklin, Scott Shenker, Ion Stoica
people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf

Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica
usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf

circa 2010: a unified engine for enterprise data workflows, based on commodity hardware a decade later…

A Brief History: Functional Programming for Big Data

19

Page 20: What's new with Apache Spark?

In addition to simple map and reduce operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms out-of-the-box.

Better yet, combine these capabilities seamlessly into one integrated workflow…

A Brief History: Functional Programming for Big Data

20

Page 21: What's new with Apache Spark?

• generalized patterns ⇒ unified engine for many use cases

• lazy evaluation of the lineage graph ⇒ reduces wait states, better pipelining (see the sketch after this list)

• generational differences in hardware ⇒ off-heap use of large memory spaces

• functional programming / ease of use ⇒ reduction in cost to maintain large apps

• lower overhead for starting jobs

• less expensive shuffles
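To make the lazy-evaluation point concrete, a minimal sketch (placeholder input path): the first two lines only record lineage, and nothing executes until the action.

val lines = sc.textFile("hdfs://...")             // transformation: lineage only
val errors = lines.filter(_.startsWith("ERROR"))  // transformation: lineage only
errors.count()                                    // action: triggers the actual computation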

A Brief History: Key distinctions for Spark vs. MapReduce

21

Page 22: What's new with Apache Spark?

Spark Deconstructed

22

Page 23: What's new with Apache Spark?

// load error messages from a log into memory
// then interactively search for various patterns
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132

// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2
messages.filter(_.contains("php")).count()

Spark Deconstructed: Log Mining Example

23

Page 24: What's new with Apache Spark?

Driver

Worker

Worker

Worker

Spark Deconstructed: Log Mining Example

We start with Spark running on a cluster… submitting code to be evaluated on it:

24

Page 25: What's new with Apache Spark?

(log-mining code repeated from slide 23)

Spark Deconstructed: Log Mining Example

discussing the other part

25

Page 26: What's new with Apache Spark?

Spark Deconstructed: Log Mining Example

scala> messages.toDebugString
res5: String =
MappedRDD[4] at map at <console>:16 (3 partitions)
 MappedRDD[3] at map at <console>:16 (3 partitions)
  FilteredRDD[2] at filter at <console>:14 (3 partitions)
   MappedRDD[1] at textFile at <console>:12 (3 partitions)
    HadoopRDD[0] at textFile at <console>:12 (3 partitions)

At this point, take a look at the transformed RDD operator graph:

26

Page 27: What's new with Apache Spark?

Driver

Worker

Worker

Worker

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

27

Page 28: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

28

Page 29: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

29

Page 30: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

read HDFS block

read HDFS block

read HDFS block

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

30

Page 31: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

cache 1

cache 2

cache 3

process, cache data

process, cache data

process, cache data

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

31

Page 32: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

cache 1

cache 2

cache 3

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

32

Page 33: What's new with Apache Spark?

(log-mining code repeated from slide 23)

Driver

Worker

Worker

Worker

block 1

block 2

block 3

cache 1

cache 2

cache 3

Spark Deconstructed: Log Mining Example

discussing the other part

33

Page 34: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

cache 1

cache 2

cache 3

process from cache

process from cache

process from cache

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

34

Page 35: What's new with Apache Spark?

Driver

Worker

Worker

Worker

block 1

block 2

block 3

cache 1

cache 2

cache 3

Spark Deconstructed: Log Mining Example

(log-mining code repeated from slide 23)

discussing the other part

35

Page 36: What's new with Apache Spark?

Looking at the RDD transformations and actions from another perspective…

Spark Deconstructed:

[Diagram: a chain of RDDs linked by transformations, ending in an action that returns a value]

(log-mining code repeated from slide 23)

36

Page 37: What's new with Apache Spark?

Spark Deconstructed:

RDD

// base RDD
val lines = sc.textFile("hdfs://...")

37

Page 38: What's new with Apache Spark?

[Diagram: a chain of RDDs linked by transformations]

Spark Deconstructed:

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

38

Page 39: What's new with Apache Spark?

[Diagram: a chain of RDDs linked by transformations, then an action returning a value]

Spark Deconstructed:

// action 1
messages.filter(_.contains("mysql")).count()

39

Page 40: What's new with Apache Spark?

Unifying the Pieces

40

Page 41: What's new with Apache Spark?

// http://spark.apache.org/docs/latest/sql-programming-guide.html

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

// define the schema using a case class
case class Person(name: String, age: Int)

// create an RDD of Person objects and register it as a table
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt))

people.registerAsTempTable("people")

// SQL statements can be run using the SQL methods provided by sqlContext
val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

// results of SQL queries are SchemaRDDs and support all the
// normal RDD operations…
// columns of a row in the result can be accessed by ordinal
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)

Unifying the Pieces: Spark SQL

41

Page 42: What's new with Apache Spark?

// http://spark.apache.org/docs/latest/streaming-programming-guide.html

import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// create a StreamingContext with a SparkConf configuration
val ssc = new StreamingContext(sparkConf, Seconds(10))

// create a DStream that will connect to serverIP:serverPort
val lines = ssc.socketTextStream(serverIP, serverPort)

// split each line into words
val words = lines.flatMap(_.split(" "))

// count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// print a few of the counts to the console
wordCounts.print()

ssc.start()             // start the computation
ssc.awaitTermination()  // wait for the computation to terminate

Unifying the Pieces: Spark Streaming

42

Page 43: What's new with Apache Spark?

MLI: An API for Distributed Machine Learning
Evan Sparks, Ameet Talwalkar, et al.
International Conference on Data Mining (2013)
arxiv.org/abs/1310.5426

Unifying the Pieces: MLlib

// http://spark.apache.org/docs/latest/mllib-guide.html

val train_data = // RDD of Vector
val model = KMeans.train(train_data, k=10)

// evaluate the model
val test_data = // RDD of Vector
test_data.map(t => model.predict(t)).collect().foreach(println)
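The snippet above elides its data. A runnable variant, with made-up 2-D points and arbitrary choices of k and iteration count (the Scala API takes k and maxIterations positionally), might look like:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// hypothetical training data: two well-separated clusters of 2-D points
val train_data = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.1), Vectors.dense(0.2, 0.0),
  Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 8.9)
))
val model = KMeans.train(train_data, 2, 20)  // k = 2, maxIterations = 20

// evaluate the model: predict a cluster for each test point
val test_data = sc.parallelize(Seq(Vectors.dense(0.1, 0.1), Vectors.dense(9.1, 9.0)))
test_data.map(t => model.predict(t)).collect().foreach(println)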

43

Page 44: What's new with Apache Spark?

// http://spark.apache.org/docs/latest/graphx-programming-guide.html

import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

case class Peep(name: String, age: Int)

val vertexArray = Array(
  (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)),
  (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)),
  (5L, Peep("Leslie", 45))
)
val edgeArray = Array(
  Edge(2L, 1L, 7), Edge(2L, 4L, 2),
  Edge(3L, 2L, 4), Edge(3L, 5L, 3),
  Edge(4L, 1L, 1), Edge(5L, 3L, 9)
)

val vertexRDD: RDD[(Long, Peep)] = sc.parallelize(vertexArray)
val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray)
val g: Graph[Peep, Int] = Graph(vertexRDD, edgeRDD)

val results = g.triplets.filter(t => t.attr > 7)

for (triplet <- results.collect) {
  println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}")
}

Unifying the Pieces: GraphX

44

Page 45: What's new with Apache Spark?

Spark Streaming

45

Page 46: What's new with Apache Spark?

Let’s consider the top-level requirements for a streaming framework:

• clusters scalable to 100’s of nodes

• low-latency, in the range of seconds (meets 90% of use case needs)

• efficient recovery from failures (which is a hard problem in CS)

• integrates with batch: many companies run the same business logic both online and offline

Spark Streaming: Requirements

46

Page 47: What's new with Apache Spark?

Therefore, run a streaming computation as: a series of very small, deterministic batch jobs

• Chop up the live stream into batches of X seconds

• Spark treats each batch of data as RDDs and processes them using RDD operations (see the sketch after this list)

• Finally, the processed results of the RDD operations are returned in batches
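A minimal sketch of that batch-of-RDDs model, assuming an existing SparkContext `sc` and a hypothetical socket source on localhost:9999; foreachRDD exposes each micro-batch as an ordinary RDD:

import org.apache.spark.streaming._

// 10-second micro-batches
val ssc = new StreamingContext(sc, Seconds(10))
val stream = ssc.socketTextStream("localhost", 9999)

// each batch interval yields one RDD, processed with normal RDD operations
stream.foreachRDD { rdd =>
  println(s"events in this batch: ${rdd.count()}")
}

ssc.start()
ssc.awaitTermination()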

Spark Streaming: Requirements

47

Page 48: What's new with Apache Spark?

Therefore, run a streaming computation as: a series of very small, deterministic batch jobs

• Batch sizes as low as ½ sec, latency of about 1 sec

• Potential for combining batch processing and streaming processing in the same system

Spark Streaming: Requirements

48

Page 49: What's new with Apache Spark?

Data can be ingested from many sources: Kafka, Flume, Twitter, ZeroMQ, TCP sockets, etc.

Results can be pushed out to filesystems, databases, live dashboards, etc.

Spark’s built-in machine learning algorithms and graph processing algorithms can be applied to data streams
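A sketch of that last point, assuming an existing StreamingContext `ssc`, a hypothetical socket source of comma-separated numbers, and a pre-trained KMeans `model` like the MLlib example earlier in this deck; each micro-batch is scored with ordinary RDD operations via transform:

import org.apache.spark.mllib.linalg.Vectors

val lines = ssc.socketTextStream("localhost", 9999)  // hypothetical source
val vectors = lines.map(s => Vectors.dense(s.split(",").map(_.toDouble)))

// assign each incoming point to a cluster, batch by batch
val clusters = vectors.transform(rdd => rdd.map(v => model.predict(v)))
clusters.print()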

Spark Streaming: Integration

49

Page 50: What's new with Apache Spark?

2012 project started

2013 alpha release (Spark 0.7)

2014 graduated (Spark 0.9)

Spark Streaming: Timeline

Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica Berkeley EECS (2012-12-14) www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf

project lead: Tathagata Das @tathadas

50

Page 51: What's new with Apache Spark?

Typical kinds of applications:

• datacenter operations

• web app funnel metrics

• ad optimization

• anti-fraud

• various telematics

and much much more!

Spark Streaming: Requirements

51

Page 52: What's new with Apache Spark?

Programming Guide
spark.apache.org/docs/latest/streaming-programming-guide.html

TD @ Spark Summit 2014 youtu.be/o-NXwFrNAWQ?list=PLTPXxbhUt-YWGNTaDj6HSjnHMxiTD1HCR

“Deep Dive into Spark Streaming”
slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617

Spark Reference Applications
databricks.gitbooks.io/databricks-spark-reference-applications/

Spark Streaming: Some Excellent Resources

52

Page 53: What's new with Apache Spark?

import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// create a StreamingContext with a SparkConf configuration
val ssc = new StreamingContext(sparkConf, Seconds(10))

// create a DStream that will connect to serverIP:serverPort
val lines = ssc.socketTextStream(serverIP, serverPort)

// split each line into words
val words = lines.flatMap(_.split(" "))

// count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// print a few of the counts to the console
wordCounts.print()

ssc.start()
ssc.awaitTermination()

Quiz: name the bits and pieces…

53

Page 54: What's new with Apache Spark?

A Look Ahead…

54

Page 55: What's new with Apache Spark?

1. Greater Stability and Robustness

• improved high availability via write-ahead logs (see the config sketch after this list)

• enabled as an optional feature for Spark 1.2

• NB: Spark Standalone can already restart the driver

• excellent discussion of fault-tolerance (2012): cs.duke.edu/~kmoses/cps516/dstream.html

• stay tuned: meetup.com/spark-users/events/218108702/
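A sketch of what enabling the write-ahead log might look like, assuming the Spark 1.2 configuration key spark.streaming.receiver.writeAheadLog.enable and placeholder paths:

import org.apache.spark.SparkConf
import org.apache.spark.streaming._

val conf = new SparkConf()
  .setAppName("StreamingWithWAL")
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")  // assumed 1.2 config key

val ssc = new StreamingContext(conf, Seconds(10))
ssc.checkpoint("hdfs://...")  // received data is logged to a reliable checkpoint directory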

A Look Ahead…

55

Page 56: What's new with Apache Spark?

2. Support for more environments, i.e., beyond Hadoop

• three use cases currently depend on HDFS

• those are being abstracted out

• could then use Cassandra, etc.

A Look Ahead…

56

Page 57: What's new with Apache Spark?

3. Improved support for Python

• e.g., Kafka is not exposed through Python yet (next release goal)

A Look Ahead…

57

Page 58: What's new with Apache Spark?

4. Better flow control

• a somewhat longer-term goal, plus it is a hard problem in general

• poses interesting challenges beyond what other streaming systems have faced

A Look Ahead…

58

Page 59: What's new with Apache Spark?

Demos

59

Page 60: What's new with Apache Spark?

brand new Python support for Streaming in 1.2 github.com/apache/spark/tree/master/examples/src/main/python/streaming

Twitter Streaming Language Classifier
databricks.gitbooks.io/databricks-spark-reference-applications/content/twitter_classifier/README.html

For more Spark learning resources online:
databricks.com/spark-training-resources

Demos, as time permits:

60

Page 61: What's new with Apache Spark?

import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="PyStreamNWC", master="local[*]")
ssc = StreamingContext(sc, 5)  # 5-second batch interval (the Python API takes a number of seconds)

lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))

counts = lines.flatMap(lambda line: line.split(" ")) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)

counts.pprint()

ssc.start()
ssc.awaitTermination()

Demo: PySpark Streaming Network Word Count

61

Page 62: What's new with Apache Spark?

import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def updateFunc(new_values, last_sum):
    # running total: this batch's counts plus the previous state
    return sum(new_values) + (last_sum or 0)

sc = SparkContext(appName="PyStreamNWC", master="local[*]")
ssc = StreamingContext(sc, 5)  # 5-second batch interval
ssc.checkpoint("checkpoint")   # required for stateful operations

lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))

counts = lines.flatMap(lambda line: line.split(" ")) \
              .map(lambda word: (word, 1)) \
              .updateStateByKey(updateFunc) \
              .transform(lambda x: x.sortByKey())

counts.pprint()

ssc.start()
ssc.awaitTermination()

Demo: PySpark Streaming Network Word Count - Stateful

62

Page 63: What's new with Apache Spark?

Complementary Frameworks

63

Page 64: What's new with Apache Spark?

Spark Integrations:

Discover Insights · Clean Up Your Data · Run Sophisticated Analytics · Integrate With Many Other Systems · Use Lots of Different Data Sources

cloud-based notebooks… ETL… the Hadoop ecosystem… widespread use of PyData… advanced analytics in streaming… rich custom search… web apps for data APIs… low-latency + multi-tenancy…

64

Page 65: What's new with Apache Spark?

Databricks Cloud databricks.com/blog/2014/07/14/databricks-cloud-making-big-data-easy.html youtube.com/watch?v=dJQ5lV5Tldw#t=883

Spark Integrations: Unified platform for building Big Data pipelines

65

Page 66: What's new with Apache Spark?

unified compute

Spark + Hadoop + HBase + etc. mapr.com/products/apache-spark

vision.cloudera.com/apache-spark-in-the-apache-hadoop-ecosystem/

hortonworks.com/hadoop/spark/

databricks.com/blog/2014/05/23/pivotal-hadoop-integrates-the-full-apache-spark-stack.html

Spark Integrations: The proverbial Hadoop ecosystem

hadoop ecosystem

66

Page 67: What's new with Apache Spark?

unified compute

Spark + PyData spark-summit.org/2014/talk/A-platform-for-large-scale-neuroscience

cwiki.apache.org/confluence/display/SPARK/PySpark+Internals

Spark Integrations: Leverage widespread use of Python

PyData

67

Page 68: What's new with Apache Spark?

unified compute

Kafka + Spark + Cassandra datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkIntro.html http://helenaedelson.com/?p=991

github.com/datastax/spark-cassandra-connector

github.com/dibbhatt/kafka-spark-consumer

columnar key-value · data streams

Spark Integrations: Advanced analytics for streaming use cases

68

Page 69: What's new with Apache Spark?

unified compute

Spark + ElasticSearch databricks.com/blog/2014/06/27/application-spotlight-elasticsearch.html

elasticsearch.org/guide/en/elasticsearch/hadoop/current/spark.html

spark-summit.org/2014/talk/streamlining-search-indexing-using-elastic-search-and-spark

document search

Spark Integrations: Rich search, immediate insights

69

Page 70: What's new with Apache Spark?

unified compute

Spark + Play typesafe.com/blog/apache-spark-and-the-typesafe-reactive-platform-a-match-made-in-heaven

web apps

Spark Integrations: Building data APIs with web apps

70

Page 71: What's new with Apache Spark?

unified compute

cluster resources

Spark + Mesos spark.apache.org/docs/latest/running-on-mesos.html

+ Mesosphere + Google Cloud Platform ceteri.blogspot.com/2014/09/spark-atop-mesos-on-google-cloud.html

Spark Integrations: The case for multi-tenancy

71

Page 72: What's new with Apache Spark?

Because Use Cases

72

Page 73: What's new with Apache Spark?

Because Use Cases: 40+ known production use cases

73

Page 74: What's new with Apache Spark?

Spark at Twitter: Evaluation & Lessons Learnt Sriram Krishnan slideshare.net/krishflix/seattle-spark-meetup-spark-at-twitter

• Spark can be more interactive, efficient than MR

• Support for iterative algorithms and caching

• More generic than traditional MapReduce

• Why is Spark faster than Hadoop MapReduce?

• Fewer I/O synchronization barriers

• Less expensive shuffle

• The more complex the DAG, the greater the performance improvement

74

Because Use Cases: Twitter

Page 75: What's new with Apache Spark?

Using Spark to Ignite Data Analytics ebaytechblog.com/2014/05/28/using-spark-to-ignite-data-analytics/

75

Because Use Cases: eBay/PayPal

Page 76: What's new with Apache Spark?

Hadoop and Spark Join Forces in Yahoo Andy Feng spark-summit.org/talk/feng-hadoop-and-spark-join-forces-at-yahoo/

76

Because Use Cases: Yahoo!

Page 77: What's new with Apache Spark?

Collaborative Filtering with Spark Chris Johnson slideshare.net/MrChrisJohnson/collaborative-filtering-with-spark

• collab filter (ALS) for music recommendation

• Hadoop suffers from I/O overhead

• show a progression of code rewrites, converting a Hadoop-based app into efficient use of Spark
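For flavor, a minimal MLlib ALS sketch of the kind of collaborative filtering described (made-up ratings; rank and iteration count are arbitrary):

import org.apache.spark.mllib.recommendation.{ALS, Rating}

// hypothetical (user, product, rating) triples
val ratings = sc.parallelize(Seq(
  Rating(1, 10, 5.0), Rating(1, 20, 1.0),
  Rating(2, 10, 4.0), Rating(2, 30, 2.0)
))
val model = ALS.train(ratings, 10, 10)  // rank = 10, 10 iterations

// predicted rating for user 1 on product 30
println(model.predict(1, 30))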

77

Because Use Cases: Spotify

Page 78: What's new with Apache Spark?

Because Use Cases: Sharethrough

Sharethrough Uses Spark Streaming to Optimize Bidding in Real Time Russell Cardullo, Michael Ruggier 2014-03-25 databricks.com/blog/2014/03/25/sharethrough-and-spark-streaming.html

78

• the profile of a 24 x 7 streaming app is different from that of an hourly batch job…

• take time to validate output against the input…

• confirm that supporting objects are being serialized…

• the output of your Spark Streaming job is only as reliable as the queue that feeds Spark…

• monoids…
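The "monoids" point refers to keeping streaming aggregations associative with an identity element, so partial results from different batches and partitions can be merged in any order; a tiny sketch of the idea:

// word counts form a monoid: the empty Map is the identity, merging is associative
def merge(a: Map[String, Long], b: Map[String, Long]): Map[String, Long] =
  b.foldLeft(a) { case (acc, (word, n)) =>
    acc.updated(word, acc.getOrElse(word, 0L) + n)
  }

val batch1 = Map("spark" -> 2L, "streaming" -> 1L)
val batch2 = Map("spark" -> 3L)

// merge order does not matter, which makes re-aggregation after failures safe
assert(merge(merge(batch1, batch2), Map.empty) == merge(batch2, batch1))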

Page 79: What's new with Apache Spark?

Because Use Cases: Ooyala

Productionizing a 24/7 Spark Streaming service on YARN Issac Buenrostro, Arup Malakar 2014-06-30 spark-summit.org/2014/talk/productionizing-a-247-spark-streaming-service-on-yarn

79

• state-of-the-art ingestion pipeline, processing over two billion video events a day

• how do you ensure 24/7 availability and fault tolerance?

• what are the best practices for Spark Streaming and its integration with Kafka and YARN?

• how do you monitor and instrument the various stages of the pipeline?

Page 80: What's new with Apache Spark?

Because Use Cases: Viadeo

Spark Streaming As Near Realtime ETL Djamel Zouaoui 2014-09-18 slideshare.net/DjamelZouaoui/spark-streaming

80

• Spark Streaming is topology-free

• workers and receivers are autonomous and independent

• paired with Kafka, RabbitMQ

• 8 machines / 120 cores

• use case for recommender system

• issues: how to handle lost data, serialization

Page 81: What's new with Apache Spark?

Because Use Cases: Stratio

Stratio Streaming: a new approach to Spark Streaming David Morales, Oscar Mendez 2014-06-30 spark-summit.org/2014/talk/stratio-streaming-a-new-approach-to-spark-streaming

81

• Stratio Streaming is the union of a real-time messaging bus with a complex event processing engine using Spark Streaming

• allows the creation of streams and queries on the fly

• paired with Siddhi CEP engine and Apache Kafka

• added global features to the engine such as auditing and statistics

Page 82: What's new with Apache Spark?

Because Use Cases: Guavus

Guavus Embeds Apache Spark into its Operational Intelligence Platform Deployed at the World’s Largest Telcos Eric Carr 2014-09-25 databricks.com/blog/2014/09/25/guavus-embeds-apache-spark-into-its-operational-intelligence-platform-deployed-at-the-worlds-largest-telcos.html

82

• 4 of the top 5 mobile network operators, 3 of the top 5 Internet backbone providers, 80% of MSOs in North America

• analyzing 50% of US mobile data traffic, 2.5+ PB/day

• latency is critical for resolving operational issues before they cascade: 2.5 MM transactions per second

• “analyze first,” not “store first, ask questions later”

Page 83: What's new with Apache Spark?

Why Spark is the Next Top (Compute) Model Dean Wampler slideshare.net/deanwampler/spark-the-next-top-compute-model

• Hadoop: most algorithms are much harder to implement in this restrictive map-then-reduce model

• Spark: fine-grained “combinators” for composing algorithms

• slide #67, any questions?

83

Because Use Cases: Typesafe

Page 84: What's new with Apache Spark?

Installing the Cassandra / Spark OSS Stack
Al Tobey
tobert.github.io/post/2014-07-15-installing-cassandra-spark-stack.html

• install+config for Cassandra and Spark together

• spark-cassandra-connector integration

• examples show a Spark shell that can access tables in Cassandra as RDDs with types pre-mapped and ready to go
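A minimal sketch of what that looks like with the spark-cassandra-connector (hypothetical keyspace and table names):

import com.datastax.spark.connector._  // adds cassandraTable to SparkContext

// hypothetical keyspace "test" and table "kv"
val rdd = sc.cassandraTable("test", "kv")
println(rdd.count())
println(rdd.first())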

84

Because Use Cases: DataStax

Page 85: What's new with Apache Spark?

One platform for all: real-time, near-real-time, and offline video analytics on Spark Davis Shepherd, Xi Liu spark-summit.org/talk/one-platform-for-all-real-time-near-real-time-and-offline-video-analytics-on-spark

85

Because Use Cases: Conviva

Page 86: What's new with Apache Spark?

Resources

86

Page 87: What's new with Apache Spark?

Apache Spark developer certificate program

• http://oreilly.com/go/sparkcert

• defined by Spark experts @Databricks

• assessed by O’Reilly Media

• establishes the bar for Spark expertise

certification:

87

Page 88: What's new with Apache Spark?

community:

spark.apache.org/community.html

video+slide archives: spark-summit.org

events worldwide: goo.gl/2YqJZK

resources: databricks.com/spark-training-resources

workshops: databricks.com/spark-training

Intro to Spark

Spark AppDev

Spark DevOps

Spark DataSci

Distributed ML on Spark

Streaming Apps on Spark

Spark + Cassandra

88

Page 89: What's new with Apache Spark?

books:

Fast Data Processing with Spark
Holden Karau
Packt (2013)
shop.oreilly.com/product/9781782167068.do

Spark in Action
Chris Fregly
Manning (2015*)
sparkinaction.com/

Learning Spark
Holden Karau, Andy Konwinski, Matei Zaharia
O’Reilly (2015*)
shop.oreilly.com/product/0636920028512.do

89

Page 90: What's new with Apache Spark?

events:

Big Data Spain: Madrid, Nov 17-18, bigdataspain.org
Strata EU: Barcelona, Nov 19-21, strataconf.com/strataeu2014
Data Day Texas: Austin, Jan 10, datadaytexas.com
Strata CA: San Jose, Feb 18-20, strataconf.com/strata2015
Spark Summit East: NYC, Mar 18-19, spark-summit.org/east
Strata EU: London, May 5-7, strataconf.com/big-data-conference-uk-2015
Spark Summit 2015: SF, Jun 15-17, spark-summit.org

90

Page 91: What's new with Apache Spark?

presenter:

Just Enough Math
O’Reilly, 2014
justenoughmath.com
preview: youtu.be/TQ58cWgdCpA

monthly newsletter for updates, events, conf summaries, etc.: liber118.com/pxn/

Enterprise Data Workflows with Cascading
O’Reilly, 2013

shop.oreilly.com/product/0636920028536.do

91