Meet Up - Spark Stream Processing + Kafka


Satendra Kumar
Sr. Software Consultant
Knoldus Software LLP

Stream Processing

Topics Covered

What is Stream

What is Stream processing

The challenges of stream processing

Overview Spark Streaming

Receivers

Custom receivers

Transformations on DStreams

Failures

Fault-tolerance Semantics

Kafka Integration

Performance Tuning

What is Stream


A stream is a sequence of data elements made available over time and which can be accessed in sequential order.

For example: YouTube video buffering.

What is Stream processing

Stream processing is the real-time processing of data continuously, concurrently, and in a record-by-record fashion.

It treats data not as static tables or files, but as a continuous infinite stream of data integrated from both live and historical sources.

The challenges of stream processing

Partitioning & Scalability

Semantics & Fault tolerance

Unifying the streams

Time

Re-Processing

Spark Streaming

Provides a way to process the live data streams.

Scalable, high-throughput, fault-tolerant.

Built on top of the core Spark API.

API is very similar to Spark core API.

Supports many sources like Kafka, Flume, Kinesis or TCP sockets.

Currently based on RDDs.

Data can be ingested from many sources like Kafka, Flume, Kinesis.

Data can be processed using complex algorithms expressed with high-level functions like map, reduce, join and window.

Processed data can be pushed out to filesystems, databases, and live dashboards

Spark Streaming

Discretized Streams

It provides a high-level abstraction called discretized stream, or DStream, which represents a continuous stream of data.

DStreams can be created either from input data streams from sources such as Kafka, Flume, and Kinesis, or by applying high-level operations on other DStreams.

DStream is represented as a sequence of RDDs.

High level overview

Driver Program

object StreamingApp extends App {

  val sparkConf = new SparkConf().setMaster("local[*]").setAppName("StreamingApp")

  // Streaming Context with a 5-second Batch Interval
  val streamingContext = new StreamingContext(sparkConf, Seconds(5))

  // Receiver: reads lines from a TCP socket
  val lines: ReceiverInputDStream[String] = streamingContext.socketTextStream("localhost", 9000)

  // Transformations on DStreams
  val words: DStream[String] = lines.flatMap(_.split(" "))
  val filteredWords: DStream[String] = words.filter(!_.trim.isEmpty)
  val pairs: DStream[(String, Int)] = filteredWords.map(word => (word, 1))
  val wordCounts: DStream[(String, Int)] = pairs.reduceByKey(_ + _)

  // Output Operation on DStreams
  wordCounts.print()

  // Start the Streaming
  streamingContext.start()
  streamingContext.awaitTermination()
}


Important Points

Once a context has been started, no new streaming computations can be set up or added to it.

Once a context has been stopped, it cannot be restarted.

Only one StreamingContext can be active in a JVM at the same time.

stop() on StreamingContext also stops the SparkContext. To stop only the StreamingContext, set the optional parameter of stop() called stopSparkContext to false.

A SparkContext can be re-used to create multiple StreamingContexts, as long as the previous StreamingContext is stopped (without stopping the SparkContext) before the next StreamingContext is created.
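A minimal sketch of the last two points, assuming a socket source and hypothetical ports; the key call is stop(stopSparkContext = false) before building the next StreamingContext on the same SparkContext.

import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ContextReuse extends App {
  val sc = new SparkContext("local[*]", "ContextReuseApp") // shared SparkContext

  val ssc1 = new StreamingContext(sc, Seconds(5))
  ssc1.socketTextStream("localhost", 9000).print()         // some streaming computation
  ssc1.start()
  ssc1.awaitTerminationOrTimeout(30000)                    // run for a while
  ssc1.stop(stopSparkContext = false)                      // stop streaming only, keep the SparkContext

  val ssc2 = new StreamingContext(sc, Seconds(10))         // new StreamingContext on the same SparkContext
  ssc2.socketTextStream("localhost", 9001).print()
  ssc2.start()
  ssc2.awaitTermination()
}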

Spark Streaming Concept

Spark Streaming is based on a micro-batch architecture.

Spark Streaming continuously receives live input data streams and divides the data into batches.

New batches are created at regular time intervals called the batch interval.

Each batch has N blocks, where N = batch interval / block interval. For example, if the batch interval is 1 second and the block interval is 200 ms (the default), each batch has 5 blocks.
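For illustration, a sketch of how the block interval is tuned via the spark.streaming.blockInterval property (200ms is the documented default); the app name is hypothetical.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BlockIntervalApp extends App {
  val sparkConf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("BlockIntervalApp")
    .set("spark.streaming.blockInterval", "200ms") // the default block interval

  // batch interval = 1 s, block interval = 200 ms
  // => roughly 1000 ms / 200 ms = 5 blocks (and hence 5 tasks) per receiver per batch
  val streamingContext = new StreamingContext(sparkConf, Seconds(1))
}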

Transforming DStream

DStream is represented by a continuous series of RDDs

Each RDD in a DStream contains data from a certain interval

Any operation applied on a DStream translates to operations on the underlying RDDs

Processing time of a batch should be less than or equal to the batch interval.

Transformations on DStreams

def map[U: ClassTag](mapFunc: T => U): DStream[U]

def flatMap[U: ClassTag](flatMapFunc: T => TraversableOnce[U]): DStream[U]

def filter(filterFunc: T => Boolean): DStream[T]

def reduce(reduceFunc: (T, T) => T): DStream[T]

def count(): DStream[Long]

def repartition(numPartitions: Int): DStream[T]

def countByValue(numPartitions: Int = ssc.sc.defaultParallelism): DStream[(T, Long)]

def transform[U: ClassTag](transformFunc: RDD[T] => RDD[U]): DStream[U]
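As a small illustration of transform, which exposes the underlying RDD of each batch: the sketch below filters a hypothetical stream of words against a static blacklist RDD (names, host and port are made up).

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

object TransformExample extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("TransformExample")
  val ssc  = new StreamingContext(conf, Seconds(5))

  // Static blacklist RDD, subtracted from every batch via transform.
  val blacklist = ssc.sparkContext.parallelize(Seq("spam", "junk"))

  val words: DStream[String] = ssc.socketTextStream("localhost", 9000).flatMap(_.split(" "))
  val cleaned: DStream[String] = words.transform(rdd => rdd.subtract(blacklist))

  cleaned.print()
  ssc.start()
  ssc.awaitTermination()
}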

Transformations on PairDStream

def groupByKey(): DStream[(K, Iterable[V])]

def reduceByKey(reduceFunc: (V, V) => V, numPartitions: Int): DStream[(K, V)]

def join[W: ClassTag](other: DStream[(K, W)]): DStream[(K, (V, W))]

def updateStateByKey[S: ClassTag]( updateFunc: (Seq[V], Option[S]) => Option[S],partitioner: Partitioner): DStream[(K, S)]

def cogroup[W: ClassTag]( other: DStream[(K, W)], numPartitions: Int): DStream[(K, (Iterable[V], Iterable[W]))]

def mapValues[U: ClassTag](mapValuesFunc: V => U): DStream[(K, U)]

def leftOuterJoin[W: ClassTag]( other: DStream[(K, W)],numPartitions: Int): DStream[(K, (V, Option[W]))]

def rightOuterJoin[W: ClassTag]( other: DStream[(K, W)], numPartitions: Int): DStream[(K, (Option[V], W))]
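A short sketch of a per-batch join on a PairDStream, assuming two hypothetical keyed streams built from socket sources; mapValues is then used to format the joined values.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

object JoinExample extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("JoinExample")
  val ssc  = new StreamingContext(conf, Seconds(5))

  // Two hypothetical keyed streams (user -> count) from two socket sources.
  val clicks: DStream[(String, Int)] =
    ssc.socketTextStream("localhost", 9000).map(user => (user, 1)).reduceByKey(_ + _)
  val views: DStream[(String, Int)] =
    ssc.socketTextStream("localhost", 9001).map(user => (user, 1)).reduceByKey(_ + _)

  // Per-batch join on the user key; leftOuterJoin / cogroup work the same way.
  val joined: DStream[(String, (Int, Int))] = clicks.join(views)
  joined.mapValues { case (c, v) => s"clicks=$c views=$v" }.print()

  ssc.start()
  ssc.awaitTermination()
}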

updateStateByKey

object StreamingApp extends App {

val sparkConf = new SparkConf().setMaster("local[*]").setAppName("StreamingApp")

val streamingContext = new StreamingContext(sparkConf, Seconds(5))

streamingContext.checkpoint(".")

val lines = streamingContext.socketTextStream("localhost", 9000)

val words: DStream[String] = lines.flatMap(_.split(" "))

val filteredWords: DStream[String] = words.filter(!_.trim.isEmpty)

val pairs: DStream[(String, Int)] = filteredWords.map(word => (word, 1))

val updatedState: DStream[(String, Int)] = pairs.updateStateByKey[Int] { (newValues: Seq[Int], state: Option[Int]) => Some(newValues.sum + state.getOrElse(0)) }

updatedState.print()

streamingContext.start()
streamingContext.awaitTermination()

}

Window Operations

Spark Streaming also provides windowed computations, which allow you to apply transformations over a sliding window of data.

A window operation needs two parameters:

window length - the duration of the window.

sliding interval - the interval at which the window operation is performed.

Window Operations

def window(windowDuration: Duration): DStream[T]

def window(windowDuration: Duration, slideDuration: Duration): DStream[T]

def reduceByWindow(reduceFunc: (T, T) => T, windowDuration: Duration, slideDuration: Duration): DStream[T]

def countByWindow(windowDuration: Duration, slideDuration: Duration): DStream[Long]

def countByValueAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(T, Long)]

// PairDStream operations

def groupByKeyAndWindow(windowDuration: Duration): DStream[(K, Iterable[V])]

def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): DStream[(K, Iterable[V])]

def reduceByKeyAndWindow(reduceFunc: (V, V) => V, windowDuration: Duration): DStream[(K, V)]

def reduceByKeyAndWindow(reduceFunc: (V, V) => V, windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]

Window Operations

pairs.window(Seconds(15), Seconds(10))

filteredWords.reduceByWindow((a, b) => a + ", " + b, Seconds(15), Seconds(10))

pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(15), Seconds(10))
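There is also an incremental variant of reduceByKeyAndWindow that takes an inverse reduce function, so each new window is computed from the previous one rather than from scratch. A sketch reusing pairs and streamingContext from the earlier driver program (window sizes are illustrative, and checkpointing must be enabled):

streamingContext.checkpoint("checkpointDir")

val windowedWordCounts: DStream[(String, Int)] =
  pairs.reduceByKeyAndWindow(
    (a: Int, b: Int) => a + b, // add counts that enter the window
    (a: Int, b: Int) => a - b, // subtract counts that leave the window
    Seconds(15),               // window length
    Seconds(10)                // sliding interval
  )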

Output Operations on DStreams

def print(num: Int): Unit

def saveAsObjectFiles(prefix: String, suffix: String = ""): Unit

def saveAsTextFiles(prefix: String, suffix: String = ""): Unit

def foreachRDD(foreachFunc: RDD[T] => Unit): Unit

def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String = ""): Unit
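foreachRDD is the most general output operation; a common pattern (sketched below for the wordCounts stream from the earlier driver program) is to create one connection per partition on the executors rather than on the driver. createConnection and send are hypothetical helpers for whatever external store is used.

wordCounts.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    val connection = createConnection()                    // hypothetical; runs on the executor
    partition.foreach(record => send(connection, record))  // hypothetical write to an external store
    connection.close()
  }
}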

Custom Receivers

Spark Streaming can receive streaming data from any arbitrary data source beyond the ones for which it has built-in support (that is, beyond Flume, Kafka, Kinesis, files, sockets, etc.).

This requires the developer to implement a receiver that is customized for receiving data from the concerned data source.

This guide walks through the process of implementing a custom receiver and using it in a Spark Streaming application.

Note that custom receivers can be implemented in Scala or Java.

Custom Receiver

object CustomReceiver extends App {

  val sparkConf = new SparkConf().setAppName("CustomReceiver")
  val ssc = new StreamingContext(sparkConf, Seconds(1))

  val lines = ssc.receiverStream(new CustomReceiver(args(0)))
  val words = lines.flatMap(_.split(" "))
  val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
  wordCounts.print()

  ssc.start()
  ssc.awaitTermination()
}
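The CustomReceiver class used above is not shown on the slide; a minimal sketch is given below, assuming args(0) is a file path whose lines are streamed. It follows the standard Receiver contract: onStart must not block, records are handed to Spark with store, and restart is called on failure.

import java.io.{BufferedReader, FileReader}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class CustomReceiver(path: String) extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    // Start a thread that reads the data; onStart() itself must not block.
    new Thread("File Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = {
    // Nothing to clean up: the reading thread stops itself once isStopped() returns true.
  }

  private def receive(): Unit =
    try {
      val reader = new BufferedReader(new FileReader(path))
      var line = reader.readLine()
      while (!isStopped && line != null) {
        store(line)              // hand each record to Spark Streaming
        line = reader.readLine()
      }
      reader.close()
    } catch {
      case ex: Exception => restart("Error reading file " + path, ex)
    }
}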


Failure is everywhere

Fault-tolerance Semantics

A streaming system should provide zero data loss guarantees despite failures anywhere in the system. Fault-tolerance semantics are described in terms of how many times each record can be processed:

At most once: each record will be either processed once or not processed at all.

At least once: each record will be processed one or more times. This is stronger than at-most-once because it ensures that no data is lost, but there may be duplicates.

Exactly once: each record will be processed exactly once; no data is lost and no data is processed multiple times. This is the strongest guarantee of the three.

Kinds of Failure

There are two kinds of failure:

Executor failure
1) Data received and replicated
2) Data received but not replicated

Driver failure

Executor failure

Data would be lost?

Executor with WAL

Enable write ahead logs

object Streaming2App extends App {

  // should be a fault-tolerant, reliable file system (e.g. HDFS, S3, etc.)
  val checkpointDirectory = "checkpointDir"

  val sparkConf = new SparkConf().setMaster("local[*]").setAppName("StreamingApp")
  sparkConf.set("spark.streaming.receiver.writeAheadLog.enable", "true") // enable write ahead logs

  val streamingContext = new StreamingContext(sparkConf, Seconds(5))
  streamingContext.checkpoint(checkpointDirectory) // enable checkpointing

  val lines: ReceiverInputDStream[String] = streamingContext.socketTextStream("localhost", 9000)
  val words: DStream[String] = lines.flatMap(_.split(" "))
  val filteredWords: DStream[String] = words.filter(!_.trim.isEmpty)
  val pairs: DStream[(String, Int)] = filteredWords.map(word => (word, 1))
  val wordCounts: DStream[(String, Int)] = pairs.reduceByKey(_ + _)

  wordCounts.print(20)

  streamingContext.start()
  streamingContext.awaitTermination()
}


Enable write ahead logs

1) For WAL, first enable checkpointing: streamingContext.checkpoint(checkpointDirectory)

2) Enable WAL in the Spark configuration: sparkConf.set("spark.streaming.receiver.writeAheadLog.enable", "true")

3) The receiver should be reliable: acknowledge the source only after data is saved to the WAL; unacknowledged data will be replayed from the source by the restarted receiver.

4) Disable in-memory replication (data is already replicated by HDFS): use StorageLevel.MEMORY_AND_DISK_SER for the input DStream.
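Putting the four points together, a sketch of a WAL-enabled application; the HDFS checkpoint path is hypothetical, and the storage level is passed explicitly to the socket receiver.

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WalEnabledApp extends App {
  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("WalEnabledApp")
    .set("spark.streaming.receiver.writeAheadLog.enable", "true")  // 2) enable WAL

  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.checkpoint("hdfs://namenode:8020/checkpointDir")             // 1) checkpointing (hypothetical HDFS path)

  // 4) the WAL already persists a copy, so skip the second in-memory replica
  val lines = ssc.socketTextStream("localhost", 9000, StorageLevel.MEMORY_AND_DISK_SER)

  lines.count().print()
  ssc.start()
  ssc.awaitTermination()
}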

Driver failure

How to recover from this failure?

Driver with checkpointing

DStream checkpointing: periodically save the DAG of DStreams to fault-tolerant storage.

Recover from Driver failure

1) Configure automatic driver restart - all cluster managers support this.

2) Set a checkpoint directory - the directory should be in a fault-tolerant and reliable file system (e.g. HDFS, S3, etc.): streamingContext.checkpoint(checkpointDirectory)

3) The driver should be restarted using the checkpoint.

Configure Automatic driver restart

Spark Standalone - use spark-submit in cluster mode with --supervise

YARN - use spark-submit in cluster mode

Mesos - Marathon can restart applications, or use the --supervise flag

Configure Checkpointing

object RecoverableWordCount {

  // should be a fault-tolerant, reliable file system (e.g. HDFS, S3, etc.)
  val checkpointDirectory = "checkpointDir"

  def createContext() = {
    val sparkConf = new SparkConf().setAppName("StreamingApp")
    val streamingContext = new StreamingContext(sparkConf, Seconds(1))
    streamingContext.checkpoint(checkpointDirectory)

    val lines = streamingContext.socketTextStream("localhost", 9000)
    val words: DStream[String] = lines.flatMap(_.split(" "))
    val filteredWords: DStream[String] = words.filter(!_.trim.isEmpty)
    val pairs: DStream[(String, Int)] = filteredWords.map(word => (word, 1))
    val wordCounts: DStream[(String, Int)] = pairs.reduceByKey(_ + _)

    wordCounts.print(20)
    streamingContext
  }
}

Driver should be restarted using checkpointing

object StreamingApp extends App {

  import RecoverableWordCount._

  val streamingContext = StreamingContext.getOrCreate(checkpointDirectory, createContext _)

  // do other operations

  streamingContext.start()
  streamingContext.awaitTermination()
}

Note that checkpointing of RDDs incurs the cost of saving to reliable storage. This may cause an increase in the processing time of those batches where RDDs get checkpointed. Hence, the interval of checkpointing needs to be set carefully. At small batch sizes (say 1 second), checkpointing every batch may significantly reduce operation throughput. Conversely, checkpointing too infrequently causes the lineage and task sizes to grow, which may have detrimental effects. For stateful transformations that require RDD checkpointing, the default interval is a multiple of the batch interval that is at least 10 seconds. It can be set by using dstream.checkpoint(checkpointInterval).

Typically, a checkpoint interval of 5 - 10 sliding intervals of a DStream is a good setting to try.


Checkpointing

There are two types of data that are checkpointed:

1) Metadata checkpointing
- Configuration
- DStream operations
- Incomplete batches

2) Data checkpointing - saving the generated RDDs to reliable storage. This is necessary in some stateful transformations that combine data across multiple batches.

Checkpointing Latency

Checkpointing of RDDs incurs the cost of saving to reliable storage. The interval of checkpointing needs to be set carefully.

dstream.checkpoint(Seconds(batchIntervalInSeconds * 10)) // i.e. 10 x the batch interval

A checkpoint interval of 5 - 10 sliding intervals of a DStream is a good setting to try.


Fault-tolerance Semantics

Spark Streaming & Kafka Integration

Why Kafka ?

Velocity & volume of streaming data

Reprocessing of streaming

Reliable receiver complexity

Checkpoint complexity

Upgrading Application Code

Kafka Integration

There are two approaches to integrate Kafka with Spark Streaming:

Receiver-based Approach

Direct Approach

Receiver-based Approach

https://databricks.com/blog/2015/03/30/improvements-to-kafka-integration-of-spark-streaming.html

Receiver-based Approach

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ReceiverBasedStreaming extends App {

  val group = "streaming-test-group"
  val zkQuorum = "localhost:2181"
  val topics = Map("streaming_queue" -> 1)

  val sparkConf = new SparkConf().setAppName("ReceiverBasedStreamingApp")
  sparkConf.set("spark.streaming.receiver.writeAheadLog.enable", "true")

  val ssc = new StreamingContext(sparkConf, Seconds(2))

  val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topics).map { case (key, message) => message }
  val words = lines.flatMap(_.split(" "))
  val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
  wordCounts.print()

  ssc.start()
  ssc.awaitTermination()
}

Direct Approach

https://databricks.com/blog/2015/03/30/improvements-to-kafka-integration-of-spark-streaming.html

Direct Approach

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming._
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka._

object KafkaDirectStreaming extends App {

  val brokers = "localhost:9092"

  val sparkConf = new SparkConf().setAppName("KafkaDirectStreaming")
  val ssc = new StreamingContext(sparkConf, Seconds(2))
  ssc.checkpoint("checkpointDir") // offset recovery

  val topics = Set("streaming_queue")
  val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
  val messages: InputDStream[(String, String)] =
    KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

  val lines = messages.map { case (key, message) => message }
  val words = lines.flatMap(_.split(" "))
  val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
  wordCounts.print()

  ssc.start()
  ssc.awaitTermination()
}

Direct Approach

Direct Approach has the following advantages over the receiver-based approach:

Simplified Parallelism

Efficiency

Exactly-once semantics

Simplified Parallelism: No need to create multiple input Kafka streams and union them. With directStream, Spark Streaming will create as many RDD partitions as there are Kafka partitions to consume, which will all read data from Kafka in parallel. So there is a one-to-one mapping between Kafka and RDD partitions, which is easier to understand and tune.

Efficiency: Achieving zero-data loss in the first approach required the data to be stored in a Write Ahead Log, which further replicated the data. This is actually inefficient as the data effectively gets replicated twice - once by Kafka, and a second time by the Write Ahead Log. This second approach eliminates the problem as there is no receiver, and hence no need for Write Ahead Logs. As long as you have sufficient Kafka retention, messages can be recovered from Kafka.

Exactly-once semantics: The first approach uses Kafka's high-level API to store consumed offsets in Zookeeper. This is traditionally the way to consume data from Kafka. While this approach (in combination with write ahead logs) can ensure zero data loss (i.e. at-least-once semantics), there is a small chance some records may get consumed twice under some failures. This occurs because of inconsistencies between data reliably received by Spark Streaming and offsets tracked by Zookeeper. Hence, the second approach uses the simple Kafka API that does not use Zookeeper. Offsets are tracked by Spark Streaming within its checkpoints. This eliminates inconsistencies between Spark Streaming and Zookeeper/Kafka, and so each record is received by Spark Streaming effectively exactly once despite failures. In order to achieve exactly-once semantics for the output of your results, your output operation that saves the data to an external data store must be either idempotent, or an atomic transaction that saves results and offsets (see Semantics of output operations in the main programming guide for further information).
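With the direct approach (the spark-streaming-kafka 0.8 API shown above), the offsets of each batch can be read via HasOffsetRanges and stored together with the results; saveResultsAndOffsets below is a hypothetical transactional or idempotent sink.

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

messages.foreachRDD { rdd =>
  // must be done on the RDD produced by createDirectStream, before any shuffle
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  val results = rdd.map { case (_, message) => message }.collect()
  saveResultsAndOffsets(results, offsetRanges) // hypothetical: save results and offsets atomically
}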

Performance Tuning

For best performance of a Spark Streaming application we need to consider two things:

Reducing the Batch Processing Times

Setting the Right Batch Interval

Reducing the Batch Processing Times

Level of Parallelism in Data Receiving

Level of Parallelism in Data Processing

Data Serialization
- Input data
- Persisted RDDs generated by streaming operations

Task Launching Overheads - running Spark in Standalone mode or coarse-grained Mesos mode leads to better task launch times.

Level of Parallelism in Data Receiving:
1) Create multiple receivers, each producing its own DStream. These multiple DStreams can be unioned together to create a single DStream, and the transformations that were being applied to a single input DStream can then be applied to the unified stream. For example, one receiver per Kafka topic.
2) Another parameter to consider is the receiver's block interval. For most receivers, the received data is coalesced into blocks before being stored in Spark's memory. The number of tasks per receiver per batch is approximately (batch interval / block interval). Alternatively, the received data can be explicitly redistributed with inputStream.repartition(n).

Level of Parallelism in Data Processing: cluster resources can be under-utilized if the number of parallel tasks used in any stage of the computation is not high enough. For distributed reduce operations like reduceByKey and reduceByKeyAndWindow, the default number of parallel tasks is controlled by the spark.default.parallelism configuration property. You can pass the level of parallelism as an argument (see the PairDStreamFunctions documentation), or set spark.default.parallelism to change the default.

Input data: by default, the input data received through receivers is stored in the executors' memory with StorageLevel.MEMORY_AND_DISK_SER_2. That is, the data is serialized into bytes to reduce GC overheads and replicated to tolerate executor failures. The data is kept first in memory and spilled to disk only if the memory is insufficient to hold all of the input data necessary for the streaming computation. This serialization obviously has overheads: the receiver must deserialize the received data and re-serialize it using Spark's serialization format.

Persisted RDDs generated by streaming operations: RDDs generated by streaming computations may be persisted in memory. For example, window operations persist data in memory because it will be processed multiple times. However, unlike the Spark Core default of StorageLevel.MEMORY_ONLY, persisted RDDs generated by streaming computations are persisted with StorageLevel.MEMORY_ONLY_SER (i.e. serialized) by default to minimize GC overheads.

In both cases, using Kryo serialization can reduce both CPU and memory overheads. See the Spark Tuning Guide for more details.
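A sketch of the receiver-side parallelism recipe, reusing the receiver-based Kafka settings from earlier; the number of receivers and partitions are illustrative only.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object ParallelReceivers extends App {
  val conf = new SparkConf()
    .setAppName("ParallelReceiversApp")
    .set("spark.default.parallelism", "16")          // default parallelism for reduce-style stages

  val ssc = new StreamingContext(conf, Seconds(2))

  // Several receivers (illustrative sizing), each a separate input DStream, unioned into one.
  val numReceivers = 4
  val kafkaStreams = (1 to numReceivers).map { _ =>
    KafkaUtils.createStream(ssc, "localhost:2181", "streaming-test-group", Map("streaming_queue" -> 1))
  }
  val unified = ssc.union(kafkaStreams)
  val repartitioned = unified.repartition(16)        // spread received blocks across more tasks

  repartitioned.map { case (_, message) => (message, 1L) }.reduceByKey(_ + _).print()

  ssc.start()
  ssc.awaitTermination()
}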

Setting the Right Batch Interval

Batch processing time should be less than the batch interval.

Memory Tuning

- Persistence Level of DStreams
- Clearing old data
- CMS Garbage Collector

A good approach to figure out the right batch size for your application is to test it with a conservative batch interval (say, 5-10 seconds) and a low data rate.

Persistence Level of DStreams: as mentioned earlier in the Data Serialization section, the input data and RDDs are by default persisted as serialized bytes. This reduces both the memory usage and GC overheads, compared to deserialized persistence. Enabling Kryo serialization further reduces serialized sizes and memory usage. Further reduction in memory usage can be achieved with compression (see the Spark configuration spark.rdd.compress), at the cost of CPU time.

Clearing old data: By default, all input data and persisted RDDs generated by DStream transformations are automatically cleared. Spark Streaming decides when to clear the data based on the transformations that are used. For example, if you are using a window operation of 10 minutes, then Spark Streaming will keep around the last 10 minutes of data, and actively throw away older data. Data can be retained for a longer duration (e.g. interactively querying older data) by setting streamingContext.remember.

CMS Garbage Collector: Use of the concurrent mark-and-sweep GC is strongly recommended for keeping GC-related pauses consistently low. Even though concurrent GC is known to reduce the overall processing throughput of the system, its use is still recommended to achieve more consistent batch processing times. Make sure you set the CMS GC on both the driver (using --driver-java-options in spark-submit) and the executors (using Spark configuration spark.executor.extraJavaOptions).
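A sketch of the memory-related settings mentioned above, collected in one place; the values are illustrative rather than recommendations, and the driver-side GC flag still has to be passed via --driver-java-options on spark-submit.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

object MemoryTunedApp extends App {
  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName("MemoryTunedApp")
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") // Kryo for smaller serialized data
    .set("spark.rdd.compress", "true")                                     // trade CPU for lower memory usage
    .set("spark.executor.extraJavaOptions", "-XX:+UseConcMarkSweepGC")     // CMS GC on the executors

  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.remember(Minutes(15)) // keep generated RDDs longer, e.g. for interactive queries on old batches

  // ... define DStreams and output operations here, then ssc.start() / ssc.awaitTermination()
}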

Code samples

https://github.com/knoldus/spark-streaming-meetup

https://github.com/knoldus/real-time-stream-processing-engine

https://github.com/knoldus/kafka-tweet-producer

Questions & DStream[Answer]

References

http://spark.apache.org/docs/latest/streaming-programming-guide.html

http://spark.apache.org/docs/latest/configuration.html#spark-streaming

http://spark.apache.org/docs/latest/streaming-kafka-integration.html

http://spark.apache.org/docs/latest/tuning.html

https://databricks.com/blog/2015/03/30/improvements-to-kafka-integration-of-spark-streaming.html

Thanks

Presenter: @_satendrakumar

Organizer:
@knolspeak
http://www.knoldus.com