Apache Spark
Streaming Processing with Kafka
Please introduce yourselves using the Q&A window that appears on the right while others join us.
● Session - 3 hours duration (due to high demand, increased from 2:30 hrs)
● First Half: Apache Spark introduction & streaming basics
● 10 mins. break
● Second Half: Hands-on demo using CloudxLab
● Session is being recorded. Recording & presentation will be shared after the session
● Asking questions?
● Everyone except the instructor is muted
● Please ask questions by typing in the Q&A window (requires logging in to Google+)
● The instructor will read out the question before answering
● To get better answers, keep your messages short and avoid chat language
WELCOME TO THE SESSION
WELCOME TO CLOUDxLAB SESSION
A cloud-based lab for students to gain hands-on experience in Big Data technologies
such as Hadoop and Spark
● Learn Through Practice
● Real Environment
● Connect From Anywhere
● Connect From Any Device
● Centralized Data sets
● No Installation
● No Compatibility Issues
● 24x7 Support
TODAY’S AGENDA
I Introduction to Apache Spark
II Introduction to stream processing
III Understanding RDD (Resilient Distributed Datasets)
IV Understanding DStream
V Kafka introduction
VI Understanding the stream processing flow
VII Real-time hands-on using CloudxLab
VIII Questions and answers
About the Instructor
● 2015 - CloudxLab founded
● 2014 - KnowBigData founded
● 2012-2014 - Amazon: built high-throughput systems for the Amazon.com site using in-house NoSQL
● 2011-2012 - InMobi: built a recommender that churns 200 TB
● 2006-2011 - tBits Global: founded tBits Global; built an enterprise-grade Document Management System
● 2002-2006 - D.E. Shaw: built big data systems before the term was coined
● 2002 - IIT Roorkee: finished B.Tech.
Apache Spark
A fast and general engine for large-scale data processing.
● Really fast MapReduce
● 100x faster than Hadoop MapReduce in memory,
● 10x faster on disk.
● Builds on similar paradigms as MapReduce (see the sketch below)
● Integrated with Hadoop
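To make the MapReduce parallel concrete, here is a minimal batch word count in PySpark. This is a sketch only; the input path "input.txt" is a placeholder, not from the slides.

from pyspark import SparkContext

sc = SparkContext("local[2]", "BatchWordCount")

# Classic MapReduce-style word count expressed as RDD transformations
counts = (sc.textFile("input.txt")                    # placeholder input path
            .flatMap(lambda line: line.split(" "))    # "map" phase: line -> words
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))         # "reduce" phase: sum per word

print(counts.collect())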
SPARK STREAMING
An extension of the core Spark API for high-throughput, fault-tolerant processing of live data streams
(Diagram: input sources → Spark Streaming → output)
SPARK STREAMING
Workflow
• Spark Streaming receives live input data streams
• Divides the data into batches
• The Spark engine processes the batches to generate the final stream of results, also in batches.
Provides a discretized stream or DStream - a continuous stream of data.
SPARK STREAMING - DSTREAM
Internally represented using RDDs
Each RDD in a DStream contains data from a certain interval.
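To see that a DStream really is a sequence of RDDs, here is a minimal sketch, assuming the same localhost:9999 socket source used in the example that follows; foreachRDD exposes the per-interval RDD behind each batch.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "DStreamPeek")
ssc = StreamingContext(sc, 1)  # 1-second batch interval
lines = ssc.socketTextStream("localhost", 9999)

# foreachRDD hands us the RDD that backs each batch interval
def show_batch(time, rdd):
    print("Batch at %s contains %d records" % (time, rdd.count()))

lines.foreachRDD(show_batch)
ssc.start()
ssc.awaitTermination()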
SPARK STREAMING - EXAMPLE
Problem: do the word count every second.
Step 1: Create a connection to the service
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Create a local StreamingContext with two working threads and
# a batch interval of 1 second
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)

# Create a DStream that will connect to hostname:port,
# like localhost:9999
lines = ssc.socketTextStream("localhost", 9999)
SPARK STREAMING - EXAMPLE
Step 2: Split each line into words, convert each word to a tuple, and then count.
# Split each line into words
words = lines.flatMap(lambda line: line.split(" "))

# Count each word in each batch
pairs = words.map(lambda word: (word, 1))

# Do the count
wordCounts = pairs.reduceByKey(lambda x, y: x + y)
Problem: do the word count every second.
SPARK STREAMING - EXAMPLE
Step 3: Print the stream. Printing happens periodically, once per batch.
# Print the first ten elements of each RDD generated
# in this DStream to the console
wordCounts.pprint()
Problem: do the word count every second.
SPARK STREAMING - EXAMPLE
Step 4: Everything is set up. Let's start.
# Start the computation
ssc.start()

# Wait for the computation to terminate
ssc.awaitTermination()
Problem: do the word count every second.
SPARK STREAMING - EXAMPLE
Problem: do the word count every second.

# Run the example (the script is also available in HDFS at /data/spark)
spark-submit spark_streaming_ex.py 2>/dev/null

# In another terminal, start a text server on port 9999 to feed the stream
nc -l 9999
SPARK STREAMING - EXAMPLE
Problem: do the word count every second.

spark-submit spark_streaming_ex.py 2>/dev/null

# Pipe an endless stream of "y" lines into the listener to generate load
yes | nc -l 9999
Spark Streaming + Kafka Integration
Apache Kafka
● Publish-subscribe messaging
● A distributed, partitioned, replicated commit log service
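As a minimal illustration of the publish-subscribe model, here is a sketch using the third-party kafka-python package; this is an assumption, not part of the deck's stack (the deck uses the console producer/consumer scripts shown later), and the broker address is a placeholder.

from kafka import KafkaProducer, KafkaConsumer

# Publish a message to the "test" topic
producer = KafkaProducer(bootstrap_servers="localhost:9092")  # placeholder broker
producer.send("test", b"hello kafka")
producer.flush()

# Subscribe to the same topic and read messages from the beginning
consumer = KafkaConsumer("test",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for message in consumer:
    print(message.value)
    break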
Spark Streaming + Kafka Integration
Prerequisites
● Zookeeper
● Kafka
● Spark
● All of the above are installed by Ambari with HDP (CloudxLab)
● Kafka library - you need to download it from Maven
○ also available in /data/spark
Spark Streaming + Kafka Integration
Step 1: Download the Spark streaming Kafka assembly (also available in /data/spark). Include the essential imports.
from __future__ import print_function
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import sys
Problem: do the word count every second from Kafka
Spark Streaming + Kafka Integration
Step 2: Create the streaming objects
Problem: do the word count every second from Kafka
sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 1)

# Read the ZooKeeper quorum and topic name from the command-line arguments
zkQuorum, topic = sys.argv[1:]

# Listen to the topic: one receiver thread, consumer group "spark-streaming-consumer"
kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})
Spark Streaming + Kafka Integration
Step 3: Create the RDDs by Transformations & Actions
Problem: do the word count every second from Kafka
# Read lines from the stream; each Kafka message is a (key, value) pair
lines = kvs.map(lambda x: x[1])

# Split lines into words, map words to tuples, reduce
counts = lines.flatMap(lambda line: line.split(" ")) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)

# Do the print
counts.pprint()
Spark Streaming + Kafka Integration
Step 4: Start the process
Problem: do the word count every second from Kafka
ssc.start()
ssc.awaitTermination()
Spark Streaming + Kafka Integration
Step 5: Create the topic
Problem: do the word count every second from Kafka
# Log in via ssh or the console
ssh [email protected]

# Add the following to the PATH
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin

# Create the topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

# Check if it was created
kafka-topics.sh --list --zookeeper localhost:2181
Spark Streaming + Kafka Integration
Step 6: Create the producer
# Find the IP address of any broker from zookeeper-client using the command: get /brokers/ids/0
kafka-console-producer.sh --broker-list ip-172-31-13-154.ec2.internal:6667 --topic session2

# Test that it is producing by consuming in another terminal
kafka-console-consumer.sh --zookeeper localhost:2181 --topic session2 --from-beginning

# Produce a lot
yes | kafka-console-producer.sh --broker-list ip-172-31-13-154.ec2.internal:6667 --topic test
Problem: do the word count every second from Kafka
Spark Streaming + Kafka Integration
Step 7: Do the stream processing. Check the graphs on the Spark UI at port 4040.
Problem: do the word count every second from Kafka
(spark-submit --jars spark-streaming-kafka-assembly_2.10-1.6.0.jar kafka_wordcount.py localhost:2181 session2) 2>/dev/null
● The updateStateByKey operation allows you to maintain arbitrary state while continuously updating it with new information.
● To use this, you will have to do two steps:
○ Define the state - the state can be an arbitrary data type.
○ Define the state update function - specify with a function how to update the state using the previous state and the new values from an input stream.
● In every batch, Spark will apply the state update function for all existing keys, regardless of whether they have new data in a batch or not.
● If the update function returns None, then the key-value pair will be eliminated.
UpdateStateByKey Operation
Compute an aggregation across the whole day
UpdateStateByKey Operation
def updateFunction(newValues, runningCount):
    if runningCount is None:
        runningCount = 0
    # Add the new values to the previous running count
    # to get the new count
    return sum(newValues, runningCount)

runningCounts = pairs.updateStateByKey(updateFunction)
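One detail the slide omits: stateful operations such as updateStateByKey require a checkpoint directory to be set before ssc.start() is called; the path below is a placeholder (local or HDFS).

# Required for stateful operations like updateStateByKey;
# "checkpoint" is a placeholder directory
ssc.checkpoint("checkpoint")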
Objective: Maintain a running count of each word seen in a text data stream. The running count is the state, and it is an integer.
Apache Spark
Thank you.
[email protected] +1 412 568 3901 (US)
+91 803 951 3513 (IN)
Subscribe to our Youtube channel for latest videos - https://www.youtube.com/channel/UCxugRFe5wETYA7nMH6VGyEA