Stream Processing
• Data from a variety of sources (IoT, Kafka, files, social media, etc.)
• Unbounded, continuous data streams
• A batch can be processed as a stream (but a stream is not a batch)
• (In-memory) processing with temporal boundaries (windows)
• Stateful operations: aggregation, rules, ... -> analytics
• Results stored to a variety of sinks or destinations
• A streaming application can also serve data with very low latency
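The windowed, stateful aggregation described above can be sketched in plain Java with no Apex dependency. The tumbling-window size and the (timestamp, value) event shape are illustrative assumptions, not the platform's actual API:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowSum {
    // Assign each {timestampMillis, value} event to a tumbling window of the
    // given size and sum the values per window: a stateful aggregation with
    // temporal boundaries, as described above.
    public static Map<Long, Long> sumPerWindow(List<long[]> events, long windowMillis) {
        Map<Long, Long> sums = new TreeMap<>();
        for (long[] e : events) {
            long windowStart = (e[0] / windowMillis) * windowMillis;
            sums.merge(windowStart, e[1], Long::sum);
        }
        return sums;
    }
}
```

Events at timestamps 100 and 900 land in the [0, 1000) window, while 1100 starts a new window.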
[Diagram: Browser -> Web Server -> logs -> Kafka; Apex application: Kafka Input (logs) -> Decompress, Parse, Filter -> Dimensions Aggregate -> Kafka]
Apache Apex
• Stream processing platform
ᵒ In-memory, distributed
• Simple programming model
ᵒ Write your own custom logic, pipelining
• Scalable
ᵒ High throughput, low latency
ᵒ Dynamic scaling responding to SLA
• Fault tolerant
ᵒ Node outages, Hadoop outages
ᵒ Stateful recovery, incremental recovery
ᵒ End-to-end exactly-once
• Productivity library
ᵒ Commonly needed connectors, business logic
ᵒ Production tested
• Operability: DataTorrent RTS
ᵒ Deployment and monitoring console
ᵒ Deep introspection and debugging
Apache Apex and DataTorrent Product Stack
Designed to help you at every stage of your data-in-motion pipeline
[Diagram: product stack, top to bottom]
• Solutions for Business Problems: Ingestion & Data Prep, ETL Pipelines (Application Templates)
• Ease of Use Tools: GUI Application Assembly, Real-Time Data Visualization, Management & Monitoring
• Dev Framework: Apex-Malhar Operator Library, High-level API (Transformation, ML & Score, SQL, Analytics), FileSync, Batch Support; Kafka, HDFS, and JDBC sources and sinks
• Core: Apache Apex Core
• Big Data Infrastructure: Hadoop 2.x (YARN + HDFS), on-premises & cloud
Application Development Model
A Stream is a sequence of data tuples.
A typical Operator takes one or more input streams, performs computations, and emits one or more output streams.
• Each Operator is YOUR custom business logic in Java, or a built-in operator from our open-source library
• An Operator has many instances that run in parallel, and each instance is single-threaded
A Directed Acyclic Graph (DAG) is made up of operators and streams.
[Diagram: a Directed Acyclic Graph (DAG) of operators connected by streams; tuples flow between operators along output, filtered, and enriched streams]
Native Hadoop Integration
• YARN is the resource manager
• HDFS for storing persistent state
Sample Application
[Diagram: Kafka -> Kafka Input -> (Lines) -> Parser -> (Words) -> Word Counter -> (Counts) -> Database Output -> Database]
Apex Application
• Design and develop operators, or use existing ones from the library
• Connect operators to form an application
• Configure operators
• Configure scaling and other platform attributes
• Test functionality and performance, and iterate
Apex Malhar - Production Ready Operators
• RDBMS: Vertica, MySQL, Oracle, JDBC
• NoSQL: Cassandra, HBase, MongoDB, Aerospike, Accumulo, Couchbase/CouchDB, Redis
• Messaging: Kafka, Solace, Flume, ActiveMQ, Kinesis
• File systems: HDFS/Hive, NFS, S3
• Parsers: XML, JSON, CSV, Avro, Parquet
• Transformations: Filters, Rules, Expression, Dedup, Enrich
• Analytics: Dimensional aggregations
• Protocols: HTTP, FTP, WebSocket, MQTT, SMTP
• Other: Elasticsearch, Script (JavaScript, Python, R), Geode, NiFi, Twitter
Operator Maturity Framework
• Scalability and idempotency
• Documentation
• Example applications
• Internal certification
• Benchmarking
Filter Operator
• Removes articles
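The operator code from this slide was not captured in the transcript. The core filtering logic can be sketched as plain, self-contained Java; the concrete article list ("a", "an", "the") is an assumption, and in a real Apex operator this predicate would run inside the input port's process() method:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ArticleFilter {
    // Articles to drop; an assumption, since the slide only says "removes articles".
    private static final Set<String> ARTICLES =
            new HashSet<>(Arrays.asList("a", "an", "the"));

    // Predicate applied to each incoming word tuple.
    public static boolean keep(String word) {
        return !ARTICLES.contains(word.toLowerCase());
    }

    // Filter a batch of words, emitting only non-articles downstream.
    public static List<String> filter(List<String> words) {
        return words.stream().filter(ArticleFilter::keep).collect(Collectors.toList());
    }
}
```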
Application Definition
• Instantiate operators, then connect them by linking output ports to input ports
• Multiple ways to specify an application
ᵒ Compositional, high-level API with windowing, Beam, Stream SQL, JSON, property file
• Compositional example
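The compositional example itself was not captured in this transcript. The sketch below mimics the shape of the compositional DAG API (addOperator / addStream) using tiny stand-in classes so it is self-contained and runnable; in a real application the types come from the Apex API and the wiring happens inside StreamingApplication.populateDAG:

```java
import java.util.ArrayList;
import java.util.List;

public class MiniDag {
    // Stand-in for an operator node; real Apex operators extend BaseOperator
    // and expose typed input/output ports.
    static class Op {
        final String name;
        Op(String name) { this.name = name; }
    }

    private final List<Op> operators = new ArrayList<>();
    private final List<String> streams = new ArrayList<>();

    // Mirrors the shape of dag.addOperator("name", new SomeOperator())
    public Op addOperator(String name) {
        Op op = new Op(name);
        operators.add(op);
        return op;
    }

    // Mirrors the shape of dag.addStream("name", upstream.output, downstream.input)
    public void addStream(String name, Op from, Op to) {
        streams.add(name + ": " + from.name + " -> " + to.name);
    }

    public List<String> streams() { return streams; }

    // Wiring for the pipeline described on the "Sample Application" slide.
    public static MiniDag wordCountDag() {
        MiniDag dag = new MiniDag();
        Op kafka = dag.addOperator("kafkaInput");
        Op parser = dag.addOperator("parser");
        Op counter = dag.addOperator("wordCounter");
        Op db = dag.addOperator("dbOutput");
        dag.addStream("lines", kafka, parser);
        dag.addStream("words", parser, counter);
        dag.addStream("counts", counter, db);
        return dag;
    }
}
```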
Scalability
[Diagrams: Partitioning & Unification (partitions 1a/1b/1c merged by a unifier), Cascade Unifier (unifiers arranged in stages), and Parallel Partitioning (partitioned pipelines across containers and NICs)]
Dynamic Scaling
• Partitioning can change while the application is running
ᵒ Change the number of partitions at runtime based on stats
ᵒ Determine the initial number of partitions dynamically
• Kafka operators scale according to the number of Kafka partitions
ᵒ Supports redistribution of state when the number of partitions changes
ᵒ API for custom scaler or partitioner
[Diagram: partition counts changing at runtime, e.g. partitions 1a/1b and 2a/2b expanding to 2a-2d and 3a/3b; unifiers not shown]
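Routing tuples to partitions by key hash, the basis for fanning a stream out to parallel operator instances, can be sketched minimally as follows. This is an illustration of the idea, not Apex's actual partitioner implementation:

```java
public class KeyPartitioner {
    // Deterministically route a keyed tuple to one of numPartitions
    // parallel operator instances. floorMod keeps the result non-negative
    // even for negative hash codes.
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```

Because the mapping is deterministic, all tuples with the same key reach the same partition, which is what keyed stateful operators rely on.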
Runtime topology
Fault Tolerance & Checkpointing
• Application window: sliding window and tumbling window
• Checkpoint window: no artificial latency
Buffer Server
• In-memory pub/sub
• Stores results emitted by an operator until committed
• Handles backpressure / spillover to local disk
• Ordering, idempotency
[Diagram: Operator 1 in Container 1 on Node 1 publishes to its buffer server; Operator 2 in Container 2 on Node 2 subscribes over the network]
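A minimal in-memory sketch of the pub/sub behavior: the publisher's tuples are retained, each subscriber reads from its own cursor, and rewinding a cursor replays tuples (the mechanism recovery relies on). The real buffer server additionally handles spillover to disk and purging of committed data, which this sketch omits:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniBufferServer {
    // Tuples published by the upstream operator, kept until committed.
    private final List<String> buffer = new ArrayList<>();
    // Read position per downstream subscriber.
    private final Map<String, Integer> cursors = new HashMap<>();

    public void publish(String tuple) {
        buffer.add(tuple);
    }

    // Each subscriber reads from its own cursor; ordering is preserved.
    public List<String> read(String subscriber, int max) {
        int pos = cursors.getOrDefault(subscriber, 0);
        int end = Math.min(buffer.size(), pos + max);
        List<String> out = new ArrayList<>(buffer.subList(pos, end));
        cursors.put(subscriber, end);
        return out;
    }

    // On downstream recovery, rewind and replay from an earlier position.
    public void rewind(String subscriber, int pos) {
        cursors.put(subscriber, pos);
    }
}
```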
End-to-End Exactly Once
• Important when writing to external systems
• Data should not be duplicated or lost in the external system in case of application failures
• Common external systems: databases, files, message queues
• Exactly-once = at-least-once + idempotency + consistent state
• Data duplication must be avoided when data is replayed from a checkpoint
• Operators implement the logic that depends on the external system
• The platform provides checkpointing and repeatable windowing
Monitoring Console
• Logical view and physical view
Real-Time Dashboards
Application Designer
AppHub – App Template Repository
• Application repository that serves demos and applications backed by DataTorrent
• Provides version updates via dtManage
• Future opportunities
ᵒ Apps on cloud big data environments such as Azure and EMR
ᵒ Marketplace for applications created by the community
GE powers Industrial IoT applications with DataTorrent
18 | © 2016 DataTorrent Confidential – Do Not Distribute
GE is dedicated to providing advanced IoT analytics solutions to thousands of customers who are using its devices and sensors across different verticals. GE has built a sophisticated analytics platform, Predix, to help its customers develop and execute Industrial IoT applications, gain real-time insights, and act on them.
Business Need
• Ingest and analyze high-volume, high-speed data from thousands of devices and sensors per customer in real time without data loss
• Predictive analytics to reduce costly maintenance and improve customer service
• Unified monitoring of all connected sensors and devices to minimize disruptions
• Fast application development cycle
• High scalability to meet changing business and application workloads

DataTorrent Solution
• Built an ingestion application using the DataTorrent Enterprise platform
• Powered by Apache Apex
ᵒ In-memory stream processing
ᵒ Built-in fault tolerance
ᵒ Dynamic scalability
ᵒ Comprehensive library of pre-built operators, including connectors
ᵒ Management UI console

Client Outcome
• Helps GE improve performance and lower costs
• Helps GE detect possible failures and minimize unplanned downtime with easy, centralized management & monitoring of devices
• Enables faster innovation with a short application development cycle
• No data loss and 24x7 availability of applications
• Helps GE easily adjust to scalability needs with auto-scaling
Capital One prevents fraudulent credit card transactions in real time with DataTorrent
Capital One is the eighth-largest bank holding company in the US and specializes in credit cards, home loans, auto loans, banking and savings products. It is using Apache Apex to build its next-generation decisioning platform, achieving ultra-low latency of under 2 ms for decision making and handling bursts of 2,000 events at a net rate of 70,000 events/sec.
Business Need
• Decline fraudulent credit card transactions
• Decision to authorize or decline a transaction in real time; SLA is 40 ms (ideally 15 ms)
• Process 2,000 events per 10 ms
• Transactions may be declined based on a transaction attribute or several attributes & dynamically changing rules (e.g. an account is disabled or an account limit is reached)
• Open source
• Easy integration with the existing infrastructure: HDFS, messaging queues & in-memory data grids

DataTorrent Solution
• DataTorrent 2.0 Enterprise platform, powered by Apache Apex
• In-memory stream processing
• Comprehensive library of pre-built operators, including connectors
• The only enterprise-ready solution
• Built-in fault tolerance
• Dynamically scalable
• Uses a precomputed ML model for scoring
• Management UI & data visualization console

Client Outcome
• Replaces the existing mainframe system for Capital One
• No data loss and 24x7 availability of applications
• Achieved 0.25 ms latency on average, 2 ms at the 99.99th percentile, and 16 ms at the 99.9999th percentile
Resources for the use cases
• GE
ᵒ https://www.youtube.com/watch?v=hmaSkXhHNu0
ᵒ http://www.slideshare.net/ApacheApex/ge-iot-predix-time-series-data-ingestion-service-using-apache-apex-hadoop
• Capital One
ᵒ https://www.youtube.com/watch?v=98EW5NGM3u0
ᵒ http://www.slideshare.net/ApacheApex/capital-ones-next-generation-decision-in-less-than-2-ms
Resources
• http://apex.apache.org/
• Learn more: http://apex.apache.org/docs.html
• Subscribe: http://apex.apache.org/community.html
• Download: http://apex.apache.org/downloads.html
• Follow @ApacheApex: https://twitter.com/apacheapex
• Meetups: http://www.meetup.com/pro/apacheapex/
• More examples: https://github.com/DataTorrent/examples
• Slideshare: http://www.slideshare.net/ApacheApex/presentations
• YouTube: https://www.youtube.com/results?search_query=apache+apex
• Free Enterprise License for Startups: https://www.datatorrent.com/product/startup-accelerator/
Q&A
Thank you & enjoy the conference
EXTRA
Apache Apex
• In-memory, distributed stream processing
ᵒ Application logic broken into components called operators that run in a distributed fashion across your cluster
• Natural programming model
ᵒ Unobtrusive Java API to express (custom) logic
ᵒ Maintain state and metrics in your member variables
• Scalable, high throughput, low latency
ᵒ Operators can be scaled up or down at runtime according to load and SLA
ᵒ Dynamic scaling (elasticity), compute locality
• Fault tolerance & correctness
ᵒ Automatically recover from node outages without having to reprocess from the beginning
ᵒ State is preserved; checkpointing, incremental recovery
ᵒ End-to-end exactly-once
• Operability
ᵒ System and application metrics; record/visualize data
ᵒ Dynamic changes
Exactly Once - Files
[Diagram: file data written up to a checkpointed offset]
• The operator saves the file offset during checkpoint
• File contents are flushed before checkpoint to ensure there is no pending data in the buffer
• On recovery, the platform restores the file offset value from the checkpoint
• The operator truncates the file to the offset
• Starts writing data again
• Ensures no data is duplicated or lost
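The truncate-and-resume recovery described above can be sketched with plain java.nio. The checkpoint here is just an in-memory field; a real operator would persist the offset as part of its checkpointed state:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ExactlyOnceFileWriter {
    private final Path file;
    private long checkpointedOffset; // offset recorded at the last checkpoint

    public ExactlyOnceFileWriter(Path file) throws IOException {
        this.file = file;
        if (!Files.exists(file)) {
            Files.createFile(file);
        }
        this.checkpointedOffset = Files.size(file);
    }

    // Append one tuple; flushed to the file immediately for simplicity.
    public void write(String line) throws IOException {
        Files.write(file, (line + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.APPEND);
    }

    // Checkpoint: record the current file size as the committed offset.
    public void checkpoint() throws IOException {
        checkpointedOffset = Files.size(file);
    }

    // Recovery: truncate anything written after the last checkpoint,
    // so replayed tuples are not duplicated.
    public void recover() throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.truncate(checkpointedOffset);
        }
    }
}
```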
Exactly Once - Databases
[Diagram: per-window rows (d11, d12, ...) in the data table; the meta table stores (operator id, window id) and is updated in the same transaction]
• Data in a window is written out in a single transaction
• Window id is also written to a meta table as part of the same transaction
• Operator reads the window id from meta table on recovery
• Ignores data for windows less than the recovered window id and writes new data
• Partial window data before failure will not appear in data table as transaction was not committed
• Assumes idempotency for replay
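The window/meta-table idea can be sketched self-contained, with in-memory collections standing in for the database tables and a single method standing in for the transaction. A real operator would use JDBC and commit the data rows and the meta-table update atomically:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExactlyOnceDbSink {
    // Stand-ins for the data table and the meta table.
    private final List<String> dataTable = new ArrayList<>();
    private final Map<String, Long> metaTable = new HashMap<>(); // operator id -> last committed window

    private final String operatorId;

    public ExactlyOnceDbSink(String operatorId) {
        this.operatorId = operatorId;
    }

    // Write one window's tuples together with the window id as a single
    // "transaction". On replay after recovery, windows at or below the
    // committed id are skipped, so no duplicates reach the data table.
    public void endWindow(long windowId, List<String> tuples) {
        long committed = metaTable.getOrDefault(operatorId, -1L);
        if (windowId <= committed) {
            return; // already committed before the failure; ignore the replay
        }
        dataTable.addAll(tuples);
        metaTable.put(operatorId, windowId);
    }

    public List<String> data() {
        return dataTable;
    }
}
```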
Application Specification (Java)
Java Stream API (declarative)
DAG API (compositional)
Java Streams API + Windowing
Next Release (3.5): Support for Windowing à la Apache Beam (incubating):
@ApplicationAnnotation(name = "WordCountStreamingApiDemo")
public class ApplicationWithStreamAPI implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration configuration)
  {
    String localFolder = "./src/test/resources/data";
    ApexStream<String> stream = StreamFactory
        .fromFolder(localFolder)
        .flatMap(new Split())
        .window(new WindowOption.GlobalWindow(),
            new TriggerOption().withEarlyFiringsAtEvery(Duration.millis(1000))
                .accumulatingFiredPanes())
        .countByKey(new ConvertToKeyVal())
        .print();
    stream.populateDag(dag);
  }
}
Writing an Operator
Example application - WordCount
• Kafka to MySQL
ᵒ Streaming source
ᵒ Traditional database output
• Functionality
ᵒ Stream messages containing string lines
ᵒ Break the lines into words on space and punctuation boundaries
ᵒ Count unique words
ᵒ Store counts in the database
• Four operators
ᵒ Kafka Input to stream messages from Kafka
ᵒ Parser to break lines into words
ᵒ Counter to count occurrences of each word
ᵒ Database Output to write counts to the database
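The Parser and Counter functionality can be sketched in plain Java. The split regex (whitespace and punctuation boundaries) follows the description above; lowercasing before counting is an added assumption:

```java
import java.util.Map;

public class WordCount {
    // Break a line into words on space and punctuation boundaries and
    // accumulate per-word counts into the given map, as the Counter
    // operator would across a stream of lines.
    public static Map<String, Integer> countWords(String line, Map<String, Integer> counts) {
        for (String word : line.toLowerCase().split("[\\s\\p{Punct}]+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }
}
```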
Attributes
• Attributes are platform features and apply to all operators