Storm 0.8.2

  • Published on 10-May-2015

DESCRIPTION

Original slides, updated for STORM 0.8.2

Transcript

1. STORM: COMPARISON, INTRODUCTION, CONCEPTS
   Presentation by Kasper Madsen, November 2012
   (Slides updated for STORM 0.8.2)

2. HADOOP VS STORM

   HADOOP                        STORM
   Batch processing              Real-time processing
   Jobs run to completion        Topologies run forever
   JobTracker is a SPOF*         No single point of failure
   Stateful nodes                Stateless nodes
   Scalable                      Scalable
   Guarantees no data loss       Guarantees no data loss
   Open source                   Open source

   * Hadoop 0.21 added some checkpointing. SPOF: single point of failure.

3. COMPONENTS

   - Nimbus daemon is comparable to the Hadoop JobTracker. It is the master.
   - Supervisor daemon spawns workers; it is comparable to the Hadoop TaskTracker.
   - Worker is spawned by the supervisor, one per port defined in the storm.yaml configuration.
   - Executor is spawned by a worker; runs as a thread.
   - Task is spawned by executors; runs as a thread.
   - Zookeeper* is a distributed system used to store metadata.

   Nimbus and Supervisor daemons are fail-fast and stateless; all state is kept in
   Zookeeper. Notice that all communication between Nimbus and Supervisors is done
   through Zookeeper. On a cluster with 2k+1 Zookeeper nodes, the system can recover
   when at most k nodes fail.

   * Zookeeper is an Apache top-level project.

4. EXECUTORS

   The executor is a new abstraction. It:
   - Disassociates the tasks of a component from the number of threads
   - Allows dynamically changing the number of executors, without changing the number of tasks
   - Makes elasticity much simpler, as semantics are kept valid (e.g. for a grouping)
   - Enables elasticity in a multi-core environment

5. STREAMS

   A stream is an unbounded sequence of tuples. A topology is a graph where each node
   is a spout or a bolt, and the edges indicate which bolts subscribe to which streams.
   - A spout is a source of a stream
   - A bolt consumes a stream (and possibly emits a new one)
   - An edge represents a grouping

   [Diagram: an example topology with two spouts (sources of streams A and B) and
   bolts subscribing to A, to A & B, and to C & D; intermediate bolts emit streams
   C and D.]
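The quorum arithmetic from the COMPONENTS slide (2k+1 Zookeeper nodes tolerate k failures) can be written out as a one-line sketch; `maxFailures` is a hypothetical helper for illustration, not part of any Storm or Zookeeper API:

```java
public class ZkQuorumSketch {
    // A ZooKeeper ensemble stays available while a strict majority is alive,
    // so n = 2k+1 nodes tolerate k = (n - 1) / 2 failures.
    static int maxFailures(int ensembleSize) {
        return (ensembleSize - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxFailures(3)); // prints 1
        System.out.println(maxFailures(5)); // prints 2
    }
}
```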
6. GROUPINGS

   Each spout or bolt runs X instances in parallel (called tasks). Groupings are used
   to decide which task in the subscribing bolt a tuple is sent to.
   - Shuffle grouping is a random grouping
   - Fields grouping groups by value, such that equal values result in the same task
   - All grouping replicates to all tasks
   - Global grouping makes all tuples go to one task
   - None grouping makes the bolt run in the same thread as the bolt/spout it subscribes to
   - Direct grouping lets the producer (the task that emits) control which consumer will receive the tuple

7. EXAMPLE

   [Diagram: TestWordSpout feeding one ExclamationBolt, which feeds a second ExclamationBolt.]

       TopologyBuilder builder = new TopologyBuilder();

       // Create a stream called "words", run 10 tasks
       builder.setSpout("words", new TestWordSpout(), 10);

       // Create a stream called "exclaim1", run 3 tasks,
       // subscribing to stream "words" using shuffle grouping
       builder.setBolt("exclaim1", new ExclamationBolt(), 3)
              .shuffleGrouping("words");

       // Create a stream called "exclaim2", run 2 tasks,
       // subscribing to stream "exclaim1" using shuffle grouping
       builder.setBolt("exclaim2", new ExclamationBolt(), 2)
              .shuffleGrouping("exclaim1");

   A bolt can subscribe to an unlimited number of streams, by chaining groupings.
   The source code for this example is part of the storm-starter project on GitHub.

8. EXAMPLE 1: TestWordSpout

       public void nextTuple() {
           Utils.sleep(100);
           final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
           final Random rand = new Random();
           final String word = words[rand.nextInt(words.length)];
           _collector.emit(new Values(word));
       }

   The TestWordSpout emits a random string from the array words every 100 milliseconds.
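The grouping semantics above can be illustrated with a small, self-contained sketch: shuffle grouping picks a random consumer task, while fields grouping hashes the grouping field so equal values always reach the same task. The names `shuffleTask` and `fieldsTask` are invented for illustration; Storm performs this routing internally.

```java
import java.util.Objects;
import java.util.Random;

public class GroupingSketch {
    static final Random RAND = new Random();

    // Shuffle grouping: any of the consumer's tasks, chosen at random.
    static int shuffleTask(int numTasks) {
        return RAND.nextInt(numTasks);
    }

    // Fields grouping: hash the grouping field, so equal values
    // always map to the same consumer task.
    static int fieldsTask(Object fieldValue, int numTasks) {
        return Math.abs(Objects.hashCode(fieldValue) % numTasks);
    }

    public static void main(String[] args) {
        // "dog" always lands on the same one of 4 tasks...
        System.out.println(fieldsTask("dog", 4) == fieldsTask("dog", 4)); // prints true
        // ...while shuffle may pick a different task each time.
        System.out.println(shuffleTask(4));
    }
}
```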
9. EXAMPLE 2: ExclamationBolt

       // prepare is called when the bolt is created
       OutputCollector _collector;
       public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
           _collector = collector;
       }

       // execute is called for each tuple
       public void execute(Tuple tuple) {
           _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
           _collector.ack(tuple);
       }

       // declareOutputFields is called when the bolt is created
       public void declareOutputFields(OutputFieldsDeclarer declarer) {
           declarer.declare(new Fields("word"));
       }

   declareOutputFields is used to declare streams and their schemas. It is possible
   to declare several streams, and to specify which stream to use when outputting
   tuples in the emit function call.

10. TRIDENT TOPOLOGY

   The Trident topology is a new abstraction built on top of STORM primitives.
   - Supports joins, aggregations, groupings, functions and filters
   - Easy to use; read the wiki
   - Guarantees exactly-once processing, if using an (opaque) transactional spout

   Some basic ideas are shared with the deprecated transactional topology*:
   - Tuples are processed as small batches
   - Each batch gets a transaction id (txid); if a batch is replayed, the same txid is given
   - State updates are strongly ordered among batches
   - State updates atomically store metadata with the data

   The transactional topology is superseded by the Trident topology from 0.8.0.
   * See my first slides (March 2012) on STORM for detailed information:
     www.slideshare.com/KasperMadsen

11. EXACTLY-ONCE PROCESSING - 1

   Transactional spouts guarantee the same data is replayed for every batch.
   Guaranteeing exactly-once processing for transactional spouts:
   - The txid is stored with the data, so the last txid that updated the data is known
   - This information is used to know what to update in case of a replay

   Example:
   1. Currently processing txid 2, with data [man, dog, dog]
   2. Current state is: man => [count=3, txid=1], dog => [count=2, txid=2]
   3. The batch with txid 2 fails and gets replayed.
   4. The resulting state is: man => [count=4, txid=2], dog => [count=2, txid=2]
   5. Because the txid is stored with the data, it is known that the count for dog
      should not be increased again.

12. EXACTLY-ONCE PROCESSING - 2

   An opaque transactional spout is not guaranteed to replay the same data for a
   failed batch as originally existed in the batch.
   - Guarantees every tuple is successfully processed in exactly one batch
   - Useful for having exactly-once processing while allowing some inputs to fail

   Guaranteeing exactly-once processing for opaque transactional spouts: the same
   trick doesn't work, as a replayed batch might be changed, meaning some state might
   now have stored incorrect data (consider the previous example!). The problem is
   solved by storing more metadata with the data: the previous value.

   Example (values given as dog,cat pairs):

   Step   Data           Count   prevValue   Txid    Notes
   1      2 dog, 1 cat   2,1     0,0         1,1
   2      1 dog, 2 cat   3,1     2,0         2,1     Updates the dog count, then fails
   2.1    2 dog, 2 cat   4,3     2,1         2,2     Replayed batch contains new data

   Consider the potential problems if the new data for 2.1 didn't contain any cat:
   it is ok, as previous values are used.

13. ELASTICITY

   Rebalancing workers and executors (not tasks):
   - Pause spouts
   - Wait for the message timeout
   - Set the new assignment
   - All moved tasks will be killed and restarted in their new location

   Swapping (STORM 0.8.2):
   - Submit the new topology, inactive
   - Pause the spouts of the old topology
   - Wait for the message timeout of the old topology
   - Activate the new topology
   - Deactivate the old topology
   - Kill the old topology

   What about state on tasks which are killed and restarted? It is up to the user to solve!

14. LEARN MORE

   Website: http://storm-project.net/
   Wiki: https://github.com/nathanmarz/storm/wiki
   Storm-starter: https://github.com/nathanmarz/storm-starter
   Mailing list: http://groups.google.com/group/storm-user
   IRC: #storm-user room on freenode
   UTSL: https://github.com/nathanmarz/storm
   More slides: www.slideshare.net/KasperMadsen

   (Image from: http://www.cupofjoe.tv/2010/11/learn-lesson.html)
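The exactly-once bookkeeping described in the two EXACTLY-ONCE PROCESSING slides can be sketched in plain Java. This is a minimal, hypothetical model of opaque-transactional state (class and method names are invented; it is not Storm's actual Trident state API): each key stores a count, the previous count, and the txid of the last batch that touched it, so a replayed batch with different data can be re-applied on top of the previous value instead of double-counting.

```java
import java.util.HashMap;
import java.util.Map;

public class OpaqueStateSketch {
    static class Entry {
        long count, prev, txid;
        Entry(long count, long prev, long txid) {
            this.count = count; this.prev = prev; this.txid = txid;
        }
    }

    final Map<String, Entry> state = new HashMap<>();

    // Apply one batch of per-key increments under a given txid.
    // If a key was last written by this same txid, the batch is a replay
    // (possibly with different data, since the spout is opaque), so we
    // recompute from the stored previous value instead of double-counting.
    void applyBatch(long txid, Map<String, Long> increments) {
        increments.forEach((key, delta) -> {
            Entry e = state.getOrDefault(key, new Entry(0, 0, -1));
            if (e.txid == txid) {
                // Replay: roll back to prev, then re-apply the new delta.
                state.put(key, new Entry(e.prev + delta, e.prev, txid));
            } else {
                // Normal update: remember the old count as prev.
                state.put(key, new Entry(e.count + delta, e.count, txid));
            }
        });
    }

    long count(String key) {
        return state.containsKey(key) ? state.get(key).count : 0;
    }

    public static void main(String[] args) {
        // Reproduce the table from the second exactly-once slide.
        OpaqueStateSketch s = new OpaqueStateSketch();

        Map<String, Long> batch1 = new HashMap<>();      // step 1: 2 dog, 1 cat (txid 1)
        batch1.put("dog", 2L); batch1.put("cat", 1L);
        s.applyBatch(1, batch1);

        Map<String, Long> batch2 = new HashMap<>();      // step 2: only the dog update
        batch2.put("dog", 1L);                           // lands before the batch fails (txid 2)
        s.applyBatch(2, batch2);

        Map<String, Long> replay = new HashMap<>();      // step 2.1: replay of txid 2 with
        replay.put("dog", 2L); replay.put("cat", 2L);    // different data (opaque spout)
        s.applyBatch(2, replay);

        System.out.println(s.count("dog") + "," + s.count("cat")); // prints 4,3 as in the table
    }
}
```

With a (non-opaque) transactional spout the prev field would be unnecessary: since a replayed batch carries identical data, a matching stored txid simply means "skip the update", which is the dog case in the first example.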