Hadoop Installation Tutorial


= 10
15/06/15 15:23:15 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/06/15 15:23:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/06/15 15:23:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/06/15 15:23:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/06/15 15:23:15 INFO util.GSet: VM type       = 64-bit
15/06/15 15:23:15 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/06/15 15:23:15 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/06/15 15:23:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-839127011-127.0.1.1-1434352995661
15/06/15 15:23:15 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/06/15 15:23:16 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/06/15 15:23:16 INFO util.ExitUtil: Exiting with status 0
15/06/15 15:23:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at midarto-ThinkPad-Edge-E130/127.0.1.1
************************************************************/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/l
lib/    local/
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/local/sbin/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ ls
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ -ls
No command '-ls' found, did you mean:
 Command 'ils' from package 'sleuthkit' (universe)
 Command 'tls' from package 'python-tlslite' (universe)
 Command 'hls' from package 'hfsutils' (main)
 Command 'ls' from package 'coreutils' (main)
 Command 'fls' from package 'sleuthkit' (universe)
 Command 'jls' from package 'sleuthkit' (universe)
 Command 'bls' from package 'bacula-sd' (main)
 Command 'als' from package 'atool' (universe)
 Command 'ols' from package 'speech-tools' (universe)
 Command 'i-ls' from package 'integrit' (universe)
-ls: command not found
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ sta
start                    start-pulseaudio-x11     static-sh
startpar                 start-stop-daemon        status
startpar-upstart-inject  startx
start-pulseaudio-kde     stat
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ sta
start                    start-pulseaudio-x11     static-sh
startpar                 start-stop-daemon        status
startpar-upstart-inject  startx
start-pulseaudio-kde     stat
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ start
start                    start-pulseaudio-kde     startx
startpar                 start-pulseaudio-x11
startpar-upstart-inject  start-stop-daemon
hduser@midarto-ThinkPad-Edge-E130:/usr/local/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop
hadoop/        hadoop_store/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/
bin/      etc/      include/  lib/      libexec/  sbin/     share/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ start-all.sh
bash: /usr/local/hadoop/sbin/start-all.sh: Permission denied
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ sudo chmod -R 777 /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:~$ cd /usr/local/hadoop/sbin/
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ str
strace   strings  strip
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated.
Instead use start-dfs.sh and start-yarn.sh
15/06/15 15:26:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-midarto-ThinkPad-Edge-E130.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-midarto-ThinkPad-Edge-E130.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is d0:33:ed:28:d4:55:e7:f0:32:e8:26:be:92:07:fe:fa.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-midarto-ThinkPad-Edge-E130.out
15/06/15 15:30:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-midarto-ThinkPad-Edge-E130.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-midarto-ThinkPad-Edge-E130.out
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ jps
4053 NodeManager
3376 DataNode
3724 ResourceManager
3576 SecondaryNameNode
3215 NameNode
4156 Jps
hduser@midarto-ThinkPad-Edge-E130:/usr/local/hadoop/sbin$ cd
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/
bin/         include/     libexec/     logs/        README.txt   share/
etc/         lib/         LICENSE.txt  NOTICE.txt   sbin/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/
common/     hdfs/       httpfs/     kms/        mapreduce/  tools/      yarn/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-2.7.0.jar
hadoop-mapreduce-client-common-2.7.0.jar
hadoop-mapreduce-client-core-2.7.0.jar
hadoop-mapreduce-client-hs-2.7.0.jar
hadoop-mapreduce-client-hs-plugins-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0-tests.jar
hadoop-mapreduce-client-shuffle-2.7.0.jar
hadoop-mapreduce-examples-2.7.0.jar
lib/
lib-examples/
sources/
hduser@midarto-ThinkPad-Edge-E130:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 5
Number of Maps  = 2
Samples per Map = 5
15/06/15 15:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/06/15 15:32:32 INFO Configuration.deprecation: session.id is deprecated.
Instead, use dfs.metrics.session-id
15/06/15 15:32:32 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/15 15:32:32 INFO input.FileInputFormat: Total input paths to process : 2
15/06/15 15:32:32 INFO mapreduce.JobSubmitter: number of splits:2
15/06/15 15:32:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1644526633_0001
15/06/15 15:32:33 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/15 15:32:33 INFO mapreduce.Job: Running job: job_local1644526633_0001
15/06/15 15:32:33 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/15 15:32:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:32:33 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/15 15:32:33 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/15 15:32:33 INFO mapred.LocalJobRunner: Starting task: attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:32:33 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:33 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/in/part0:0+118
15/06/15 15:32:33 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:32:33 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:32:33 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:32:33 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:32:33 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:32:33 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:32:34 INFO mapred.LocalJobRunner:
15/06/15 15:32:34 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:32:34 INFO mapred.MapTask: Spilling map output
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
15/06/15 15:32:34 INFO mapred.MapTask: Finished spill 0
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_m_000000_0 is done. And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map
15/06/15 15:32:34 INFO mapred.Task: Task 'attempt_local1644526633_0001_m_000000_0' done.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Finishing task: attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Starting task: attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:32:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:34 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/in/part1:0+118
15/06/15 15:32:34 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:32:34 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:32:34 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:32:34 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:32:34 INFO mapred.LocalJobRunner:
15/06/15 15:32:34 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:32:34 INFO mapred.MapTask: Spilling map output
15/06/15 15:32:34 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
15/06/15 15:32:34 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
15/06/15 15:32:34 INFO mapred.MapTask: Finished spill 0
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_m_000001_0 is done.
And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map
15/06/15 15:32:34 INFO mapred.Task: Task 'attempt_local1644526633_0001_m_000001_0' done.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Finishing task: attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO mapred.LocalJobRunner: map task executor complete.
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/06/15 15:32:34 INFO mapred.LocalJobRunner: Starting task: attempt_local1644526633_0001_r_000000_0
15/06/15 15:32:34 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:32:34 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:32:34 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@43f66bae
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/06/15 15:32:34 INFO reduce.EventFetcher: attempt_local1644526633_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/06/15 15:32:34 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1644526633_0001_m_000001_0 decomp: 24 len: 28 to MEMORY
15/06/15 15:32:34 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local1644526633_0001_m_000001_0
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->24
15/06/15 15:32:34 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1644526633_0001_m_000000_0 decomp: 24 len: 28 to MEMORY
15/06/15 15:32:34 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local1644526633_0001_m_000000_0
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 2, commitMemory -> 24, usedMemory ->48
15/06/15 15:32:34 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
15/06/15 15:32:34 INFO mapred.Merger: Merging 2 sorted segments
15/06/15 15:32:34 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 42 bytes
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merged 2 segments, 48 bytes to disk to satisfy reduce memory limit
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merging 1 files, 50 bytes from disk
15/06/15 15:32:34 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/06/15 15:32:34 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:32:34 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 43 bytes
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15/06/15 15:32:34 INFO mapreduce.Job: Job job_local1644526633_0001 running in uber mode : false
15/06/15 15:32:34 INFO mapreduce.Job:  map 100% reduce 0%
15/06/15 15:32:34 INFO mapred.Task: Task:attempt_local1644526633_0001_r_000000_0 is done.
And is in the process of committing
15/06/15 15:32:34 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/06/15 15:32:34 INFO mapred.Task: Task attempt_local1644526633_0001_r_000000_0 is allowed to commit now
15/06/15 15:32:35 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1644526633_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/QuasiMonteCarlo_1434353547302_418935020/out/_temporary/0/task_local1644526633_0001_r_000000
15/06/15 15:32:35 INFO mapred.LocalJobRunner: reduce > reduce
15/06/15 15:32:35 INFO mapred.Task: Task 'attempt_local1644526633_0001_r_000000_0' done.
15/06/15 15:32:35 INFO mapred.LocalJobRunner: Finishing task: attempt_local1644526633_0001_r_000000_0
15/06/15 15:32:35 INFO mapred.LocalJobRunner: reduce task executor complete.
15/06/15 15:32:35 INFO mapreduce.Job:  map 100% reduce 100%
15/06/15 15:32:35 INFO mapreduce.Job: Job job_local1644526633_0001 completed successfully
15/06/15 15:32:35 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=822302
                FILE: Number of bytes written=1648559
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=590
                HDFS: Number of bytes written=923
                HDFS: Number of read operations=30
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=15
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=36
                Map output materialized bytes=56
                Input split bytes=296
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=56
                Reduce input records=4
                Reduce output records=0
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=854065152
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=236
        File Output Format Counters
                Bytes Written=97
Job Finished in 3.39 seconds
Estimated value of Pi is 3.60000000000000000000
hduser@midarto-ThinkPad-Edge-E130:~$ mkdir coba
hduser@midarto-ThinkPad-Edge-E130:~$ cd coba/
hduser@midarto-ThinkPad-Edge-E130:~/coba$ nano coba.txt
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop dfs -copyFromLocal /home/hduser/coba/ coba
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
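The estimate above (3.60) is far from pi because the job used only 2 maps x 5 samples = 10 points. Each map scatters points over a unit square and counts how many fall inside the inscribed circle; that fraction, times 4, approximates pi. A minimal single-process sketch of the same idea in plain Python (a random-sampling stand-in for illustration, not the Hadoop QuasiMonteCarlo implementation itself, which draws its points from a deterministic low-discrepancy sequence):

```python
import random

def estimate_pi(num_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points in the unit square and counting
    the fraction that lands inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

# With only 10 samples (like the 2 x 5 run above) the estimate is coarse;
# with many samples it converges toward 3.14159...
print(estimate_pi(10))
print(estimate_pi(100_000))
```

The Hadoop job parallelizes exactly this counting step: each map task generates its own share of points, and a single reduce sums the inside/outside tallies. Rerunning the example with more maps and samples (for instance `pi 16 100000`) gives a much tighter estimate.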

15/06/15 15:34:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls
15/06/15 15:34:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:34 coba
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop jar /usr/local/hadoop/share/hadoop/
common/     hdfs/       httpfs/     kms/        mapreduce/  tools/      yarn/
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar wordcount coba coba-out
15/06/15 15:35:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/15 15:35:27 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/15 15:35:27 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/15 15:35:27 INFO input.FileInputFormat: Total input paths to process : 1
15/06/15 15:35:27 INFO mapreduce.JobSubmitter: number of splits:1
15/06/15 15:35:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1075455800_0001
15/06/15 15:35:28 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/15 15:35:28 INFO mapreduce.Job: Running job: job_local1075455800_0001
15/06/15 15:35:28 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:35:28 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:35:28 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:35:28 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/hduser/coba/coba.txt:0+37
15/06/15 15:35:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/15 15:35:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/15 15:35:28 INFO mapred.MapTask: soft limit at 83886080
15/06/15 15:35:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/15 15:35:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/15 15:35:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/06/15 15:35:28 INFO mapred.LocalJobRunner:
15/06/15 15:35:28 INFO mapred.MapTask: Starting flush of map output
15/06/15 15:35:28 INFO mapred.MapTask: Spilling map output
15/06/15 15:35:28 INFO mapred.MapTask: bufstart = 0; bufend = 53; bufvoid = 104857600
15/06/15 15:35:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
15/06/15 15:35:28 INFO mapred.MapTask: Finished spill 0
15/06/15 15:35:28 INFO mapred.Task: Task:attempt_local1075455800_0001_m_000000_0 is done.
And is in the process of committing
15/06/15 15:35:28 INFO mapred.LocalJobRunner: map
15/06/15 15:35:28 INFO mapred.Task: Task 'attempt_local1075455800_0001_m_000000_0' done.
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO mapred.LocalJobRunner: map task executor complete.
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/06/15 15:35:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1075455800_0001_r_000000_0
15/06/15 15:35:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/06/15 15:35:28 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/06/15 15:35:28 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7d27a2b6
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/06/15 15:35:28 INFO reduce.EventFetcher: attempt_local1075455800_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/06/15 15:35:28 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1075455800_0001_m_000000_0 decomp: 63 len: 67 to MEMORY
15/06/15 15:35:28 INFO reduce.InMemoryMapOutput: Read 63 bytes from map-output for attempt_local1075455800_0001_m_000000_0
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 63, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->63
15/06/15 15:35:28 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/06/15 15:35:28 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
15/06/15 15:35:28 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:35:28 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 53 bytes
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merged 1 segments, 63 bytes to disk to satisfy reduce memory limit
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merging 1 files, 67 bytes from disk
15/06/15 15:35:28 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/06/15 15:35:28 INFO mapred.Merger: Merging 1 sorted segments
15/06/15 15:35:28 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 53 bytes
15/06/15 15:35:28 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:29 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15/06/15 15:35:29 INFO mapred.Task: Task:attempt_local1075455800_0001_r_000000_0 is done.
And is in the process of committing
15/06/15 15:35:29 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/06/15 15:35:29 INFO mapred.Task: Task attempt_local1075455800_0001_r_000000_0 is allowed to commit now
15/06/15 15:35:29 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1075455800_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/coba-out/_temporary/0/task_local1075455800_0001_r_000000
15/06/15 15:35:29 INFO mapred.LocalJobRunner: reduce > reduce
15/06/15 15:35:29 INFO mapred.Task: Task 'attempt_local1075455800_0001_r_000000_0' done.
15/06/15 15:35:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1075455800_0001_r_000000_0
15/06/15 15:35:29 INFO mapred.LocalJobRunner: reduce task executor complete.
15/06/15 15:35:29 INFO mapreduce.Job: Job job_local1075455800_0001 running in uber mode : false
15/06/15 15:35:29 INFO mapreduce.Job:  map 100% reduce 100%
15/06/15 15:35:29 INFO mapreduce.Job: Job job_local1075455800_0001 completed successfully
15/06/15 15:35:29 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=547406
                FILE: Number of bytes written=1097293
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=74
                HDFS: Number of bytes written=45
                HDFS: Number of read operations=13
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Map-Reduce Framework
                Map input records=1
                Map output records=4
                Map output bytes=53
                Map output materialized bytes=67
                Input split bytes=113
                Combine input records=4
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=67
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=21
                Total committed heap usage (bytes)=495976448
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=37
        File Output Format Counters
                Bytes Written=45
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls
15/06/15 15:35:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:34 coba
drwxr-xr-x   - hduser supergroup          0 2015-06-15 15:35 coba-out
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls coba-out
15/06/15 15:36:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 hduser supergroup          0 2015-06-15 15:35 coba-out/_SUCCESS
-rw-r--r--   1 hduser supergroup         45 2015-06-15 15:35 coba-out/part-r-00000
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -cat coba-out/part-r-00000
15/06/15 15:36:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Elektro	1
Hasanuddi	1
Teknik	1
Universitas	1
hduser@midarto-ThinkPad-Edge-E130:~/coba$ hdfs dfs -ls
^C
hduser@midarto-ThinkPad-Edge-E130:~/coba$
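The wordcount example follows the classic MapReduce flow: the mapper emits a (word, 1) pair per token, the framework shuffles the pairs by key, and the reducer sums each word's counts, writing one word-and-count line per distinct word to part-r-00000. A minimal local sketch of that flow in plain Python (with a made-up input line standing in for coba.txt; this is an illustration of the idea, not the Hadoop WordCount source):

```python
from collections import defaultdict

def map_phase(lines):
    """Mapper: emit a (word, 1) pair for every token."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reducer: sum the counts for each word."""
    return {word: sum(values) for word, values in grouped.items()}

# Hypothetical one-line input, standing in for the coba.txt file above.
lines = ["hello world hello hadoop"]
counts = reduce_phase(shuffle(map_phase(lines)))
for word in sorted(counts):
    print(f"{word}\t{counts[word]}")  # one word<TAB>count line, like part-r-00000
```

In the real job a combiner also pre-sums duplicate keys on the map side before the shuffle; in the run above every word in coba.txt was distinct, which is why the counters show Combine input records=4 and Combine output records=4 with no reduction.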