DESCRIPTION
These slides cover the very basics of the Hadoop architecture, in particular HDFS. This was my presentation at the first Delhi Hadoop User Group (DHUG) meetup, held at Gurgaon on 10th September 2011. Loved the positive feedback. I'll soon upload a more elaborate version covering the Hadoop MapReduce architecture as well. Most of the material covered in these slides can be found in Tom White's book too (see the last slide).
Hadoop architecture: An overview
Hari Shankar Sreekumar, Software Engineer @Clickable
Ideas
• Store and process large amounts of data (petabytes)
• Scale horizontally
• Failure is normal
• Distributed computing (MapReduce)
• Moving computation is cheaper than moving data
What is Hadoop?
HDFS • Hadoop Common • MapReduce • Pig • Hive • HBase • ZooKeeper • Avro • Cassandra • Mahout • ...
Hadoop Distributed File System
A distributed filesystem designed for storing very large files with streaming data access, running on clusters of commodity hardware.
HDFS has been designed keeping MapReduce in mind
Consists of a cluster of machines, each machine performing one or more of the following roles:
• Namenode (only one per cluster)
• Secondary namenode (Checkpoint node) (only one per cluster)
• Datanodes (many per cluster)
HDFS Blocks
• Blocks in disks: the minimum amount of data that can be read or written (~512 bytes).
• Filesystem blocks: an abstraction over disk blocks (~a few kilobytes).
• HDFS blocks: an abstraction over filesystem blocks, to facilitate distribution over the network and other requirements of Hadoop. Usually 64 MB or 128 MB.
• The block abstraction keeps the design simple, e.g. replication is at the block level rather than the file level.
• A file is split into blocks for storage in HDFS. Blocks of the same file can reside on multiple machines in the cluster.
• Each block is stored as a file in the local FS of the DataNode.
• Block size does not refer to size on disk: a 1 MB file will not take up 64 MB on disk.
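Block-level distribution is visible from the client side: the Hadoop Java API can list which hosts hold each block of a file. A minimal sketch, assuming the 0.20-era FileSystem API; the path /data/big.log is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // /data/big.log is a hypothetical file; each entry below is one HDFS block.
    FileStatus status = fs.getFileStatus(new Path("/data/big.log"));
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

    for (BlockLocation block : blocks) {
      // Print the byte offset of the block and the hosts holding its replicas.
      System.out.println(block.getOffset() + " -> "
          + java.util.Arrays.toString(block.getHosts()));
    }
  }
}
```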
Namenode
• The "master" node.
• Maintains the HDFS namespace: the filesystem tree and its metadata.
• Maintains the mapping from each file to the list of block IDs that make it up.
• The metadata mapping is maintained in memory as well as persisted on disk.
• Maintains in memory the locations of each block (block-to-datanode mapping).
• Memory requirement: ~150 bytes/file, so e.g. 10 million files need on the order of 1.5 GB of Namenode memory.
• Issues instructions to datanodes to create/replicate/delete blocks.
• Single point of failure.
Datanodes
• The "slaves"• Serve as storage for data blocks• No metadata• Report all blocks to namenode at startup (BlockReport)• Sends periodic "heartbeat" to Namenode• Serves read, write requests, performs block creation, deletion, and
replication upon instruction from Namenode.• User data never flows through the NameNode.
Secondary namenode/Checkpoint node
• Reduces the risk of data loss if the Namenode fails.
• Persistent data is stored in two files on the Namenode: the FsImage and the Edit log.
• Changes in file metadata go into the Edit log.
• The Secondary namenode periodically merges the Edit log into the FsImage.
• Data loss will still happen if the Namenode fails.
• Configure Hadoop to write the Edit log to a remote NFS mount as well; in case of failure, copy the metadata files from NFS to the Secondary namenode and run it (see the sketch below).
• The NFS idea has a (very low) performance impact.
• Failover is NOT automatic.
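A minimal sketch of that configuration, assuming the 0.20/1.x property name dfs.name.dir and hypothetical paths. In practice this is set in the Namenode's hdfs-site.xml, not from application code; the Java form below is for illustration only.

```java
import org.apache.hadoop.conf.Configuration;

public class NameDirConfig {
  public static void main(String[] args) {
    // Illustrative only: dfs.name.dir is normally set in hdfs-site.xml.
    // A comma-separated list makes the Namenode write its FsImage and
    // Edit log to every listed directory, so adding an NFS mount
    // (hypothetical paths below) keeps an off-machine copy.
    Configuration conf = new Configuration();
    conf.set("dfs.name.dir", "/disk1/hdfs/name,/mnt/remote-nfs/hdfs/name");
    System.out.println(conf.get("dfs.name.dir"));
  }
}
```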
Image: Hadoop: The Definitive Guide (Tom White)
Replication and rack-awareness
• Replication in Hadoop is at the block level.
• Replication is "rack-aware".
• Three levels of placement preference: same machine > same rack > different rack.
• Replication can be configured per file, and can also be set from the application (see the sketch below).
• Selection of blocks to process in a MapReduce job takes advantage of rack-awareness.
• Reading and writing on HDFS also make use of rack-awareness.
• Rack-awareness is NOT automatic and needs to be configured. By default, all nodes are assumed to be in the same rack.
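A minimal sketch of per-file replication from application code, using the FileSystem Java API; the path and replication counts are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Cluster-wide default, normally dfs.replication in hdfs-site.xml.
    conf.setInt("dfs.replication", 3);

    FileSystem fs = FileSystem.get(conf);
    // Per-file override from the application (hypothetical path):
    // keep five replicas of a heavily read file.
    fs.setReplication(new Path("/data/hot-file"), (short) 5);
  }
}
```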
Reading from HDFS
Image: Hadoop: The Definitive Guide (Tom White)
• On failure, the client moves to the next 'closest' node that has the block.
• The client connects directly to the datanode to read block data.
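A minimal read sketch with the FileSystem Java API; the namenode URI and file path are hypothetical.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hdfs://namenode/ and the path are hypothetical.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode/"), conf);

    FSDataInputStream in = null;
    try {
      // The client asks the Namenode for block locations, then streams
      // each block directly from the closest Datanode.
      in = fs.open(new Path("/data/sample.txt"));
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}
```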
Writing to HDFS
• Minimum replication for a successful write: dfs.replication.min
• Files in HDFS are write-once and have strictly one writer at any time.
Image: Hadoop: The Definitive Guide (Tom White)
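The matching write sketch, under the same assumptions (hypothetical URI and path):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode/"), conf);

    // Files are write-once: this stream is the single writer for the path.
    // Data is pipelined through a chain of Datanodes, never the Namenode.
    FSDataOutputStream out = fs.create(new Path("/data/output.txt"));
    out.write("hello hdfs\n".getBytes("UTF-8"));
    out.close(); // completes once dfs.replication.min replicas exist
  }
}
```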
Hadoop Common
File system abstraction: The File System (FS) shell includes various shell-like commands that directly interact with the Hadoop Distributed File System (HDFS) as well as other file systems that Hadoop supports, such as the local FS, HFTP FS, S3 FS, and others.
Service-level authorization: Service-level authorization is the initial authorization mechanism that ensures clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access the given service. For example, a MapReduce cluster can use this mechanism to allow a configured list of users/groups to submit jobs.
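The same abstraction is visible from Java: one FileSystem API, many URI schemes. A minimal sketch; the URIs and paths are hypothetical.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListAnyFs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Identical client code for the local filesystem and HDFS;
    // only the URI scheme changes.
    FileSystem local = FileSystem.get(URI.create("file:///"), conf);
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode/"), conf);

    for (FileStatus s : local.listStatus(new Path("/tmp"))) {
      System.out.println(s.getPath());
    }
    for (FileStatus s : hdfs.listStatus(new Path("/"))) {
      System.out.println(s.getPath());
    }
  }
}
```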
Data Integrity
• A separate 32-bit checksum is created for every io.bytes.per.checksum bytes (default is 512 bytes; overhead < 1%).
• Checksums are stored with each data block.
• Checksums are verified after each operation that might result in data corruption, and are also checked periodically.
• Can be used in non-HDFS filesystems as well.
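A minimal sketch of the client-side knobs, assuming the 0.20-era property name; verification is on by default.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ChecksumSettings {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One CRC-32 checksum per 512 bytes of data (the default).
    conf.setInt("io.bytes.per.checksum", 512);

    FileSystem fs = FileSystem.get(conf);
    // Client-side verification can be turned off, e.g. to inspect a
    // corrupt file before deciding what to do with it.
    fs.setVerifyChecksum(false);
  }
}
```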
Compression utilities
• Reduces space usage
• Reduces bandwidth usage
Ref: Hadoop: The Definitive Guide (Tom White)
Splittable LZO is available separately and is a good trade-off between compression speed and compressed size.
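A minimal codec sketch using Hadoop's compression API. GzipCodec ships with Hadoop; swapping in a separately installed LZO codec class would change the speed/size trade-off.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class Compress {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Codecs are instantiated via ReflectionUtils so they pick up conf.
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

    // Compress stdin to stdout.
    CompressionOutputStream out = codec.createOutputStream(System.out);
    IOUtils.copyBytes(System.in, out, 4096, false);
    out.finish();
  }
}
```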
Serialization utilities
• Extremely important for Hadoop. A good serialization format is compact, fast, extensible, and interoperable.
• Java serialization is too cumbersome and heavy for Hadoop, so Hadoop uses its own serialization, based on the Writable interface (see the sketch below).
• Other frameworks such as Avro, Thrift, and Protocol Buffers are also used.
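A minimal custom Writable, as a sketch of why the format is compact: the class below serializes to exactly 8 bytes, with no class metadata on the wire. The type itself is hypothetical.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// A hypothetical pair of ints as a custom Writable.
public class IntPairWritable implements Writable {
  private int first;
  private int second;

  public void set(int first, int second) {
    this.first = first;
    this.second = second;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(first);  // serialize: exactly 8 bytes total
    out.writeInt(second);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    first = in.readInt(); // deserialize in the same field order
    second = in.readInt();
  }
}
```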
MapReduce Framework
• The Jobtracker receives a MapReduce job execution request from the client.
• Does sanity checks to see if the job is configured properly.
• Computes the input splits.
• Loads the resources required for the job into HDFS.
• Assigns splits to tasktrackers for the map and reduce phases.
• Map split assignment is data-locality-aware.
• Single point of failure.
• The Tasktracker creates a new process for each task and executes it.
• Sends periodic heartbeats to the Jobtracker, along with other information about the task.
Image: Hadoop: The Definitive Guide (Tom White)
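To make the flow concrete, here is a minimal word-count job against the 0.20-era org.apache.hadoop.mapreduce API; the class names and input/output paths are illustrative, not from the slides.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        context.write(word, ONE); // emit (word, 1) per token
      }
    }
  }

  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
        Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // Submission hands the job to the Jobtracker, which computes the
    // input splits and schedules map tasks close to their data.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```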
References
http://hadoop.apache.org/common/docs/current/hdfs_design.html
Hadoop: The Definitive Guide, by Tom White. Copyright 2009 Tom White, 978-0-596-52197-4