Tuesday, November 13, 2012

Hadoop Cluster – The Anatomy of Hadoop Pipeline Write


Now we know about Big Data, HDFS, MapReduce and the different types of Hadoop nodes. In this post, I will cover how a client writes files within a Hadoop cluster, along with the different write options. The client has a file, file 1, which is split into three blocks named A, B and C, as depicted in the figure below.

Writing blocks to different data nodes within the same rack:-
Step 1:- The client sends a request to the NameNode, saying: "I have three blocks of file 1. Please tell me which nodes I should write these blocks to."
Step 2:- The NameNode replies with the placement: data node 1 for block A, data node 2 for block B, and data node 3 for block C.
Step 3:- The client writes block A to data node 1. The write of block B does not start until the client receives the acknowledgement for block A. Thereafter it writes block C.
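The three steps above can be sketched as a small simulation. This is a toy model of the exchange, not the real HDFS RPC API: the NameNode assigns one data node per block, and the client writes the blocks one after another, treating each write as acknowledged before the next begins. The node names and the round-robin assignment rule are illustrative assumptions.

```python
def assign_blocks(blocks, data_nodes):
    """Hypothetical NameNode logic: map each block to a data node round-robin."""
    return {block: data_nodes[i % len(data_nodes)]
            for i, block in enumerate(blocks)}

def write_file(blocks, data_nodes):
    # Step 1 and 2: ask the "NameNode" where each block should go.
    placement = assign_blocks(blocks, data_nodes)
    written = []
    # Step 3: write sequentially, one block at a time, waiting for each ack.
    for block in blocks:
        written.append((block, placement[block]))
    return written

print(write_file(["A", "B", "C"], ["dn1", "dn2", "dn3"]))
# [('A', 'dn1'), ('B', 'dn2'), ('C', 'dn3')]
```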

Figure 1

As per figure 1, this layout does not provide redundancy or availability if any of the data nodes or the TOR (Top of Rack) switch fails.

Data node 1 automatically looks for the nearest available data node and replicates its block A to it. Data nodes 2 and 3 do the same for blocks B and C within the same rack, as depicted in figure 2. This replication maintains redundancy and availability if any single data node fails, but it still cannot provide availability if the whole rack fails.
Note: the above implementation is based on a replication factor of 2.
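The intra-rack replication described above (replication factor 2) can be sketched as follows. The "nearest node is the next one in the rack list" rule and the node names are illustrative assumptions, not the real HDFS block placement policy.

```python
def replicate_in_rack(primary_copies, rack_nodes):
    """Return {block: [primary, replica]}, with the replica placed on
    another node of the same rack (assumed to be the next node in the list)."""
    replicas = {}
    for block, node in primary_copies.items():
        idx = rack_nodes.index(node)
        peer = rack_nodes[(idx + 1) % len(rack_nodes)]  # nearest available node
        replicas[block] = [node, peer]
    return replicas

print(replicate_in_rack({"A": "dn1", "B": "dn2", "C": "dn3"},
                        ["dn1", "dn2", "dn3"]))
# {'A': ['dn1', 'dn2'], 'B': ['dn2', 'dn3'], 'C': ['dn3', 'dn1']}
```

Note that every copy still lives in the same rack, which is exactly why a rack failure remains fatal in this scheme.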
Figure 2

To maintain high uptime, there is the option of the "Hadoop pipeline write". The client sends a write request for block A to data node 1. On receiving it, data node 1 looks for the nearest available data node, not in the same rack but in a different one, as depicted in figure 3. Data node 1 then copies block A to data node 4, and data node 4 in turn copies it to data node 5, forming a write pipeline. In the same way, all the blocks are copied to their respective nodes.
Data node 5 acknowledges block A to data node 4, data node 4 acknowledges to data node 1, and data node 1 sends the acknowledgement back to the client.
Figure 3
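A minimal simulation of the pipeline write and its acknowledgement chain (replication factor 3): the client hands the block to the first data node, each node forwards it to the next, and acknowledgements travel back along the same chain to the client. The node names match the discussion above but are otherwise illustrative.

```python
def pipeline_write(block, pipeline):
    """Return the ordered data and ack events for one block written
    through the given pipeline of data nodes."""
    chain = ["client"] + pipeline
    hops = list(zip(chain, chain[1:]))
    # Data flows forward along the pipeline...
    data = [f"{src} sends {block} to {dst}" for src, dst in hops]
    # ...and acknowledgements flow back in reverse order.
    acks = [f"{dst} acks {block} to {src}" for src, dst in reversed(hops)]
    return data + acks

for event in pipeline_write("A", ["dn1", "dn4", "dn5"]):
    print(event)
# client sends A to dn1
# dn1 sends A to dn4
# dn4 sends A to dn5
# dn5 acks A to dn4
# dn4 acks A to dn1
# dn1 acks A to client
```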

The underlying network can be layer 2 or layer 3. If it is a layer 2 network, loops must be avoided so that the full bandwidth remains usable, because the pipeline write requires more bandwidth than a single-rack write.
The figure 3 implementation is based on a replication factor of 3.

1 comment:

Anonymous said...

Are you sure about this one
"Step 3:- Client will write block A on data node 1. Block B writing will not start till the client will get the acknowledgement of block A. Thereafter it will write C."

As far as i know these are done in parallel