Hadoop Analysis, Part 3: Roles and Responsibilities of the Classes in org.apache.hadoop.hdfs.server.namenode

Hadoop 0.21 is used as the example throughout.
NameNode.java: maintains the filesystem namespace and file metadata. The comment in the code explains:
/**********************************************************
 * NameNode serves as both directory namespace manager and
 * "inode table" for the Hadoop DFS.  There is a single NameNode
 * running in any DFS deployment.  (Well, except when there
 * is a second backup/failover NameNode.)
 *
 * The NameNode controls two critical tables:
 *   1)  filename -> blocksequence (namespace)
 *   2)  block -> machinelist ("inodes")
 *
 * The first table is stored on disk and is very precious.
 * The second table is rebuilt every time the NameNode comes
 * up.
 *
 * 'NameNode' refers to both this class as well as the 'NameNode server'.
 * The 'FSNamesystem' class actually performs most of the filesystem
 * management.  The majority of the 'NameNode' class itself is concerned
 * with exposing the IPC interface and the http server to the outside world,
 * plus some configuration management.
 *
 * NameNode implements the ClientProtocol interface, which allows
 * clients to ask for DFS services.  ClientProtocol is not
 * designed for direct use by authors of DFS client code.  End-users
 * should instead use the org.apache.nutch.hadoop.fs.FileSystem class.
 *
 * NameNode also implements the DatanodeProtocol interface, used by
 * DataNode programs that actually store DFS data blocks.  These
 * methods are invoked repeatedly and automatically by all the
 * DataNodes in a DFS deployment.
 *
 * NameNode also implements the NamenodeProtocol interface, used by
 * secondary namenodes or rebalancing processes to get partial namenode's
 * state, for example partial blocksMap etc.
 **********************************************************/
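The split between the two tables can be sketched in plain Java. This is a hypothetical illustration, not the real Hadoop implementation: the class and method names are invented, but the shape matches the comment above, with the precious filename-to-blocks namespace kept authoritative while block locations are repopulated from DataNode block reports.

```java
import java.util.*;

// Hypothetical sketch of the NameNode's two critical tables.
class NameNodeTablesSketch {
    // 1) filename -> block sequence: the precious table, persisted to disk.
    static final Map<String, List<Long>> namespace = new HashMap<>();
    // 2) block -> machine list: rebuilt every time the NameNode comes up.
    static final Map<Long, Set<String>> blockLocations = new HashMap<>();

    static void addFile(String path, List<Long> blockIds) {
        namespace.put(path, blockIds);
    }

    // Simulates processing one DataNode's block report: repopulates table 2.
    static void processBlockReport(String datanode, List<Long> reportedBlocks) {
        for (long b : reportedBlocks) {
            blockLocations.computeIfAbsent(b, k -> new HashSet<>()).add(datanode);
        }
    }

    // Resolves a filename to the machines holding any of its blocks.
    static Set<String> locate(String path) {
        Set<String> machines = new TreeSet<>();
        for (long b : namespace.getOrDefault(path, List.of())) {
            machines.addAll(blockLocations.getOrDefault(b, Set.of()));
        }
        return machines;
    }
}
```

Because table 2 lives only in memory, losing it costs nothing: replaying the block reports restores it, which is exactly why only table 1 needs careful persistence.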
FSNamesystem.java: maintains several tables: the mapping from file names to block lists; the set of valid blocks; the mapping from blocks to node lists; the mapping from nodes to block lists; and an LRU cache of nodes with recently updated heartbeats.
/***************************************************
 * FSNamesystem does the actual bookkeeping work for the
 * DataNode.
 *
 * It tracks several important tables.
 *
 * 1)  valid fsname --> blocklist  (kept on disk, logged)
 * 2)  Set of all valid blocks (inverted #1)
 * 3)  block --> machinelist (kept in memory, rebuilt dynamically from reports)
 * 4)  machine --> blocklist (inverted #2)
 *   5)  LRU cache of updated-heartbeat machines
 ***************************************************/
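Table 5, the LRU cache of heartbeat machines, can be illustrated with an access-ordered `LinkedHashMap`. This is a hypothetical sketch, not FSNamesystem's actual data structure: the class name and capacity policy are invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an LRU cache keyed by datanode name, where the
// value is the last heartbeat timestamp. Access order means that touching
// an entry moves it to the "most recent" end; once capacity is exceeded,
// the machine with the stalest heartbeat access is evicted.
class HeartbeatLruSketch {
    static LinkedHashMap<String, Long> newCache(int capacity) {
        return new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > capacity;
            }
        };
    }
}
```

Usage: each heartbeat does a `put` (or `get` to refresh recency), so the eldest entry is always the node heard from least recently.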
INode.java: HDFS abstracts both files and directories as INodes.
/**
 * We keep an in-memory representation of the file/block hierarchy.
 * This is a base INode class containing common fields for file and
 * directory inodes.
 */
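The idea of one base class with file and directory subclasses can be sketched as follows. These class names are invented for illustration and are not the Hadoop 0.21 API; the real INode carries more fields (permissions, parent pointer, etc.).

```java
import java.util.*;

// Hypothetical mini-hierarchy mirroring the INode design: a base class
// holding fields common to files and directories, plus two subclasses.
abstract class SimpleInode {
    final String name;
    long modificationTime;              // a field shared by files and dirs
    SimpleInode(String name) { this.name = name; }
    abstract boolean isDirectory();
}

class SimpleInodeFile extends SimpleInode {
    final List<Long> blockIds = new ArrayList<>();  // file content = block list
    SimpleInodeFile(String name) { super(name); }
    boolean isDirectory() { return false; }
}

class SimpleInodeDirectory extends SimpleInode {
    final Map<String, SimpleInode> children = new TreeMap<>(); // child inodes by name
    SimpleInodeDirectory(String name) { super(name); }
    boolean isDirectory() { return true; }
}
```

A directory's children map of further INodes is what makes the whole namespace a tree held in memory.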
FSImage.java: persists the INode information to disk as the FSImage.
/**
 * FSImage handles checkpointing and logging of the namespace edits.
 *
 */
FSEditLog.java: writes the edits file.
/**
 * FSEditLog maintains a log of the namespace modifications.
 *
 */
BlockInfo.java: INodes describe files and directories, while file content itself is described by blocks. Given a file of length Size, the file is divided sequentially from offset 0 into fixed-size, numbered pieces; each piece is a block.
/**
 * Internal class for block metadata.
 */
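The fixed-size split described above is simple arithmetic. The sketch below is illustrative, not Hadoop code; the helper names are invented, and 64 MB is used in the usage note only because it was a common HDFS default.

```java
// A worked illustration of the fixed-size split: a file of length `size`
// is cut from offset 0 into blockSize-sized pieces; only the last block
// may be shorter than blockSize.
class BlockSplitSketch {
    static long blockCount(long size, long blockSize) {
        if (size == 0) return 0;
        return (size + blockSize - 1) / blockSize;   // ceil(size / blockSize)
    }

    static long lengthOfBlock(long size, long blockSize, long index) {
        long start = index * blockSize;              // offset where block begins
        return Math.min(blockSize, size - start);    // last block may be partial
    }
}
```

For example, a 150 MB file with 64 MB blocks splits into three blocks of 64 MB, 64 MB, and 22 MB.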
DatanodeDescriptor.java: represents a concrete storage node.
/**************************************************
 * DatanodeDescriptor tracks stats on a given DataNode,
 * such as available storage capacity, last update time, etc.,
 * and maintains a set of blocks stored on the datanode.
 *
 * This data structure is internal to the namenode. It is *not*
 * sent over-the-wire to the Client or the Datanodes. Neither is
 * it stored persistently in the fsImage.
 **************************************************/
FSDirectory.java: represents all directories in HDFS and their structural attributes.
/*************************************************
 * FSDirectory stores the filesystem directory state.
 * It handles writing/loading values to disk, and logging
 * changes as we go.
 *
 * It keeps the filename->blockset mapping always-current
 * and logged to disk.
 *
 *************************************************/
EditLogOutputStream.java: every journal record is written through an EditLogOutputStream. When instantiated concretely, this is a group of streams: several EditLogFileOutputStream instances plus one EditLogBackupOutputStream.
/**
 * A generic abstract class to support journaling of edits logs into
 * a persistent storage.
 */
EditLogFileOutputStream.java: writes journal records to edits or edits.new.
/**
 * An implementation of the abstract class {@link EditLogOutputStream}, which
 * stores edits in a local file.
 */
EditLogBackupOutputStream.java: sends journal records over the network to the backup node.
/**
 * An implementation of the abstract class {@link EditLogOutputStream},
 * which streams edits to a backup node.
 *
 * @see org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol#journal
 * (org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration,
 *  int, int, byte[])
 */
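The fan-out across these three classes can be sketched as follows. This is a hypothetical model, not the real Hadoop classes: an abstract sink stands in for EditLogOutputStream, an in-memory list stands in for the local edits file, and another list stands in for the network path to the backup node.

```java
import java.util.*;

// Hypothetical sketch of journaling fan-out: one record is written to
// every registered sink, so the local edits file and the backup node
// always see the same sequence of operations.
abstract class JournalSink {
    abstract void write(String op);
}

class LocalFileSink extends JournalSink {          // plays EditLogFileOutputStream
    final List<String> edits = new ArrayList<>();  // stand-in for the edits file
    void write(String op) { edits.add(op); }
}

class BackupSink extends JournalSink {             // plays EditLogBackupOutputStream
    final List<String> sentToBackup = new ArrayList<>(); // stand-in for the network
    void write(String op) { sentToBackup.add(op); }
}

class JournalFanOut {
    final List<JournalSink> sinks;
    JournalFanOut(List<JournalSink> sinks) { this.sinks = sinks; }
    void log(String op) {                          // one record goes to all sinks
        for (JournalSink s : sinks) s.write(op);
    }
}
```

Keeping all sinks behind one abstract type is what lets the NameNode treat a local file and a remote backup node uniformly when journaling.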
BackupNode.java: the NameNode's backup. The upgrade path is: Secondary NameNode -> Checkpoint Node (periodically saves metadata and takes checkpoints) -> Backup Node (keeps an in-memory image fully consistent with the NameNode's, updates its metadata whenever the NameNode's changes, and can checkpoint from its own image without downloading from the NameNode) -> Standby Node (supports hot standby).
/**
 * BackupNode.
 * <p>
 * Backup node can play two roles.
 * <ol>
 * <li>{@link NamenodeRole#CHECKPOINT} node periodically creates checkpoints,
 * that is downloads image and edits from the active node, merges them, and
 * uploads the new image back to the active. </li>
 * <li>{@link NamenodeRole#BACKUP} node keeps its namespace in sync with the
 * active node, and periodically creates checkpoints by simply saving the
 * namespace image to local disk(s).</li>
 * </ol>
 */
BackupStorage.java: creates a jspool under the Backup Node's backup directory, creates edits.new, and points the output stream at edits.new.
  /**
   * Load checkpoint from local files only if the memory state is empty.<br>
   * Set new checkpoint time received from the name-node. <br>
   * Move <code>lastcheckpoint.tmp</code> to <code>previous.checkpoint</code>.
   * @throws IOException
   */
TransferFsImage.java: responsible for fetching files from the NameNode.
/**
 * This class provides fetching a specified file from the NameNode.
 */
GetImageServlet.java: a subclass of HttpServlet that handles doGet requests.
/**
 * This class is used in Namesystem's jetty to retrieve a file.
 * Typically used by the Secondary NameNode to retrieve image and
 * edit file for periodic checkpointing.
 */