Method 1: Copy the SecondaryNameNode's data into the directory where the NameNode stores its data.
1. Kill the NameNode process with kill -9
2. Delete the data stored by the NameNode (/opt/module/hadoop-2.7.2/data/tmp/dfs/name)
[*****@hadoop102 hadoop-2.7.2]$ rm -rf /opt/module/hadoop-2.7.2/data/tmp/dfs/name/*
3. Copy the SecondaryNameNode's data into the original NameNode data directory
[*****@hadoop102 dfs]$ scp -r *****@hadoop104:/opt/module/hadoop-2.7.2/data/tmp/dfs/namesecondary/* ./name/
4. Restart the NameNode
[*****@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
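The four steps above can be consolidated into a small script. Because wiping a NameNode metadata directory is destructive, the sketch below only prints each command (a dry run); HADOOP_HOME and the hadoop104 SecondaryNameNode host are the values used in this guide — adjust them for your cluster.

```shell
#!/bin/sh
# Dry-run sketch of Method 1: method1_commands prints each recovery command
# instead of executing it. Paths/hosts match this guide's setup.
HADOOP_HOME=/opt/module/hadoop-2.7.2
DFS_DIR=$HADOOP_HOME/data/tmp/dfs
SNN_HOST=hadoop104

method1_commands() {
  echo "kill -9 <NameNode-pid>"                                   # 1. stop the NameNode
  echo "rm -rf $DFS_DIR/name/*"                                   # 2. clear its metadata dir
  echo "scp -r $SNN_HOST:$DFS_DIR/namesecondary/* $DFS_DIR/name/" # 3. pull the SNN copy
  echo "$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"        # 4. restart
}

method1_commands
```

Replace `<NameNode-pid>` with the actual PID (e.g. from `jps`) before running the real commands.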
Method 2: Start the NameNode daemon with the -importCheckpoint option, which copies the SecondaryNameNode's data into the NameNode directory.
1. Modify hdfs-site.xml:
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>120</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/opt/module/hadoop-2.7.2/data/tmp/dfs/name</value>
</property>
2. Kill the NameNode process with kill -9
3. Delete the data stored by the NameNode (/opt/module/hadoop-2.7.2/data/tmp/dfs/name)
[*****@hadoop102 hadoop-2.7.2]$ rm -rf /opt/module/hadoop-2.7.2/data/tmp/dfs/name/*
4. If the SecondaryNameNode is not on the same host as the NameNode, copy the SecondaryNameNode's data directory to the same level as the NameNode's data directory, and delete the in_use.lock file
[*****@hadoop102 dfs]$ scp -r *****@hadoop104:/opt/module/hadoop-2.7.2/data/tmp/dfs/namesecondary ./
[*****@hadoop102 namesecondary]$ rm -rf in_use.lock
[*****@hadoop102 dfs]$ pwd
/opt/module/hadoop-2.7.2/data/tmp/dfs
[*****@hadoop102 dfs]$ ls
data name namesecondary
5. Import the checkpoint data (wait a while, then stop it with Ctrl+C)
[*****@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -importCheckpoint
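The copy in step 4 lands where -importCheckpoint expects it: the option loads the checkpoint from dfs.namenode.checkpoint.dir, which in Hadoop 2.x defaults to a namesecondary directory under hadoop.tmp.dir. This property is not set in this guide's hdfs-site.xml; the default from hdfs-default.xml is shown here only for reference:

```xml
<!-- Default from hdfs-default.xml (not set in this guide);
     -importCheckpoint loads the fsimage from this directory. -->
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
</property>
```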
6. Start the NameNode
[*****@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
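After starting the daemon it can take a moment before the NameNode actually serves requests. A minimal pure-bash helper to wait for its RPC port is sketched below; port 9000 on hadoop102 is an assumption — check fs.defaultFS in your core-site.xml.

```shell
# Wait until host:port accepts TCP connections, trying up to $3 times
# (default 30, one second apart) via bash's /dev/tcp pseudo-device.
# Returns 0 once the port is open, 1 on timeout.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0                        # port is open
    fi
    i=$((i+1)); sleep 1
  done
  return 1                            # gave up waiting
}

# Usage (host/port assumed from this guide's setup):
# wait_for_port hadoop102 9000 && echo "NameNode RPC is up"
```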