How to Fix NameNode and DataNode Failing to Start After Installing Hadoop (Pseudo-Distributed Mode)
After running ./start-all.sh, there was no error message at all, but jps gave the following:
[hadoop@localhost sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting yarn daemons
resourcemanager running as process 21995. Stop it first.
localhost: nodemanager running as process 22133. Stop it first.
[hadoop@localhost sbin]$ jps
22133 NodeManager
23848 Jps
21995 ResourceManager
Clearly, the DataNode and NameNode are missing. I searched online and tried many fixes, none of which worked.
One common suggestion is to inspect the data/tmp/data directory, but I found that directory did not exist at all on my machine. I was completely stumped.
So I turned to the files under $HADOOP_HOME/logs and looked for anything related to the DataNode and NameNode.
I started with the DataNode log.
It is long, so skip straight to the end (note the WARN and ERROR lines):
2019-11-02 17:35:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-11-02 17:36:00,195 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /usr/software/hadoop_install/hadoop/data/dfs/data :
java.io.FileNotFoundException: File file:/usr/software/hadoop_install/hadoop/data/dfs/data does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:635)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:861)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:625)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
2019-11-02 17:36:00,207 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/software/hadoop_install/hadoop/data/dfs/data"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2631)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
2019-11-02 17:36:00,208 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2019-11-02 17:36:00,216 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
************************************************************/
[hadoop@localhost logs]$
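Rather than scrolling through the whole file, you can jump straight to lines like these. Here is a small sketch; the log path in the comment is taken from the startup output above and is an assumption about your install:

```shell
# last_errors: show the tail and the recent WARN/ERROR lines of a daemon log.
# The fatal error is almost always in the last few lines.
last_errors() {
  local log="$1"
  tail -n 30 "$log"                        # end of the log, where the crash is
  grep -E 'WARN|ERROR' "$log" | tail -n 5  # or filter warnings/errors directly
}

# For this article's install the DataNode log would be (path is an assumption):
# last_errors /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.log
```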
That's when it dawned on me: it had to be a permissions problem, the hadoop user simply could not access data. I went straight back to the Hadoop installation directory to check the ownership of its files:
[hadoop@localhost hadoop]$ ls -l
total 128
drwxr-xr-x. 2 hadoop hadoop   194 Nov  2 17:50 bin
drwxr-xr-x. 2 root   root       6 Nov  2 16:58 data
drwxr-xr-x. 3 hadoop hadoop    20 Nov  2 16:57 etc
drwxr-xr-x. 2 hadoop hadoop   106 Sep 10  2018 include
drwxr-xr-x. 3 hadoop hadoop    20 Sep 10  2018 lib
drwxr-xr-x. 2 hadoop hadoop   239 Sep 10  2018 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Sep 10  2018 LICENSE.txt
drwxrwxr-x. 3 hadoop hadoop  4096 Nov  2 17:36 logs
-rw-r--r--. 1 hadoop hadoop 15915 Sep 10  2018 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Sep 10  2018 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Sep 10  2018 sbin
drwxr-xr-x. 4 hadoop hadoop    31 Sep 10  2018 share
drwxr-xr-x. 2 root   root      27 Nov  2 16:23 test
Sure enough, the listing shows that data is owned by root, so the hadoop user cannot operate on it at all. I must have carelessly created it as root during the initial setup.
From here the fix is simple; two commands do it:
# change the owner; hadoop is my username, data is the directory name
sudo chown -R hadoop data
# change the group
sudo chgrp -R hadoop data
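The two commands above can also be collapsed into one, since chown accepts an owner:group pair. The sketch below demonstrates the pattern on a scratch directory with the current user, so it can run without root; for the article's case the actual invocation would be sudo chown -R hadoop:hadoop data:

```shell
# chown with owner:group changes both in a single recursive pass,
# replacing the separate chown -R and chgrp -R calls.
d=$(mktemp -d)                         # scratch directory for the demo
chown -R "$(id -un):$(id -gn)" "$d"    # owner and group together
stat -c '%U %G' "$d"                   # prints the (unchanged here) owner and group
rm -rf "$d"
```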
After the change, list the directory again; the fix took effect:
[hadoop@localhost hadoop]$ ls -l
total 128
drwxr-xr-x. 2 hadoop hadoop   194 Nov  2 17:50 bin
drwxr-xr-x. 2 hadoop hadoop     6 Nov  2 16:58 data
drwxr-xr-x. 3 hadoop hadoop    20 Nov  2 16:57 etc
drwxr-xr-x. 2 hadoop hadoop   106 Sep 10  2018 include
drwxr-xr-x. 3 hadoop hadoop    20 Sep 10  2018 lib
drwxr-xr-x. 2 hadoop hadoop   239 Sep 10  2018 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Sep 10  2018 LICENSE.txt
drwxrwxr-x. 3 hadoop hadoop  4096 Nov  2 17:36 logs
-rw-r--r--. 1 hadoop hadoop 15915 Sep 10  2018 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Sep 10  2018 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Sep 10  2018 sbin
drwxr-xr-x. 4 hadoop hadoop    31 Sep 10  2018 share
drwxr-xr-x. 2 root   root      27 Nov  2 16:23 test
Then go back and stop all the daemons started earlier:
[hadoop@localhost sbin]$ ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: no namenode to stop
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
localhost: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
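Before restarting, it is worth confirming that nothing is actually left running (note the kill -9 above: the NodeManager had to be forced down). A small sketch that flags any JVM process other than the jps tool itself:

```shell
# leftovers: given jps output, print any daemon still running.
# Empty output means a clean slate for ./start-all.sh.
leftovers() {
  echo "$1" | awk 'NF == 2 && $2 != "Jps"'
}

# leftovers "$(jps)"   # run this on the cluster host
```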
Finally, start everything again:
[hadoop@localhost sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting yarn daemons
starting resourcemanager, logging to /usr/software/hadoop_install/hadoop/logs/yarn-hadoop-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/software/hadoop_install/hadoop/logs/yarn-hadoop-nodemanager-localhost.localdomain.out
Run jps to check what started:
[hadoop@localhost sbin]$ jps
36534 DataNode
36343 NameNode
37097 NodeManager
36762 SecondaryNameNode
36954 ResourceManager
37422 Jps
As you can see, the DataNode and NameNode are now both up, along with everything else.
What a relief to finally get it working! If something here does not match what you see, or you run into other problems, leave a comment and ask; I have been through this install quite a few times lately.
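As a final check, the eyeball scan of the jps listing can be scripted too. A sketch, assuming the five daemons of this pseudo-distributed setup:

```shell
# check_daemons: verify every expected daemon appears in jps output.
# The daemon list matches this article's pseudo-distributed setup.
check_daemons() {
  local out="$1" d
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons up"
}

# check_daemons "$(jps)"   # run this on the cluster host
```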