Downloading Hadoop
I am using Hadoop 2.6.0-cdh5.15.0.
Before setting up Hadoop I read a lot of articles online, but when I finally got hands-on, a strange problem appeared:
after I formatted HDFS, the DataNode failed to start!
Below is my troubleshooting process; I hope it saves readers of this blog a few detours.
First, a look at my configuration files:
[root@miv hadoop]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/apps/data/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop:9000</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
    </property>
</configuration>
[root@miv hadoop]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/apps/data/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/apps/data/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
[root@miv hadoop]# cat slaves
hadoop
[root@miv hadoop]#
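One note on core-site.xml above: fs.default.name is the deprecated predecessor of fs.defaultFS, so the file sets the same value twice; keeping only fs.defaultFS would suffice. As a minimal sketch (not Hadoop's actual configuration loader), this is how the name/value pairs in a *-site.xml file can be read, with the core-site.xml from above inlined as sample data:

```python
# Sketch of reading Hadoop-style <property><name>/<value> pairs.
# CORE_SITE mirrors the core-site.xml shown above, inlined for illustration.
import xml.etree.ElementTree as ET

CORE_SITE = """\
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property><name>hadoop.tmp.dir</name><value>/apps/data/hadoop/tmp</value></property>
  <property><name>fs.default.name</name><value>hdfs://hadoop:9000</value></property>
  <property><name>fs.defaultFS</name><value>hdfs://hadoop:9000</value></property>
</configuration>
"""

def load_props(xml_text):
    """Return {name: value} for every <property> element in a Hadoop site file."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

props = load_props(CORE_SITE)
print(props["fs.defaultFS"])  # -> hdfs://hadoop:9000
```

Both the deprecated and the current key resolve to the same hdfs://hadoop:9000 URI here, which is why the duplication is harmless but redundant.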
The problem appears
I formatted the filesystem with hdfs namenode -format.
The format itself went smoothly, with no errors.
Then I restarted Hadoop with start-all.sh,
but the problem was still there.
At this point I tried a fix suggested online: copy the VERSION file from the current/ subdirectory of the data directory up into data/ itself. I did that, but it was useless!!!
Following the trail, starting from the failing DataNode
I noticed there is an hdfs command:
[root@miv sbin]# hdfs
Usage: hdfs [--config confdir] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
zkfc run the ZK Failover Controller daemon
datanode run a DFS datanode
dfsadmin run a DFS admin client
diskbalancer Distributes data evenly among disks on a given node
haadmin run a DFS HA admin client
fsck run a DFS filesystem checking utility
balancer run a cluster balancing utility
jmxget get JMX exported values from NameNode or DataNode.
mover run a utility to move block replicas across
storage types
oiv apply the offline fsimage viewer to an fsimage
oiv_legacy apply the offline fsimage viewer to an legacy fsimage
oev apply the offline edits viewer to an edits file
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
groups get the groups which users belong to
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
portmap run a portmap service
nfs3 run an NFS version 3 gateway
cacheadmin configure the HDFS cache
crypto configure HDFS encryption zones
storagepolicies list/get/set block storage policies
version print the version
Most commands print help when invoked w/o parameters.
[root@miv sbin]#
From the help output, hdfs datanode starts a DataNode directly in the foreground, so I ran that command.
An exception appeared:
java.net.BindException: Port in use: localhost:0
Caused by: java.net.BindException: Cannot assign requested address
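"Cannot assign requested address" is the OS refusing a bind() because the hostname being bound resolves to an IP address that is not assigned to any local network interface, exactly what happens when /etc/hosts maps a name (such as localhost) to the wrong address or omits it. A self-contained sketch reproducing that errno, using 203.0.113.1 (a TEST-NET documentation address that is never local) as the hypothetical bad resolution:

```python
import errno
import socket

def try_bind(host, port=0):
    """Attempt to bind a TCP socket to (host, port); return the errno on failure, None on success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

print(try_bind("127.0.0.1"))    # loopback is always local -> None (bind succeeds)
# 203.0.113.1 is not assigned to this machine, so the bind fails,
# typically with EADDRNOTAVAIL: "Cannot assign requested address"
print(try_bind("203.0.113.1"))
```

The DataNode hits the same failure when the name it tries to bind resolves to a non-local address, which is why the fix below is in /etc/hosts rather than in any Hadoop config.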
Solving the problem: hosts was the culprit
A Baidu search for this exception turned up the solution.
It was a hosts-file problem that was causing the DataNode startup to fail.
Fix the hosts file:
[root@miv sbin]# cat /etc/hosts
192.168.0.119 hadoop
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
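For clarity, this is how the corrected hosts file resolves names: each line maps one IP to one or more hostnames, and the first matching line wins. A small sketch, parsing the file above inlined as a string (not read from disk):

```python
# Sketch of hosts-file name resolution; HOSTS mirrors the corrected
# /etc/hosts shown above, inlined as sample data.
HOSTS = """\
192.168.0.119 hadoop
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
"""

def parse_hosts(text):
    """Return {hostname: ip}; the first entry for a name wins, as with the resolver."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping.setdefault(name, ip)
    return mapping

table = parse_hosts(HOSTS)
print(table["hadoop"])     # -> 192.168.0.119
print(table["localhost"])  # -> 127.0.0.1 (the IPv4 line comes first)
```

With these entries, the cluster hostname hadoop resolves to the machine's real interface address and localhost resolves to loopback, so both binds the DataNode attempts can succeed.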
Restarting
After restarting Hadoop, everything runs normally. Perfect!!! A round of applause for myself, haha.
[root@miv sbin]# jps
9456 SecondaryNameNode
9126 NameNode
9737 NodeManager
12651 Jps
9276 DataNode
9628 ResourceManager
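The jps listing above shows all five daemons of a single-node deployment up. A quick sketch of checking such output programmatically (sample data taken from the transcript above):

```python
# Verify that every expected Hadoop daemon appears in jps output.
# JPS_OUTPUT is the transcript above, inlined as sample data.
JPS_OUTPUT = """\
9456 SecondaryNameNode
9126 NameNode
9737 NodeManager
12651 Jps
9276 DataNode
9628 ResourceManager
"""

EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode",
            "ResourceManager", "NodeManager"}

running = {line.split()[1] for line in JPS_OUTPUT.splitlines() if line.strip()}
missing = EXPECTED - running
print("all up" if not missing else f"missing: {missing}")  # -> all up
```

If the DataNode were still failing, it would show up in the missing set, which is a faster check than eyeballing the list on a larger cluster.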