Hadoop cluster installation (install JDK and ZooKeeper first, then Hadoop)
① Create the hadoop user
groupadd hadoop            # create the hadoop group
useradd -g hadoop hadoop   # create the hadoop user
passwd hadoop              # set the hadoop user's password
② Configure passwordless SSH login
#ssh-keygen -f .ssh/id_rsa -N ""
# generate the private/public key pair
ssh-keygen -t rsa
# append this node's public key to authorized_keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# append every node's public key to the master node's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh user@ip 'cat >> ~/.ssh/authorized_keys'
[root@hadoop ~]# cd /home/hadoop/.ssh
[root@hadoop .ssh]# chmod 710 authorized_keys  # with the default permissions, passwordless auth fails for non-root users; 600 also works
# distribute the authorized_keys file (now containing all nodes' keys) to every node
scp authorized_keys [email protected]:~/.ssh/
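The permission requirements mentioned above can be sketched as follows; a temporary directory stands in for the hadoop user's real ~/.ssh so the demo does not touch actual keys.

```shell
# Demo of the permissions sshd expects for key-based login.
# A temp directory stands in for the real ~/.ssh.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                   # directory: owner-only access
chmod 600 "$SSH_DIR/authorized_keys"   # key file: owner read/write only
stat -c '%a' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```

If the directory or file is group- or world-writable, sshd silently falls back to password authentication, which is the symptom the note above describes.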
③ Download hadoop-2.7.3.tar.gz and copy it to the /soft directory
mkdir /soft; cd /soft; tar -zxvf hadoop-2.7.3.tar.gz   # extract
ln -s hadoop-2.7.3 hadoop   # create a symlink
# change the owner of the directory to hadoop (run as root)
chown -R hadoop:hadoop /soft
④ Edit the configuration files (6 files) as the hadoop user
hadoop/etc/hadoop/hadoop-env.sh   # set JAVA_HOME
hadoop/etc/hadoop/yarn-env.sh     # set JAVA_HOME
hadoop/etc/hadoop/core-site.xml   # add properties; see the official configuration docs
hadoop/etc/hadoop/hdfs-site.xml   # add properties; see the official configuration docs
#cp mapred-site.xml.template mapred-site.xml
hadoop/etc/hadoop/mapred-site.xml
hadoop/etc/hadoop/yarn-site.xml   # add properties; see the official configuration docs
# see the end of this file for the exact contents
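As a sketch of the kind of properties step ④ adds, a minimal HA-style core-site.xml can be generated with a heredoc. The nameservice id `mycluster` and the ZooKeeper hosts below are placeholders, not values from this guide; substitute your own.

```shell
# Write an illustrative core-site.xml into a temp dir (placeholder values only).
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- default filesystem: the HA nameservice id (placeholder) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- ZooKeeper quorum used by ZKFC for automatic failover (placeholder hosts) -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
</configuration>
EOF
grep -c '<property>' "$CONF_DIR/core-site.xml"
```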
⑤ Add Hadoop environment variables
vi ~/.bashrc
export HADOOP_HOME=/soft/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
# apply the changes
source ~/.bashrc
#scp -qr ./hadoop-2.7.3 hostname@ip:/soft/hadoop-2.7.3
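A quick sanity check that the two exports took effect (using the values from this guide):

```shell
# Verify that $HADOOP_HOME/bin ended up on PATH after sourcing ~/.bashrc.
export HADOOP_HOME=/soft/hadoop
export PATH="$PATH:$HADOOP_HOME/bin"
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) result="PATH ok" ;;
  *) result="PATH missing $HADOOP_HOME/bin" ;;
esac
echo "$result"
```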
⑥ Startup:
a. First start the journalnode process on every node
cd /soft/hadoop
./sbin/hadoop-daemon.sh start journalnode
b. Then format HDFS, on the master node only
./bin/hdfs namenode -format   # format the namenode
./bin/hdfs zkfc -formatZK     # format the HA state in ZooKeeper
./bin/hdfs namenode           # start the namenode
c. Sync metadata between the master and standby nodes: run this on the standby node while the namenode is alive
./bin/hdfs namenode -bootstrapStandby
# once the sync finishes, press Ctrl+C on the master node to stop the namenode process.
d. Stop the journalnode process on all nodes
./sbin/hadoop-daemon.sh stop journalnode
e. Start a standalone zkfc process
./sbin/hadoop-daemon.sh start zkfc
f. If all of the above succeeded, start all HDFS-related processes
./sbin/start-dfs.sh   #./sbin/stop-dfs.sh
g. Start YARN
./sbin/start-yarn.sh   #./sbin/stop-yarn.sh
h. On the standby node, run
./sbin/yarn-daemon.sh start resourcemanager
i. Check the ResourceManager state
./bin/yarn rmadmin -getServiceState rm1
./bin/yarn rmadmin -getServiceState rm2
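The two checks above can be wrapped in one pass that reports which id is active. `get_state` below is a stub standing in for the real `./bin/yarn rmadmin -getServiceState` call (the sample states are made up), since the real command needs a running cluster:

```shell
# Stub: on a real cluster replace the body with
#   ./bin/yarn rmadmin -getServiceState "$1"
get_state() {
  case "$1" in
    rm1) echo standby ;;   # made-up sample state
    rm2) echo active ;;    # made-up sample state
  esac
}
active=""
for id in rm1 rm2; do
  if [ "$(get_state "$id")" = "active" ]; then active="$id"; fi
done
echo "active ResourceManager: ${active:-none}"
```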
j. WordCount example test
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /test/test.txt /test/out/
⑦ Shutdown:
./sbin/stop-all.sh
⑧ Forcing an Active/Standby switch
# NameNode switch
hdfs haadmin -transitionToActive/-transitionToStandby -forcemanual nn1
# ResourceManager switch
yarn rmadmin -transitionToActive/-transitionToStandby -forcemanual rm1
# Caveat: after a forced manual transition, ZKFC stops working, so automatic failover is no longer guaranteed.
⑨ Other commands
hdfs dfsadmin -refreshNodes                    # re-read the hosts and exclude files
hdfs dfsadmin -safemode enter|leave|get|wait   # safe-mode maintenance command
hdfs dfsadmin -report                          # report basic filesystem information and statistics
HDFS operations:
vi test.txt
hadoop apache
hadoop ywendeng
hadoop tomcat
hdfs dfs -mkdir /test          # create a directory on HDFS
hdfs dfs -put test.txt /test   # upload a file to HDFS
hdfs dfs -ls /test             # check that test.txt was uploaded
ntp:
service ntpd start
service ntpd status
service ntpd stop
-------------------------------------------------------------------------------------------
Troubleshooting:
Exception: cause - the Hadoop native library's architecture (32-/64-bit) does not match the OS
Java HotSpot(TM) Client VM warning:
You have loaded library /soft/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0
which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>',
or link it with '-z noexecstack'.
17/11/03 01:20:27 WARN util.NativeCodeLoader:
Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable
17/11/03 10:52:46 INFO ipc.Client: Retrying connect to server:
hadoop1/192.8.8.12:8033. Already tried 0 time(s);
retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From hadoop1/192.8.8.12 to hadoop1:8033 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
#hadoop fs -ls hdfs://192.8.8.11:8033
# one of the resourcemanagers was not started
# a datanode that fails to start may have a clusterID in its ./current/VERSION that differs from the namenode's
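The clusterID check can be done by comparing the two VERSION files directly. The demo below uses mock files in a temp directory; on a real node point the two variables at the namenode's and datanode's `current/VERSION` instead.

```shell
# Compare clusterIDs between (mock) namenode and datanode VERSION files.
work="$(mktemp -d)"
printf 'clusterID=CID-demo-1234\n' > "$work/nn_VERSION"   # mock namenode VERSION
printf 'clusterID=CID-demo-9999\n' > "$work/dn_VERSION"   # mock datanode VERSION
nn_id=$(grep '^clusterID=' "$work/nn_VERSION" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$work/dn_VERSION" | cut -d= -f2)
if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterIDs match"
else
  echo "clusterID mismatch: copy the namenode's clusterID into the datanode's VERSION file"
fi
```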