Hadoop's three modes:
Local (standalone) mode: everything runs locally in a single process; the distributed filesystem is not used
Pseudo-distributed mode: all 5 daemons run on a single host; generally used by developers to debug Hadoop programs
Fully distributed mode: at least 3 nodes; in this walkthrough the JobTracker and NameNode share one host, the SecondaryNameNode runs on a second host, and the DataNode and TaskTracker run on a third host
Environment used for this walkthrough:
CentOS (kernel 2.6.32-358.el6.x86_64)
jdk-7u21-linux-x64.rpm
hadoop-0.20.2-cdh3u6.tar.gz
1. Configuring Hadoop in pseudo-distributed mode
[root@localhost ~]# rpm -ivh jdk-7u21-linux-x64.rpm
[root@localhost ~]# vim /etc/profile.d/java.sh
JAVA_HOME=/usr/java/latest
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
[root@localhost ~]# tar xf hadoop-0.20.2-cdh3u6.tar.gz -C /usr/local/
[root@localhost ~]# cd /usr/local/
[root@localhost local]# ln -sv hadoop-0.20.2-cdh3u6/ hadoop
[root@localhost ~]# vim /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
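If the java or hadoop commands are not found in the next step, the new PATH entries are probably not in effect yet; sourcing the two profile scripts (or logging out and back in) should take care of that:
[root@localhost ~]# source /etc/profile.d/java.sh
[root@localhost ~]# source /etc/profile.d/hadoop.sh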
Verify that the JDK and Hadoop are installed correctly:
[root@localhost ~]# java -version
[root@localhost ~]# hadoop version
Create a dedicated user and change ownership of the Hadoop files:
[root@localhost ~]# useradd hduser
[root@localhost ~]# passwd hduser
[root@localhost ~]# chown -R hduser.hduser /usr/local/hadoop/
Create the directory Hadoop will use for temporary data:
[root@localhost ~]# mkdir /hadoop/temp -pv
[root@localhost ~]# chown -R hduser.hduser /hadoop/
Main scripts and what they do:
/usr/local/hadoop/bin/start-dfs.sh    Starts the namenode, datanode, and secondarynamenode processes
/usr/local/hadoop/bin/start-mapred.sh    Starts the jobtracker and tasktracker processes
/usr/local/hadoop/bin/hadoop-daemon.sh    Starts or stops a single daemon (see the example after this list)
/usr/local/hadoop/bin/start-all.sh    Starts all of the above processes
/usr/local/hadoop/bin/stop-all.sh    Stops all processes
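As an illustration of hadoop-daemon.sh (the rest of this walkthrough uses start-all.sh/stop-all.sh), a single daemon can be started or stopped by name:
[hduser@localhost ~]$ hadoop-daemon.sh start namenode
[hduser@localhost ~]$ hadoop-daemon.sh stop datanode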
Main configuration files:
/usr/local/hadoop/conf/masters    Lists the secondary name node (SecondaryNameNode) host
/usr/local/hadoop/conf/slaves    Lists the slave nodes (every node that runs a tasktracker and datanode)
/usr/local/hadoop/conf/core-site.xml    System-level (core) parameters
/usr/local/hadoop/conf/hdfs-site.xml    HDFS-related settings
/usr/local/hadoop/conf/mapred-site.xml    MapReduce-related settings, e.g. the default number of reduce tasks and the default upper/lower memory limits for tasks
/usr/local/hadoop/conf/hadoop-env.sh    Environment settings for running Hadoop (see the note after this list)
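Depending on the environment, hadoop-env.sh may also need JAVA_HOME set explicitly; a minimal example, assuming the JDK path installed above:
# /usr/local/hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/latest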
To get Hadoop running, only a few configuration files need to be edited:
[root@localhost conf]# vim core-site.xml
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- Put site-specific property overrides in this file. --> <configuration> <property> <name>hadoop.tmp.dir</name> <value>/hadoop/temp</value> </property> <property> <name>fs.default.name</name> <value>hdfs://localhost:8020</value> </property> </configuration>
[root@localhost conf]# vim mapred-site.xml
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- Put site-specific property overrides in this file. --> <configuration> <property> <name>mapred.job.tracker</name> <value>localhost:8021</value> </property> </configuration>
[root@localhost conf]# vim hdfs-site.xml
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- Put site-specific property overrides in this file. --> <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration>
Configure hduser so it can SSH to the local host without a password:
[hduser@localhost ~]$ ssh-keygen -t rsa -P ''
[hduser@localhost .ssh]$ ssh-copy-id -i id_rsa.pub hduser@localhost
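A quick way to confirm key-based login works is to run a command over SSH; it should complete without prompting for a password:
[hduser@localhost ~]$ ssh localhost 'date'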
[hduser@localhost ~]$ hadoop namenode -format    Format the name node
[hduser@localhost ~]$ start-all.sh    Start all the daemons
[hduser@localhost ~]$ jps    List the running Java processes
NameNode
DataNode
JobTracker
TaskTracker
SecondaryNameNode
If all 5 of the processes above are running, Hadoop has been configured successfully.
Commonly used Hadoop commands:
[hduser@localhost ~]$ hadoop    Show the built-in help
[hduser@localhost ~]$ hadoop fs    List the filesystem (fs) sub-commands
[hduser@localhost ~]$ hadoop fs -mkdir test    Create a directory on HDFS
[hduser@localhost ~]$ hadoop fs -ls    List files and directories
[hduser@localhost ~]$ hadoop fs -put test.txt test    Upload a local file to HDFS
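A few other fs sub-commands that are often handy (the paths here are only examples based on the files above):
[hduser@localhost ~]$ hadoop fs -cat test/test.txt     View a file's contents
[hduser@localhost ~]$ hadoop fs -get test/test.txt ./  Download a file from HDFS
[hduser@localhost ~]$ hadoop fs -rmr test              Recursively remove a directory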
Test that Hadoop is usable with the example jobs that ship with it:
[hduser@localhost ~]$ hadoop jar /usr/local/hadoop/hadoop-examples-0.20.2-cdh3u6.jar    Run the example jar (with no program name it lists the available examples)
[hduser@localhost ~]$ hadoop jar /usr/local/hadoop/hadoop-examples-0.20.2-cdh3u6.jar wordcount    Show the usage of wordcount
Usage: wordcount <in> <out>
<in> is where the input is read from; <out> is where the results are saved (on HDFS; the output directory must not already exist)
[hduser@localhost ~]$ hadoop jar /usr/local/hadoop/hadoop-examples-0.20.2-cdh3u6.jar wordcount test wordcount-out
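Since the output directory must not pre-exist, re-running the job requires removing it first, for example:
[hduser@localhost ~]$ hadoop fs -rmr wordcount-out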
[hduser@localhost ~]$ hadoop job -list all    List all jobs, including completed ones
[hduser@localhost ~]$ hadoop fs -ls wordcount-out    List the job's output files
[hduser@localhost ~]$ hadoop fs -cat wordcount-out/part-r-00000
Hadoop's built-in web interfaces (the firewall must be stopped, or these ports opened, before they can be reached; see the note after this list):
JobTracker HTTP server address and port, default 0.0.0.0:50030
TaskTracker HTTP server address and port, default 0.0.0.0:50060
NameNode HTTP server address and port, default 0.0.0.0:50070
DataNode HTTP server address and port, default 0.0.0.0:50075
SecondaryNameNode HTTP server address and port, default 0.0.0.0:50090
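The simplest way to make these ports reachable on CentOS 6 is to stop iptables (fine for a lab machine; on a real system, open just the ports listed above instead):
[root@localhost ~]# service iptables stop
[root@localhost ~]# chkconfig iptables off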
2. Configuring Hadoop in fully distributed mode:
NameNode and JobTracker on one node (lab201)
SecondaryNameNode (SNN) on a second node (lab202)
DataNode and TaskTracker on a third node (lab203)
Perform the following steps on all three nodes (note: keep the clocks of the three nodes synchronized):
[root@localhost ~]# rpm -ivh jdk-7u21-linux-x64.rpm
[root@localhost ~]# vim /etc/profile.d/java.sh
JAVA_HOME=/usr/java/latest
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
[root@localhost ~]# tar xf hadoop-0.20.2-cdh3u6.tar.gz -C /usr/local/
[root@localhost ~]# cd /usr/local/
[root@localhost local]# ln -sv hadoop-0.20.2-cdh3u6/ hadoop
[root@localhost ~]# vim /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
[root@localhost ~]# java -version
[root@localhost ~]# hadoop version
[root@localhost ~]# useradd hduser
[root@localhost ~]# passwd hduser
[root@localhost ~]# chown -R hduser.hduser /usr/local/hadoop/
[root@lab201 ~]# mkdir -pv /hadoop/temp
[root@lab201 ~]# chown -R hduser.hduser /hadoop
Master node configuration (lab201):
Configure passwordless SSH from hduser on the master to itself and to the other nodes (including lab202, since start-all.sh starts the SecondaryNameNode there over SSH):
[root@lab201 ~]# su - hduser
[hduser@lab201 ~]$ ssh-keygen -t rsa -P ''
[hduser@lab201 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hduser@localhost
[hduser@lab201 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hduser@lab201
[hduser@lab201 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hduser@lab202
[hduser@lab201 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hduser@lab203
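As before, a quick check that key-based login works without a password prompt:
[hduser@lab201 ~]$ ssh lab202 'date'
[hduser@lab201 ~]$ ssh lab203 'date'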
[hduser@lab201 conf]$ vim masters    Set the SecondaryNameNode node
lab202
[hduser@lab201 conf]$ vim slaves    Set the slave node(s)
lab203
[hduser@lab201 conf]$ vim core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/temp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://lab201:8020</value>
  </property>
</configuration>
[hduser@lab201 conf]$ vim mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>lab201:8021</value>
  </property>
</configuration>
[hduser@lab201 conf]$ vim hdfs-site.xml    hduser needs write permission on /hadoop so that /hadoop/data, /hadoop/name, etc. can be created automatically
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The actual number of replications can be specified when the file is created.</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/data</value>
    <final>true</final>
    <description>The directories where the datanode stores blocks.</description>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/name</value>
    <final>true</final>
    <description>The directories where the namenode stores its persistent metadata.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/hadoop/namesecondary</value>
    <final>true</final>
    <description>The directories where the secondarynamenode stores checkpoints.</description>
  </property>
</configuration>
Copy the configuration files to the other nodes:
[hduser@lab201 conf]$ scp hdfs-site.xml core-site.xml mapred-site.xml lab202:/usr/local/hadoop/conf/
[hduser@lab201 conf]$ scp hdfs-site.xml core-site.xml mapred-site.xml lab203:/usr/local/hadoop/conf/
[hduser@lab201 conf]$ hadoop namenode -format
[hduser@lab201 conf]$ start-all.sh
[hduser@lab201 conf]$ jps
On the slave node(s), run jps to check whether the expected daemons have started, and open the relevant web pages in a browser to confirm they are reachable.
Functional testing is the same as in pseudo-distributed mode and is not repeated here.
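As an extra check beyond the original steps, the NameNode can also be asked for a cluster report; the DataNode on lab203 should appear in the output:
[hduser@lab201 ~]$ hadoop dfsadmin -report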