Environment: CentOS 6.7, Hadoop 2.7.3, VMware virtual machines
Download Hadoop: http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
namenode 192.168.137.9 ; secondnode 192.168.137.15 ; datanode 192.168.137.16
Edit /etc/hosts on all three hosts and add the namenode, secondnode, and datanode entries:
[root@namenode ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.9    namenode
192.168.137.15   secondnode
192.168.137.16   datanode
4. Download the JDK from the official site: jdk-8u77-linux-x64.tar.gz
5. Install Java
①yum remove java -y
②tar zxvf jdk-8u77-linux-x64.tar.gz
③mv jdk1.8.0_77 /usr/local/java
④vi /etc/profile
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
⑤source /etc/profile
[root@namenode src]# java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
Run the steps above on all three hosts.
6. Environment tuning:
cat << EOF > ~/.toprc
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMBcdfgjplrSuvyzX
        winflags=32569, sortindx=10, maxtasks=0
        summclr=1, msgsclr=1, headclr=3, taskclr=2
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
More tuning: raise the hadoop user's file-descriptor and process limits:
vim /etc/security/limits.conf
hadoop - nofile 32768
hadoop - nproc  32000
And ensure PAM applies the limits at login:
vim /etc/pam.d/system-auth
session required pam_limits.so
Apply on all nodes.
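After logging back in as the hadoop user, the new limits can be checked with the shell builtins (the expected values below assume the limits.conf entries from this step took effect):

```shell
# Both should report the limits.conf values once pam_limits is active
ulimit -n    # open files: expect 32768
ulimit -u    # max user processes: expect 32000
```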
7. Create the hadoop user
useradd -u 5000 hadoop && echo "hadoop" | passwd --stdin hadoop
mkdir /data && chown -R hadoop.hadoop /data
Run on all nodes.
8. Passwordless SSH login
①su - hadoop
②ssh-keygen
③ On namenode:
vi .ssh/authorized_keys
Append the contents of .ssh/id_rsa.pub from every node, then distribute the finished file back to each node.
chmod 600 .ssh/authorized_keys
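The aggregate-and-distribute step above can be sketched as follows — a hedged sketch, run as the hadoop user on namenode, assuming ssh-keygen has been run on every node and password authentication still works for this first copy:

```shell
# Collect every node's public key into one authorized_keys on namenode,
# then push the finished file back out. Hostnames are the ones from /etc/hosts.
for n in namenode secondnode datanode; do
    ssh "hadoop@$n" cat .ssh/id_rsa.pub >> ~/.ssh/authorized_keys
done
chmod 600 ~/.ssh/authorized_keys
for n in secondnode datanode; do
    scp ~/.ssh/authorized_keys "hadoop@$n":.ssh/
done
```

Afterwards, `ssh secondnode` from namenode (and between any pair of nodes) should no longer prompt for a password.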
9. On namenode:
Unpack Hadoop:
tar zxvf hadoop-2.7.3.tar.gz
Move the directory into place:
mv hadoop-2.7.3 /home/hadoop/hadoop2.7.3
10. On every node:
vim /home/hadoop/.bash_profile
Add:
export HADOOP_HOME=/home/hadoop/hadoop2.7.3
export PATH=$PATH:$HADOOP_HOME:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_HOME_WARN_SUPPRESS=1
export PATH
$source /home/hadoop/.bash_profile
11. On namenode:
$cd /home/hadoop/hadoop2.7.3/etc/hadoop
$vim hadoop-env.sh
Change:
export JAVA_HOME=/usr/local/java
Add:
export HADOOP_PREFIX=/home/hadoop/hadoop2.7.3
export HADOOP_HEAPSIZE=15000
$vim yarn-env.sh
Change:
export JAVA_HOME=/usr/local/java
$vim mapred-env.sh
Change:
export JAVA_HOME=/usr/local/java
$ vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>namenode:50070</value>
        <description>Address from which the NameNode serves the fsimage and edits</description>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>secondnode:50090</value>
        <description>Address from which the SecondaryNameNode fetches the latest fsimage</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>Number of replicas HDFS keeps of each file (default 3)</description>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///home/hadoop/hadoop2.7.3/hdfs/namesecondary</value>
        <description>Local filesystem path where the secondary stores temporary images; if this is a comma-separated list, the image is replicated to every directory. Only meaningful on the secondary.</description>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/work/hdfs/name/</value>
        <description>Local filesystem path where the NameNode persists the namespace and edit logs</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/work/hdfs</value>
        <description>Comma-separated list of local directories where the DataNode stores block files</description>
    </property>
    <property>
        <name>dfs.stream-buffer-size</name>
        <value>131072</value>
        <description>Buffer size used when reading and writing HDFS files, and for map output. The 4KB default is conservative on modern hardware; 128K (131072) or even 1M works, though very large values risk out-of-memory map and reduce tasks.</description>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>3600</value>
        <description>Seconds between two checkpoints; only meaningful on the secondary</description>
    </property>
</configuration>
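To spot-check a value without starting any daemons, a throwaway grep helper is enough — a minimal sketch, where `get_prop` is a hypothetical name and the pattern assumes the one-tag-per-line layout above (`hdfs getconf -confKey` is the supported tool once the cluster is up):

```shell
# get_prop NAME FILE — print the <value> that follows <name>NAME</name>.
# Relies on <value> sitting on the line right after <name>.
get_prop() {
    grep -A1 "<name>$1</name>" "$2" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Usage: get_prop dfs.replication hdfs-site.xml   -> 2 for this setup
```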
See the official reference for details: http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
$vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
$vim yarn-site.xml
Change:
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
$ vi core-site.xml
Change:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode:9000/</value>
        <description>Hostname and port of the NameNode</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>Directory for temporary files</description>
    </property>
</configuration>
See: http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/core-default.xml
12. Create the working directories on all nodes
$mkdir /home/hadoop/tmp
$mkdir -p /data/work/hdfs/namesecondary
13. On namenode:
$start-all.sh
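A few steps are left implicit before `start-all.sh` brings up a working cluster; a hedged sketch of them, run as hadoop on namenode (note the format step erases any existing HDFS metadata, so run it only on the very first start):

```shell
# List the DataNode hosts so the start scripts know where to launch them
echo datanode > /home/hadoop/hadoop2.7.3/etc/hadoop/slaves

# Ship the finished configuration to the other nodes
for n in secondnode datanode; do
    rsync -a /home/hadoop/hadoop2.7.3/etc/hadoop/ "$n":/home/hadoop/hadoop2.7.3/etc/hadoop/
done

# First start only: initialize the NameNode metadata
hdfs namenode -format

start-all.sh
jps    # each node should now show its daemons (NameNode, DataNode, ...)
```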