Big-data-related jar packages can be downloaded at https://www.siyang.site/portfolio/
The platform layout is shown in the figure above.
-
Configure SSH
Run on every node:
ssh localhost
-
Configure passwordless SSH login
On node1:
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
This generates two files in the .ssh directory: id_dsa and id_dsa.pub.
In the .ssh directory, copy id_dsa.pub to authorized_keys:
cp id_dsa.pub authorized_keys
Copy node1's id_dsa.pub to /root/.ssh on node2, node3, and node4:
scp /root/.ssh/id_dsa.pub root@node2:/root/.ssh/node1.pub
scp /root/.ssh/id_dsa.pub root@node3:/root/.ssh/node1.pub
scp /root/.ssh/id_dsa.pub root@node4:/root/.ssh/node1.pub
On node2, node3, and node4, copy /root/.ssh/node1.pub to authorized_keys (if authorized_keys already exists there, append with cat node1.pub >> authorized_keys instead):
cp node1.pub authorized_keys
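The key flow above can be tried locally first in a throwaway directory, which is a safe way to see what the files look like before touching /root/.ssh. This is only a sketch: it uses an rsa key (newer OpenSSH releases reject dsa keys by default) and a temp directory instead of the real .ssh path, but the cp step is the same one the guide runs on node2-4.

```shell
# Generate a key pair in a temp dir, then "install" the public key
# the same way the guide copies node1.pub into authorized_keys.
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q   # rsa here; the guide uses dsa
cp "$tmp/id_rsa.pub" "$tmp/authorized_keys"    # same cp step as on each worker
ls "$tmp"
```

After the real keys are installed, `ssh root@node2 hostname` from node1 should succeed without a password prompt.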
-
Install the JDK
Run on every node.
Transfer the JDK package to the root directory with an SSH file-transfer tool.
Install the JDK:
rpm -i jdk-7u67-linux-x64.rpm
The installation directory is /usr/java.
Configure the environment variables by appending to the end of /etc/profile:
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
Save, then reload the file:
source /etc/profile
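Since this edit has to be repeated on every node, it can be done non-interactively with a here-doc instead of an editor. A minimal sketch, shown against a temp file here; on a real node the target would be /etc/profile:

```shell
# Append the two export lines from the guide in one paste-able command.
profile=$(mktemp)   # stand-in for /etc/profile in this sketch
cat >> "$profile" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
EOF
cat "$profile"
```

The quoted 'EOF' keeps `$PATH` and `$JAVA_HOME` literal so they are expanded at login time, not when the lines are written.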
-
Install Hadoop
Transfer the Hadoop tarball to the root directory with an SSH file-transfer tool.
Unpack Hadoop:
tar xf hadoop-2.6.5.tar.gz
Move Hadoop into place:
mkdir /opt/home (create this directory on every node)
mv hadoop-2.6.5 /opt/home/
-
Configure the Hadoop environment variables in /etc/profile:
export HADOOP_HOME=/opt/home/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
*At this point you can use scp to overwrite /etc/profile on node2-4 with node1's copy:
scp /etc/profile root@node*:/etc/profile
Save, then reload the file:
source /etc/profile
-
Edit the Hadoop configuration files
*Edit them on node1 only; the finished tree can then be pushed to node2-4 with scp (the /opt/home/ directory must already exist on node2-4):
scp -r hadoop-2.6.5/ root@node2:/opt/home/
(repeat for node3 and node4)
Edit hadoop-2.6.5/etc/hadoop/hadoop-env.sh:
change export JAVA_HOME=${JAVA_HOME}
to the absolute path export JAVA_HOME=/usr/java/jdk1.7.0_67
Edit hadoop-2.6.5/etc/hadoop/mapred-env.sh:
change # export JAVA_HOME=/home/y/libexec/jdk1.6.0/
to the absolute path export JAVA_HOME=/usr/java/jdk1.7.0_67
Edit hadoop-2.6.5/etc/hadoop/yarn-env.sh:
change # export JAVA_HOME=/home/y/libexec/jdk1.6.0/
to the absolute path export JAVA_HOME=/usr/java/jdk1.7.0_67
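All three JAVA_HOME edits are the same substitution, so they can be applied with sed instead of an editor. A sketch, demonstrated on a throwaway file seeded with both variants (the live line from hadoop-env.sh and the commented-out line from mapred-env.sh/yarn-env.sh); on node1 you would point it at hadoop-2.6.5/etc/hadoop/hadoop-env.sh, mapred-env.sh, and yarn-env.sh:

```shell
# One regex covers both forms: "#* *" matches an optional leading "# ".
f=$(mktemp)
printf '%s\n' 'export JAVA_HOME=${JAVA_HOME}' \
              '# export JAVA_HOME=/home/y/libexec/jdk1.6.0/' > "$f"
sed -i 's|^#* *export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_67|' "$f"
cat "$f"
```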
Edit hadoop-2.6.5/etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <!-- location of the HDFS data directory -->
    <name>hadoop.tmp.dir</name>
    <value>/var/home/hadoop/full</value>
  </property>
</configuration>
Edit hadoop-2.6.5/etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:50090</value>
  </property>
</configuration>
Edit hadoop-2.6.5/etc/hadoop/slaves and list the worker nodes:
node2
node3
node4
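Once node1's tree is fully configured, the push to the workers mentioned above can be scripted. This sketch only prints the commands (mkdir first, since /opt/home must exist on each worker) so the list can be reviewed before running it:

```shell
# Build the per-worker command list; echo rather than execute.
cmds=$(for n in node2 node3 node4; do
  echo "ssh root@$n mkdir -p /opt/home"
  echo "scp -r /opt/home/hadoop-2.6.5 root@$n:/opt/home/"
done)
printf '%s\n' "$cmds"
```

Dropping the `echo`s (and running the loop body directly) performs the actual copy, relying on the passwordless SSH set up earlier.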
-
Format the NameNode on node1 (run this once, right after installation; reformatting an existing cluster destroys its HDFS metadata):
hdfs namenode -format
-
Start Hadoop from node1:
start-dfs.sh
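To confirm the cluster came up, run jps on each node and compare with what the configuration above implies: NameNode on node1, DataNode on node2-4, and SecondaryNameNode on node2. A sketch that prints the checks to run:

```shell
# Expected after start-dfs.sh, per the config above:
#   node1  -> NameNode
#   node2  -> DataNode + SecondaryNameNode
#   node3  -> DataNode
#   node4  -> DataNode
checks=$(for n in node1 node2 node3 node4; do echo "ssh root@$n jps"; done)
printf '%s\n' "$checks"
```

The NameNode web UI should also answer at http://node1:50070 once HDFS is up.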