Hadoop Fully Distributed Cluster Setup


STEP 1

      Create the hadoop directory 【mkdir hadoop】

        Note: 【this is the installation directory for Hadoop】
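
        Note: a minimal sketch of the same step using an absolute path (so it does not depend on the current working directory), matching the /opt/hadoop path used in the steps below:

        mkdir -p /opt/hadoop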

STEP 2

      Extract the Hadoop archive 【tar -zxf /opt/software/hadoop-2.5.0-cdh5.3.6.tar.gz -C /opt/hadoop/】
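
      Note: a quick check that the extraction worked, assuming the paths above:

      ls /opt/hadoop/
      # expected output: hadoop-2.5.0-cdh5.3.6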

STEP 3

      Rename the Hadoop directory 【mv /opt/hadoop/hadoop-2.5.0-cdh5.3.6/ /opt/hadoop/hadoop-2.5.0】

        Note: 【renaming keeps the path shorter and easier to manage later】

STEP 4

      Modify the Hadoop configuration files

        Note: the files are located under 【/opt/hadoop/hadoop-2.5.0/etc/hadoop】

STEP 4-1

        Set the JDK path

        Update the JAVA_HOME setting in the following files 【hadoop-env.sh, mapred-env.sh, yarn-env.sh】

        Set it to 【export JAVA_HOME=/opt/hadoop/jdk1.7.0_67】
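
        Note: a minimal command-line sketch of this change, assuming the three files are under /opt/hadoop/hadoop-2.5.0/etc/hadoop and each already contains an export JAVA_HOME line (possibly commented out); editing the files by hand works just as well:

        cd /opt/hadoop/hadoop-2.5.0/etc/hadoop
        # rewrite any existing "export JAVA_HOME=..." line (commented or not) in each env script
        for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
            sed -i 's|^#\{0,1\}[[:space:]]*export JAVA_HOME=.*|export JAVA_HOME=/opt/hadoop/jdk1.7.0_67|' "$f"
        done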

       

STEP 4-2

        Add the data nodes

        Modify the following file 【slaves】

        Replace 【localhost】 with the hostnames of the cluster nodes, one per line (a one-line command sketch follows the list):

                            huayi1.org

                            huayi2.org

                            huayi3.org
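
        Note: a minimal sketch that writes the slaves file in one step, assuming the configuration directory above:

        printf 'huayi1.org\nhuayi2.org\nhuayi3.org\n' > /opt/hadoop/hadoop-2.5.0/etc/hadoop/slaves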

                            

 

STEP 4-3

        Modify the XML configuration files

        Edit the XML configuration in the following files 【core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml】

        Changes in core-site.xml 【

                            <configuration>
                                <property>
                                    <name>fs.defaultFS</name>
                                    <value>hdfs://huayi1.org:8020</value>
                                </property>
                                <property>
                                    <name>hadoop.tmp.dir</name>
                                    <value>/opt/hadoop/hadoop-2.5.0/tmp</value>
                                </property>
                            </configuration>
                            】

        Changes in hdfs-site.xml 【

                            <configuration>
                                <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                                </property>
                                <property>
                                    <name>dfs.namenode.http-address</name>
                                    <value>huayi1.org:50070</value>
                                </property>
                                <property>
                                    <name>dfs.namenode.secondary.http-address</name>
                                    <value>huayi1.org:50090</value>
                                </property>
                            </configuration>
                            】

        Changes in yarn-site.xml 【

                            <configuration>
                                <property>
                                    <name>yarn.nodemanager.aux-services</name>
                                    <value>mapreduce_shuffle</value>
                                </property>
                                <property>
                                    <name>yarn.resourcemanager.hostname</name>
                                    <value>huayi1.org</value>
                                </property>
                                <property>
                                    <name>yarn.log-aggregation-enable</name>
                                    <value>true</value>
                                </property>
                                <property>
                                    <name>yarn.log-aggregation.retain-seconds</name>
                                    <value>86400</value>
                                </property>
                            </configuration>
                            】

        Changes in mapred-site.xml (the mapreduce.* properties go in mapred-site.xml, not yarn-site.xml; if the file does not exist, copy it from mapred-site.xml.template) 【

                            <configuration>
                                <property>
                                    <name>mapreduce.framework.name</name>
                                    <value>yarn</value>
                                </property>
                                <property>
                                    <name>mapreduce.jobhistory.address</name>
                                    <value>huayi1.org:10020</value>
                                </property>
                                <property>
                                    <name>mapreduce.jobhistory.webapp.address</name>
                                    <value>huayi1.org:19888</value>
                                </property>
                            </configuration>
                            】
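
        Note: a quick sanity check of the edited values (a sketch, run from /opt/hadoop/hadoop-2.5.0 and assuming the configuration above):

        bin/hdfs getconf -confKey fs.defaultFS      # expect hdfs://huayi1.org:8020
        bin/hdfs getconf -confKey dfs.replication   # expect 3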

                           

STEP 4-4

        Create the cluster temp-data directory 【mkdir tmp】

        Note: run this inside 【/opt/hadoop/hadoop-2.5.0/】
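
        Note: the same step with an absolute path, matching the hadoop.tmp.dir value set in core-site.xml:

        mkdir -p /opt/hadoop/hadoop-2.5.0/tmp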

 

STEP 5

        Copy Hadoop to the child node machines 【scp -r hadoop-2.5.0/ sang@huayi2.org:/opt/hadoop/】

        Note: run this from 【/opt/hadoop/】
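
        Note: the copy has to be made to every child node; a minimal sketch, assuming the same user (sang) and that /opt/hadoop already exists on huayi2.org and huayi3.org:

        cd /opt/hadoop
        for host in huayi2.org huayi3.org; do
            scp -r hadoop-2.5.0/ sang@$host:/opt/hadoop/
        done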

STEP 6

      Format the NameNode

       【bin/hadoop namenode -format】
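
       Note: formatting is done only once, on the NameNode host (huayi1.org), before the first start; in Hadoop 2.x the equivalent, non-deprecated form is:

       bin/hdfs namenode -format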

STEP 7

     Start Hadoop

     【sbin/start-all.sh】
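
     Note: a quick way to verify that the daemons came up; run jps on each machine (the expected processes follow from the configuration above):

     jps
     # huayi1.org: NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager
     # huayi2.org / huayi3.org: DataNode, NodeManager
     # HDFS web UI: http://huayi1.org:50070
     # the JobHistory server configured above is not started by start-all.sh; if needed:
     # sbin/mr-jobhistory-daemon.sh start historyserver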

 

 


