Hadoop Single-Node Deployment Steps

Tags: pseudo-distributed, single-node, hadoop, big data


  • 1. Hadoop Overview
  • 2. Pre-Installation Preparation
  • 3. Hadoop Installation Steps
  • 4. Starting the Hadoop Daemons
  • 5. Testing the Configuration
      A simple wordcount test

Preface: A single-node Hadoop deployment involves the yarn, mapreduce, and hdfs daemons and a fair amount of configuration for each. The setup below has been tested end to end and everything works; corrections and suggestions are welcome.

1. Hadoop Overview

1.1 Common members of the Hadoop ecosystem:

    1.1.1 Yarn, MapReduce, HDFS
    1.1.2 HBase: a non-relational database built on top of HDFS
    1.1.3 Hive: offers SQL-style queries; effectively a client layer for users who do not know Java
    1.1.4 Oozie: workflow scheduling and coordination framework
    1.1.5 Zookeeper: a highly available framework for distributed data management and system coordination
    1.1.6 Flume: collects log files and ships them into an HDFS cluster
    1.1.7 Sqoop: imports tables from relational databases into HDFS

1.2 The four main modules of Hadoop

    The project includes these modules:
    1.2.1 Hadoop Common: The common utilities that support the other Hadoop modules.
    1.2.2 Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data. (Design philosophy: write once, read many times. It consists of two daemons: the namenode, which stores metadata recording where each piece of data lives, and the datanode, which stores the data itself.)
    1.2.3 Hadoop YARN: A framework for job scheduling and cluster resource management. (Manages resources such as CPU, memory, and virtual cores; consists of the resourcemanager and nodemanager daemons.)
    1.2.4 Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
    Note: see the official site for details: http://hadoop.apache.org/

1.3 Hadoop installation modes

    1.3.1 Standalone mode
    1.3.2 Pseudo-distributed mode (single node)
    1.3.3 Fully distributed mode

1.4 Hadoop distributions

    1.4.1 Apache Hadoop: Hadoop 2.0 development was led mainly by Hortonworks, a company spun off from Yahoo; it is a free project.
    1.4.2 Cloudera Hadoop: Cloudera's distribution, known as CDH (Cloudera Distribution including Apache Hadoop).

2. Pre-Installation Preparation

2.1 Server

Server IP     CPU   Memory   Username   Password
192.168.x.x   10c   24G      hadoop     xxxx

2.2 Changing the hostname

    The hostname can be set to whatever suits your environment.
    To change it (as root): vi /etc/sysconfig/network
    Set HOSTNAME to the desired name, e.g. HOSTNAME=xxx
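    Editing /etc/sysconfig/network only takes effect after a reboot. As a sketch (assuming a CentOS 6 style system, which is what this file implies), the name can also be applied to the running session:

//set the hostname for the current session
[root@xxx ~]# hostname xxx
//verify
[root@xxx ~]# hostname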

2.3 Creating a regular user

    Create a hadoop user to run the deployment:
    //create the user
    [root@xxx ~]# useradd hadoop
    //set its password
    [root@xxx ~]# echo hadoop | passwd --stdin hadoop
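
    An optional sanity check that the account was created:

[root@xxx ~]# id hadoop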

2.4 Disabling the firewall and selinux

    Stop the firewall: service iptables stop
    Keep it off across reboots: chkconfig iptables off
    Disable selinux: vi /etc/sysconfig/selinux
    Set the SELINUX value to disabled, i.e. SELINUX=disabled
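    A quick way to verify both changes (a sketch, assuming the CentOS 6 tools used above; setenforce only affects the running session, while the file edit covers reboots):

//confirm iptables is stopped
[root@xxx ~]# service iptables status
//switch selinux off immediately and confirm
[root@xxx ~]# setenforce 0
[root@xxx ~]# getenforce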

2.5 Editing /etc/hosts

    vi /etc/hosts
    Add a line of the form (ip hostname), e.g.: 192.168.x.x xxx

2.6 Changing the hadoop user's home directory

    [root@xxx ~]# vi /etc/passwd
    Point the hadoop user's home directory at a filesystem with plenty of free space
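
    Editing /etc/passwd by hand works, but the same change can be made with usermod; the path below is only an example, and the hadoop user must not be logged in while it runs:

//change the home directory and move its contents in one step
[root@xxx ~]# usermod -d /data/hadoop -m hadoop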

2.7 Installing the JDK

    2.7.1 Download the JDK and upload it to the server
    2.7.2 Extract it:

[hadoop@xxx software]$ tar zxvf jdk-7u67-linux-x64.tar.gz -C /data/hadoop/modules/

    2.7.3 Set the environment variables

vi .bash_profile
JAVA_HOME=/data/hadoop/modules/jdk1.7.0_67
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH
Then reload it: source .bash_profile
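
Note that because $JAVA_HOME/bin is appended to the end of PATH, the preinstalled OpenJDK still takes precedence until it is removed in the next step. Once it is gone, confirm the intended JDK answers:

[hadoop@xxx ~]$ java -version
java version "1.7.0_67"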

    2.7.4 Remove the preinstalled JDK

//list the JDK packages that are already installed
[hadoop@xxx ~]$ rpm -qa | grep java
//remove them (requires root)
[root@xxx ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64 tzdata-java-2013g-1.el6.noarch java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64

3. Hadoop Installation Steps
3.1 Download the Hadoop package
    Download from http://hadoop.apache.org/releases.html (this guide installs 2.5.0) and upload it to the server
3.2 Extract the archive

[hadoop@xxx software]$ tar zxvf hadoop-2.5.0.tar.gz  -C /data/hadoop/modules
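
A quick sanity check that the archive extracted cleanly:

[hadoop@xxx modules]$ cd hadoop-2.5.0
[hadoop@xxx hadoop-2.5.0]$ bin/hadoop version
Hadoop 2.5.0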

3.3 Using Notepad++
    Configuration files can be edited remotely via the plugin: Plugins -> NppFTP -> Show NppFTP Window. Tool download: http://pan.baidu.com/s/1mizCfpQ

3.4 Configuration changes:
    3.4.1 Environment variables

In hadoop-env.sh set: export JAVA_HOME=/data/hadoop/modules/jdk1.7.0_67
In yarn-env.sh set:   export JAVA_HOME=/data/hadoop/modules/jdk1.7.0_67
In mapred-env.sh set: export JAVA_HOME=/data/hadoop/modules/jdk1.7.0_67

core-site.xml:

    <property>
        <name>fs.defaultFS</name>
        <!--entry-point URI for accessing the cluster-->
        <value>hdfs://xxx:8020</value> 
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <!--base directory where Hadoop stores its data-->
        <value>/data/hadoop/modules/hadoop-2.5.0/data</value>
    </property>
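
hadoop.tmp.dir should point at an existing, writable directory; it is safest to create it up front (the path is taken from the value above):

[hadoop@xxx hadoop-2.5.0]$ mkdir -p /data/hadoop/modules/hadoop-2.5.0/data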

hdfs-site.xml:

    <property>
        <!--replication defaults to 3 for data safety; 1 is enough on a single node-->
        <name>dfs.replication</name>
        <value>1</value>    
    </property>

    <property>
        <name>dfs.namenode.http-address</name>
        <!--host that serves the namenode web UI-->
        <value>xxx:50070</value>
    </property>

yarn-site.xml

    <property>
        <!--which host runs the resourcemanager-->
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-senior.ibeifeng.com</value>
    </property>

    <property>
        <!--log aggregation: upload application logs to HDFS-->
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <!--retention period for aggregated logs, in seconds-->
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

mapred-site.xml

    <property>
        <!--web UI for browsing job history-->
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-senior.ibeifeng.com:19888</value>
    </property>
    <!--start command: sbin/mr-jobhistory-daemon.sh start historyserver-->

     <property>
        <!--run MapReduce on yarn-->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
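
The 2.5.0 tarball ships only a template for this file; if your etc/hadoop directory matches that layout, create mapred-site.xml from it before editing:

[hadoop@xxx hadoop-2.5.0]$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml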

These settings are documented in the official guide: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-common/SingleCluster.html

4. Starting the Hadoop Daemons

4.1 Starting HDFS

    4.1.1 Before its first start, HDFS must be formatted (do this only once; reformatting later discards the existing namespace, and data already on the datanodes will no longer match it):

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs namenode -format

    4.1.2 Start the namenode and datanode:

[hadoop@xxx hadoop-2.5.0]$ sbin/hadoop-daemon.sh start namenode
[hadoop@xxx hadoop-2.5.0]$ sbin/hadoop-daemon.sh start datanode

    4.1.3 Use jps to confirm the daemons are running

[hadoop@xxx hadoop-2.5.0]$ jps
101865 DataNode
101753 NameNode

    4.1.4 HDFS web UI
http://137.32.126.106:50070
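
If the web UI is unreachable (a firewall in between, for instance), the same liveness information is available from the command line:

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfsadmin -report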

4.2 Starting the yarn daemons
    4.2.1 Start the resourcemanager

[hadoop@xxx hadoop-2.5.0]$ sbin/yarn-daemon.sh start resourcemanager

    4.2.2 Start the nodemanager

[hadoop@xxx hadoop-2.5.0]$ sbin/yarn-daemon.sh start nodemanager
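
The JobHistoryServer that appears in the jps output below is not started by the two scripts above; it is launched separately, with the command already noted in mapred-site.xml:

[hadoop@xxx hadoop-2.5.0]$ sbin/mr-jobhistory-daemon.sh start historyserver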

    4.2.3 Check the yarn daemons with jps

[hadoop@xxx hadoop-2.5.0]$ jps
126366 NameNode
130149 NodeManager
130732 JobHistoryServer
5774 Jps
129891 ResourceManager
126477 DataNode
[hadoop@xxx hadoop-2.5.0]$

    4.2.4 Web UI address
http://137.32.126.106:8088/cluster

5. Testing the Configuration

A simple wordcount test
5.1 Create a directory on HDFS

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -mkdir /input

5.2 Upload a test file

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -put sort.txt /input
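
The put above assumes sort.txt already exists in the current directory. Its real contents are not reproduced here; a hypothetical file consistent with the word counts in section 5.5 could be created like this:

[hadoop@xxx hadoop-2.5.0]$ cat > sort.txt <<'EOF'
hadoop yarn hadoop
mapreduce hadoop
yarn verion
hadoop mapreduce
yarn
EOF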

5.3 View the uploaded file

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -cat  /input/sort.txt

5.4 Run the wordcount job

[hadoop@xxx hadoop-2.5.0]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/sort.txt /output
17/05/31 13:19:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/31 13:19:35 INFO client.RMProxy: Connecting to ResourceManager at xxx/137.32.126.106:8032
17/05/31 13:19:36 INFO input.FileInputFormat: Total input paths to process : 1
17/05/31 13:19:36 INFO mapreduce.JobSubmitter: number of splits:1
17/05/31 13:19:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1496207600784_0001
17/05/31 13:19:37 INFO impl.YarnClientImpl: Submitted application application_1496207600784_0001
17/05/31 13:19:37 INFO mapreduce.Job: The url to track the job: http://xxx:8088/proxy/application_1496207600784_0001/
17/05/31 13:19:37 INFO mapreduce.Job: Running job: job_1496207600784_0001
17/05/31 13:19:46 INFO mapreduce.Job: Job job_1496207600784_0001 running in uber mode : false
17/05/31 13:19:46 INFO mapreduce.Job:  map 0% reduce 0%
17/05/31 13:19:52 INFO mapreduce.Job:  map 100% reduce 0%
17/05/31 13:19:59 INFO mapreduce.Job:  map 100% reduce 100%
17/05/31 13:19:59 INFO mapreduce.Job: Job job_1496207600784_0001 completed successfully
17/05/31 13:19:59 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=59
                FILE: Number of bytes written=193963
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=166
                HDFS: Number of bytes written=37
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3925
                Total time spent by all reduces in occupied slots (ms)=4307
                Total time spent by all map tasks (ms)=3925
                Total time spent by all reduce tasks (ms)=4307
                Total vcore-seconds taken by all map tasks=3925
                Total vcore-seconds taken by all reduce tasks=4307
                Total megabyte-seconds taken by all map tasks=4019200
                Total megabyte-seconds taken by all reduce tasks=4410368
        Map-Reduce Framework
                Map input records=5
                Map output records=10
                Map output bytes=110
                Map output materialized bytes=59
                Input split bytes=96
                Combine input records=10
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=59
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=70
                CPU time spent (ms)=2390
                Physical memory (bytes) snapshot=434692096
                Virtual memory (bytes) snapshot=1822851072
                Total committed heap usage (bytes)=402653184
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=70
        File Output Format Counters 
                Bytes Written=37
[hadoop@xxx hadoop-2.5.0]$ 

5.5 View the results

[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -cat /output/part-r-00000
17/05/31 15:38:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop  4
mapreduce       2
verion  1
yarn    3

5.6 Viewing logs
    .log: written via log4j; holds most of the application log messages
    .out: captures stdout and stderr; usually only a few lines
    Log directory: /data/hadoop/modules/hadoop-2.5.0/logs
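
For example, to follow the namenode log in real time (daemon log files are named hadoop-<user>-<daemon>-<hostname>.log, so the exact name depends on your user and host):

[hadoop@xxx hadoop-2.5.0]$ tail -f logs/hadoop-hadoop-namenode-xxx.log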

5.7 Common hdfs shell commands (usage examples follow the list)
    -ls
    -put …            upload
    -cat / -text      view file contents
    -mkdir [-p]
    -mv
    -cp
    -du
    -chmod
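
A few illustrative invocations of the commands above (the paths are examples only):

//list a directory
[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -ls /input
//create nested directories in one call
[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -mkdir -p /tmp/a/b
//show the space used under each entry of /
[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -du /
//make a file world-readable
[hadoop@xxx hadoop-2.5.0]$ bin/hdfs dfs -chmod 644 /input/sort.txt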
