Single-Node Hadoop Installation Procedure

 

1.1.1 Environment Preparation

This guide builds the Hadoop platform on a single CentOS virtual server; the machine information is listed in Table 1:

      

Table 1  Host environment preparation

Name         Information
IP           10.1.1.20
hostname     master.hadoop

For convenience, the host-level settings that need to be modified are given below:

● IP address modification

The IP address is configured under the /etc/sysconfig/network-scripts/ directory; edit the ifcfg-eth0 file with vi so that it has the following content:

[root@master network-scripts]# cd /etc/sysconfig/network-scripts/

[root@master network-scripts]# cat ifcfg-eth0

DEVICE="eth0"

ONBOOT=yes

TYPE=Ethernet

BOOTPROTO=none

IPADDR=10.1.1.20

PREFIX=24

GATEWAY=10.1.1.1

DEFROUTE=yes

HWADDR=00:30:16:AF:00:D1
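
After editing, the new address does not take effect until the network service is restarted. A minimal check on a CentOS 6-style system, assuming the interface is eth0 as above:

[root@master network-scripts]# service network restart
[root@master network-scripts]# ip addr show eth0 | grep inet    # confirm 10.1.1.20/24 is assigned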

 

● hostname modification

The hostname is set in the /etc/sysconfig/network file; after the change it should read as follows:

[root@master network-scripts]# cat /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=master.hadoop
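
This file only takes effect at the next boot; to apply the hostname to the running system as well (a minimal sketch):

[root@master network-scripts]# hostname master.hadoop
[root@master network-scripts]# hostname       # should now print master.hadoop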

 

● Name resolution (hosts file) modification

Local name resolution is handled by the /etc/hosts file; after modification it reads as follows:

  

[root@master network-scripts]# cat /etc/hosts

10.1.1.20  master.hadoop   master

127.0.0.1       localhost.localdomain   localhost

 

 

● Environment test

Use ping to verify that master.hadoop is reachable:

[root@master network-scripts]# ping master.hadoop

PING master.hadoop (10.1.1.20) 56(84) bytes of data.

64 bytes from master.hadoop (10.1.1.20): icmp_seq=1 ttl=64 time=0.040 ms

64 bytes from master.hadoop (10.1.1.20): icmp_seq=2 ttl=64 time=0.016 ms

--- master.hadoop ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1467ms

rtt min/avg/max/mdev = 0.016/0.028/0.040/0.012 ms

 

1.1.2 Java Installation and Deployment

Hadoop requires a Java environment, generally Java 1.6 or later, so download the JDK from the official Java website; the download address is:

http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u25-download-346242.html

From this page, select jdk-6u25-linux-x64-rpm.bin; it can be downloaded after accepting the license agreement.

● Java installation

Copy the downloaded Java file to the /home directory on the master.hadoop host, then install it:

[root@master home]# chmod u+x jdk-6u25-linux-x64-rpm.bin

[root@master home]# ./jdk-6u25-linux-x64-rpm.bin

 

● Java configuration

After Java is installed, configure its environment variables by adding the following to /etc/profile:

[root@master home]# vi /etc/profile

JAVA_HOME=/usr/java/jdk1.6.0_25

CLASSPATH=.:$JAVA_HOME/lib

PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

export JAVA_HOME CLASSPATH PATH

 

After the environment variables are configured, make them take effect and verify them with the following command:

[root@master jdk1.6.0_25]# source /etc/profile
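
A quick sanity check that the JDK is visible through the new variables (assuming the RPM installed to /usr/java/jdk1.6.0_25 as above):

[root@master jdk1.6.0_25]# java -version      # should report java version "1.6.0_25"
[root@master jdk1.6.0_25]# echo $JAVA_HOME    # should print /usr/java/jdk1.6.0_25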

 

1.1.3 SSH Configuration

Configure SSH for passwordless, public-key-based login. The steps are: create a new hadoop account, generate an SSH key pair for that account, configure the authorized-keys file, and test the SSH login. The details are given below:

 

 

● Create the hadoop account

 

[root@master jdk1.6.0_25]# useradd hadoop   # create the account

[root@master jdk1.6.0_25]# passwd hadoop   # set the password
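
The SSH key below is generated as the hadoop user, so switch to that account first:

[root@master jdk1.6.0_25]# su - hadoop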

 

● Generate the key pair

[hadoop@master ~]$ ssh-keygen    # generate the SSH key pair; press Enter at each prompt

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

86:b5:d9:6a:ea:03:4e:5a:97:e5:24:5b:1f:65:41:89 [email protected]

The key's randomart image is:

+--[ RSA 2048]----+

|           ooo   |

|          E +    |

|        .  o     |

|      .o++.      |

|      .OS...     |

|    + +....      |

|   = o  o        |

|  . . .o         |

|     .o.         |

+-----------------+

[hadoop@master ~]$ cd .ssh/

[hadoop@master .ssh]$ ls

id_rsa  id_rsa.pub

 

● Configure authorization

[hadoop@master ~]$ cat ~/.ssh/id_rsa.pub  >> ~/.ssh/authorized_keys

[hadoop@master ~]$ chmod 700 ~/.ssh

[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys
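
As an alternative, on systems that ship ssh-copy-id the append-and-chmod steps can be done with one command (optional, shown only as a convenience):

[hadoop@master ~]$ ssh-copy-id hadoop@master.hadoop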

 

 

● Test

[hadoop@master jdk1.6.0_25]$ ssh master.hadoop

Last login: Wed Jun 13 18:29:29 2012 from master.hadoop

 

 

1.1.4 Hadoop Installation and Configuration

The Hadoop version used here is hadoop-0.20.2.tar.gz, available at:

http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-0.20.2/hadoop-0.20.2.tar.gz

● Hadoop installation

[root@master home]# tar xzvf hadoop-0.20.2.tar.gz

[root@master home]# mv hadoop-0.20.2 /usr/local

[root@master home]# cd /usr/local

[root@master local]# ls

bin  etc  games  hadoop-0.20.2  include  lib  lib64  libexec  sbin  share  src

[root@master local]# mv hadoop-0.20.2/ hadoop

[root@master local]# ls

bin  etc  games  hadoop  include  lib  lib64  libexec  sbin  share  src

[root@master local]# chown -R hadoop:hadoop /usr/local/hadoop/   # hand ownership to the hadoop user

● Environment variable configuration

Configure the Hadoop environment variables just as for Java by editing /etc/profile, and also adjust Hadoop's own environment file /usr/local/hadoop/conf/hadoop-env.sh, as shown below:

[root@master local]# vi /etc/profile

HADOOP_HOME=/usr/local/hadoop

HADOOP_CONF_DIR=$HADOOP_HOME/conf

CLASSPATH=.:$JAVA_HOME/lib:$HADOOP_HOME/lib

PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_HOME/bin

export HADOOP_HOME HADOOP_CONF_DIR CLASSPATH PATH

[root@master local]# source /etc/profile
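
A quick check that the Hadoop variables are in place (assuming the paths above):

[root@master local]# echo $HADOOP_HOME    # should print /usr/local/hadoop
[root@master local]# hadoop version       # should report Hadoop 0.20.2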

[root@master conf]# vi hadoop-env.sh

export JAVA_HOME=$JAVA_HOME                   # relies on JAVA_HOME being exported in the calling environment
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH"
export HADOOP_HEAPSIZE=2048                   # daemon heap size, in MB
export HADOOP_LOG_DIR=/var/local/logs         # custom log directory
export HADOOP_PID_DIR=/var/local/pids         # custom PID directory
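
The custom log and PID directories above are not created automatically; if they do not exist yet, create them before starting Hadoop, and make them writable by whichever user will start the daemons:

[root@master conf]# mkdir -p /var/local/logs /var/local/pids
[root@master conf]# chown -R hadoop:hadoop /var/local/logs /var/local/pids   # only needed if Hadoop runs as the hadoop user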

[root@master bin]# export JAVA_HOME

[root@master bin]# export HADOOP_HOME

[root@master bin]# export HADOOP_CONF_DIR

● Hadoop configuration files

Configure three XML files: core-site.xml, hdfs-site.xml, and mapred-site.xml, as shown below:

File: core-site.xml

[root@master conf]# vi core-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

     <property>

         <name>fs.default.name</name>

         <value>hdfs://localhost:9000</value>

     </property>

</configuration>

 

File: hdfs-site.xml

[root@master conf]# vi hdfs-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

     <property>

         <name>dfs.replication</name>

         <value>1</value>

     </property>

 

</configuration>

File: mapred-site.xml

[root@master conf]# vi mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

     <property>

         <name>mapred.job.tracker</name>

         <value>localhost:9001</value>

     </property>

 

</configuration>

● Format the Hadoop file system

Switch to the bin directory and run the hadoop executable to format the HDFS file system:

[root@master bin]# hadoop namenode -format
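
With the configuration above, hadoop.tmp.dir keeps its default of /tmp/hadoop-${user.name}, so the formatted NameNode metadata should appear under that path. A quick check, assuming the format was run as root:

[root@master bin]# ls /tmp/hadoop-root/dfs/name/current    # should list the freshly created fsimage and edits files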

 

● Start Hadoop

[root@master bin]# ./start-all.sh

starting namenode, logging to /var/local/logs/hadoop-root-namenode-master.hadoop.out

localhost: starting datanode, logging to /var/local/logs/hadoop-root-datanode-master.hadoop.out

localhost: starting secondarynamenode, logging to /var/local/logs/hadoop-root-secondarynamenode-master.hadoop.out

starting jobtracker, logging to /var/local/logs/hadoop-root-jobtracker-master.hadoop.out

localhost: starting tasktracker, logging to /var/local/logs/hadoop-root-tasktracker-master.hadoop.out
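
Once the daemons are up, HDFS status can be checked from the command line, and the NameNode and JobTracker web interfaces listen on their default ports for this release (50070 and 50030 respectively):

[root@master bin]# hadoop dfsadmin -report                  # capacity summary and number of live datanodes
[root@master bin]# curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/   # expect 200 if the NameNode web UI is up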

 

1.1.5 Hadoop Test

[root@master hadoop]# jps

2459 JobTracker

2284 DataNode

2204 NameNode

2860 Jps

2382 SecondaryNameNode

2575 TaskTracker
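
The jps listing above shows all five daemons of a single-node deployment: NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker. For a simple end-to-end test, the example jar bundled with this release can run WordCount on a small input (a minimal sketch; the input and output paths here are arbitrary):

[root@master hadoop]# hadoop fs -mkdir input
[root@master hadoop]# hadoop fs -put conf/core-site.xml input
[root@master hadoop]# hadoop jar hadoop-0.20.2-examples.jar wordcount input output
[root@master hadoop]# hadoop fs -cat output/part-*          # word counts for the uploaded file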

 
