Hadoop Environment Setup (VM Installation, JDK, Passwordless SSH Login)

Host machine:

OS: Microsoft Windows 7 Ultimate Service Pack 1 (build 7601), 32-bit

CPU: Intel(R) Core(TM)2 Duo T5870 @ 2.00GHz, dual-core

RAM: 3.00 GB

Disk: HITACHI HTS543225L9SA00 (250 GB), 5922 power-on hours, 45°C

IP address: 192.168.0.17 (LAN)

 

1. Server Planning

Three virtual Linux servers, each with an 8 GB disk and 256 MB of RAM: one as the master and the other two as slaves, with a hadoop user created on each.

The master machine runs the NameNode and JobTracker roles, managing the distributed data and coordinating task execution;

the two slave machines run the DataNode and TaskTracker roles, handling distributed data storage and task execution.

 

1) master node:

VM name: Red Hat_1_Master

hostname: master.hadoop

IP: 192.168.70.101

2) slave01 node:

VM name: Red Hat_2_Slave01

hostname: slave01.hadoop

IP: 192.168.70.102

3) slave02 node:

VM name: Red Hat_3_Slave02

hostname: slave02.hadoop

IP: 192.168.70.103

 

2. Installing the Virtual Machines

Virtualization software: VMware Workstation 9.0

Guest OS: rhel-server-6.4-i386

The installation itself is omitted here; a word on cloning: after installing one Red Hat system, the other two virtual systems can be created with VMware's clone feature.

Steps:

Power off Red Hat_1_Master, then right-click it and choose Manage --> Clone.

Click Next and select "Create a full clone".

Name the new virtual machine, choose a directory, and finish the clone.



3. IP and hosts Configuration (as root)

Configure the IPs according to the plan in section 1.

Note: on cloned systems, eth0 keeps the same MAC address as the original, so first change the MAC on the two clones:

vi /etc/udev/rules.d/70-persistent-net.rules

Delete the eth0 entry, rename the eth1 entry to eth0, then save and exit;

vi /etc/sysconfig/network-scripts/ifcfg-eth0

then set the HWADDR value to the new eth0 MAC from the edited rules file, and that's it.
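To double-check that the two files now agree (a quick sanity check; the MAC value is whatever your clone received):

grep 'ATTR{address}' /etc/udev/rules.d/70-persistent-net.rules
grep 'HWADDR' /etc/sysconfig/network-scripts/ifcfg-eth0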

1. Configure the hosts file

vi /etc/hosts

In the hosts file on every node, add:

192.168.70.101  master.hadoop
192.168.70.102  slave01.hadoop
192.168.70.103  slave02.hadoop

Save and exit.
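To confirm the names resolve (getent reads /etc/hosts, so this works even before the network is up):

getent hosts master.hadoop slave01.hadoop slave02.hadoop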

2. Set a static IP

 

vi /etc/sysconfig/network-scripts/ifcfg-eth0

 

DEVICE="eth0"
BOOTPROTO=static    #靜態IP
HWADDR="00:0c:29:cd:32:a1"
IPADDR=192.168.70.102    #IP地址
NETMASK=255.255.255.0   #子網掩碼
GATEWAY=192.168.70.2    #默認網關
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="dd883cbe-f331-4f3a-8972-fd0e24e94704"

 

 

Save and exit; do the same on the other two machines (each with its own IP and MAC).

3. Set the hostname

 

vi /etc/sysconfig/network

 

NETWORKING=yes
HOSTNAME=slave01.hadoop    # hostname
GATEWAY=192.168.70.2    # default gateway

 

Save and exit; make the same change on the other two machines (each with its own hostname).

 

Restart the network service, check the IPs, and make sure the host machine and the three VMs can all ping each other.
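For example, on each node (IPs per the plan in section 1):

service network restart
ifconfig eth0           # confirm the static address took effect
ping -c 2 192.168.70.101
ping -c 2 192.168.70.102
ping -c 2 192.168.70.103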

 

4. Installing the JDK (jdk-6u45-linux-i586.bin)

Upload it from the host machine to the /usr directory on the master node via sftp.
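A typical session might look like this (a sketch; any sftp-capable client on Windows, e.g. psftp or WinSCP, works the same way):

sftp [email protected]
sftp> cd /usr
sftp> put jdk-6u45-linux-i586.bin
sftp> bye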

 

chmod +x jdk-6u45-linux-i586.bin
./jdk-6u45-linux-i586.bin

 

This unpacks a jdk1.6.0_45 directory in the current directory. Next, set the system-wide environment variables (effective for all users):

 

vi /etc/profile

 

Append at the end:

 

JAVA_HOME=/usr/jdk1.6.0_45
export JAVA_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export CLASSPATH

 

 

Save and exit; apply the same configuration on the other two machines.
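Rather than re-running the installer on each slave, one shortcut (assuming root SSH between the nodes is allowed) is to copy the unpacked JDK and the updated profile from master:

scp -r /usr/jdk1.6.0_45 root@slave01.hadoop:/usr/
scp /etc/profile root@slave01.hadoop:/etc/profile
scp -r /usr/jdk1.6.0_45 root@slave02.hadoop:/usr/
scp /etc/profile root@slave02.hadoop:/etc/profile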

Load the environment variables:

source /etc/profile

Test:

java -version
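Expected output is along these lines (build strings assumed for 6u45 on 32-bit):

java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)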

 

5. Passwordless SSH Login

1. Enable key authentication (as root)

[root@master usr]# vi /etc/ssh/sshd_config

 

# uncomment these lines
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys
#AuthorizedKeysCommand none
#AuthorizedKeysCommandRunAs nobody
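
After saving, restart sshd so the change takes effect (standard on RHEL 6):

service sshd restart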

2. Generate the public/private key pair (as hadoop)

[master@master ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /home/master/.ssh/id_dsa.
Your public key has been saved in /home/master/.ssh/id_dsa.pub.
The key fingerprint is:
f5:a6:3b:8f:cf:dd:e7:94:1f:64:37:f4:44:b1:71:e8 [email protected]
The key's randomart image is:
+--[ DSA 1024]----+
|               ++|
|              ..+|
|          .  . o.|
|         . .  E..|
|        S   o  +o|
|           o  o +|
|          .    o.|
|          .+ . o+|
|          o++ ..=|
+-----------------+
[master@master ~]$ 

 

   

id_dsa.pub is the public key; id_dsa is the private key.

Append the public key into an authorized_keys file in the .ssh directory:

[master@master ~]$ cd .ssh/
[master@master .ssh]$ ls
id_dsa  id_dsa.pub  known_hosts
[master@master .ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[master@master .ssh]$ ll
total 16
-rw-------. 1 master master  610 Oct 21 06:49 authorized_keys
-rw-------. 1 master master  668 Oct 21 06:48 id_dsa
-rw-r--r--. 1 master master  610 Oct 21 06:48 id_dsa.pub
-rw-r--r--. 1 master master 1181 Oct 21 06:50 known_hosts

 

 

3. Test the login

[hadoop@master .ssh]$ ssh localhost

The expectation is to log in without being asked for a password; no such luck:

[hadoop@master .ssh]$ ssh localhost
hadoop@localhost's password: 

Find the cause:

[hadoop@master .ssh]$ ssh -v localhost
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to localhost [::1] port 22.
debug1: Connection established.
debug1: identity file /home/hadoop/.ssh/identity type -1
debug1: identity file /home/hadoop/.ssh/id_rsa type -1
debug1: identity file /home/hadoop/.ssh/id_dsa type 2
...
...
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Credentials cache file '/tmp/krb5cc_500' not found

debug1: Unspecified GSS failure.  Minor code may provide more information
Credentials cache file '/tmp/krb5cc_500' not found

debug1: Unspecified GSS failure.  Minor code may provide more information


debug1: Unspecified GSS failure.  Minor code may provide more information
Credentials cache file '/tmp/krb5cc_500' not found

debug1: Next authentication method: publickey
debug1: Trying private key: /home/hadoop/.ssh/identity
debug1: Trying private key: /home/hadoop/.ssh/id_rsa
debug1: Offering public key: /home/hadoop/.ssh/id_dsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: password
hadoop@localhost's password:

Switch to root and check the log:

[root@master ~]# tail -30 /var/log/secure
Sep  7 05:57:59 master sshd[3960]: Received signal 15; terminating.
Sep  7 05:57:59 master sshd[4266]: Server listening on 0.0.0.0 port 22.
Sep  7 05:57:59 master sshd[4266]: Server listening on :: port 22.
Sep  7 05:58:05 master sshd[4270]: Accepted password for root from ::1 port 44995 ssh2
Sep  7 05:58:06 master sshd[4270]: pam_unix(sshd:session): session opened for user root by (uid=0)
Sep  7 06:06:43 master su: pam_unix(su:session): session opened for user hadoop by root(uid=0)
Sep  7 06:07:09 master sshd[4395]: Authentication refused: bad ownership or modes for file /home/hadoop/.ssh/authorized_keys
Sep  7 06:07:09 master sshd[4395]: Authentication refused: bad ownership or modes for file /home/hadoop/.ssh/authorized_keys

Fix the permissions:

chmod 600 ~/.ssh/authorized_keys
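sshd's strict-modes check covers the directory as well as the file, so it is worth tightening ~/.ssh itself too (a precaution, not shown in the log above):

chmod 700 ~/.ssh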

Test again; it passes.

[root@master .ssh]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 10:e8:ce:b5:2f:d8:e7:18:82:5a:92:06:6a:0d:6d:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[root@master ~]# 

 

4. Distribute the public key to the slave nodes, so master can log in to the slaves without a password

[hadoop@master .ssh]$ scp id_dsa.pub [email protected]:/home/hadoop/master.id_dsa.pub

[hadoop@master .ssh]$ scp id_dsa.pub [email protected]:/home/hadoop/master.id_dsa.pub

5. On the slave nodes, append master's public key to authorized_keys

[hadoop@slave02 ~]$ cat master.id_dsa.pub >> .ssh/authorized_keys
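If ~/.ssh does not exist yet on a slave (no key has been generated there), create it before the append, and apply the same strict permissions afterwards:

mkdir -p ~/.ssh && chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys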

6. SSH from master to the slaves

[hadoop@master .ssh]$ ssh slave01.hadoop
Last login: Sat Sep  7 06:15:35 2013 from localhost
[hadoop@slave01 ~]$ exit
logout
Connection to slave01.hadoop closed.
[hadoop@master .ssh]$ 
[hadoop@master .ssh]$ 
[hadoop@master .ssh]$ ssh slave02.hadoop
Last login: Sat Sep  7 06:16:24 2013 from localhost
[hadoop@slave02 ~]$ exit
logout
Connection to slave02.hadoop closed.

No password needed anymore; done.

Note: what we have now is master logging in to the slaves without a password. To make every node trust every other node, collect all the nodes' public keys into one temporary file and then overwrite each node's authorized_keys with its contents, which saves a lot of distributing and appending. A sketch follows below.
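A rough sketch of that idea, run from master as hadoop (it assumes every node has already generated its own id_dsa.pub, and this first round of ssh/scp will still prompt for passwords):

cat ~/.ssh/id_dsa.pub > /tmp/all_keys
ssh hadoop@slave01.hadoop cat .ssh/id_dsa.pub >> /tmp/all_keys
ssh hadoop@slave02.hadoop cat .ssh/id_dsa.pub >> /tmp/all_keys
cp /tmp/all_keys ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
scp /tmp/all_keys hadoop@slave01.hadoop:.ssh/authorized_keys
scp /tmp/all_keys hadoop@slave02.hadoop:.ssh/authorized_keys
ssh hadoop@slave01.hadoop chmod 600 .ssh/authorized_keys
ssh hadoop@slave02.hadoop chmod 600 .ssh/authorized_keys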

 
