Hadoop standalone test, pseudo-distributed, and fully distributed setup

I. Prerequisites

Three virtual machines running RHEL 7.3:

server1

server2

server3

II. Hadoop standalone test

Standalone mode is Hadoop's default mode. It runs on a single machine without a distributed file system, reading and writing the local operating system's file system directly.

1. Create the hadoop user:

[root@server1 ~]# useradd hadoop               
[root@server1 ~]# id hadoop              
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@server1 ~]# passwd hadoop              
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@server1 ~]# 

2. Install Hadoop and the JDK

[root@server1 ~]# mv * /home/hadoop/   # move the installation tarballs into hadoop's home
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz 
[hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz 
[hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
[hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ ls
[hadoop@server1 hadoop]$ cd etc/
[hadoop@server1 etc]$ ls
hadoop
[hadoop@server1 etc]$ cd hadoop/
[hadoop@server1 hadoop]$ ls
[hadoop@server1 hadoop]$ vim hadoop-env.sh

 54 export JAVA_HOME=/home/hadoop/java

Configure the environment variables:

[hadoop@server1 hadoop]$ cd
[hadoop@server1 ~]$ cd java/
[hadoop@server1 java]$ vim ~/.bash_profile   # configure environment variables
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin

[hadoop@server1 java]$ source ~/.bash_profile 
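
A quick sanity check that the PATH change took effect:

[hadoop@server1 java]$ which java   # should print /home/hadoop/java/bin/java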

[hadoop@server1 java]$ jps   # list running Java processes
10452 Jps

3. Test: run the bundled MapReduce grep example, which counts occurrences of strings matching 'dfs[a-z.]+':

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
[hadoop@server1 hadoop]$ ls input/
capacity-scheduler.xml  hdfs-site.xml    kms-site.xml
core-site.xml           httpfs-site.xml  mapred-site.xml
hadoop-policy.xml       kms-acls.xml     yarn-site.xml
[hadoop@server1 hadoop]$
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'

[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat part-r-00000 
1	dfsadmin

III. Pseudo-distributed mode

This mode also runs on a single machine, but it uses separate Java processes to imitate the various node types of a distributed deployment.

Nothing is actually computed across multiple machines, hence the name "pseudo-distributed".

Pseudo-distributed mode runs Hadoop as a "single-node cluster" in which all of the daemons run on the same machine. On top of standalone mode it adds room for debugging: you can inspect memory usage, HDFS input and output, and the interaction between daemons.

1. Edit the configuration files

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim core-site.xml 

 19 <configuration>
 20     <property>
 21         <name>fs.defaultFS</name>
 22         <value>hdfs://172.25.60.1:9000</value>
 23     </property>
 24 </configuration>

[hadoop@server1 hadoop]$ vim hdfs-site.xml 

 20     <property>
 21         <name>dfs.replication</name>
 22         <value>1</value>   ## one replica: this single node holds the only copy
 23     </property>
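
Note: HDFS keeps its metadata and blocks under /tmp by default (hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}), which is why /tmp is wiped before the fully distributed setup later. For a single-node test you could optionally pin it to a persistent directory in core-site.xml; the path below is illustrative, not part of the original setup:

[hadoop@server1 hadoop]$ vim core-site.xml

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-data</value>   ## illustrative path; default is /tmp/hadoop-${user.name}
    </property>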

2. For convenience, set up passwordless SSH

[hadoop@server1 hadoop]$ cd
[hadoop@server1 ~]$ ssh-keygen 
[hadoop@server1 ~]$ ssh-copy-id 172.25.60.1
[hadoop@server1 ~]$ ssh-copy-id localhost
[hadoop@server1 ~]$ ssh-copy-id server1

3. Format the NameNode and start the services

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$  cd sbin/
[hadoop@server1 sbin]$ ./start-dfs.sh 
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]
[hadoop@server1 sbin]$ jps
4417 DataNode
4314 NameNode
4602 SecondaryNameNode
4746 Jps

4. Open a browser. Port 9000 (from core-site.xml) is the HDFS RPC address, not a web page; the NameNode web UI is at:

http://172.25.60.1:9870
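
If no browser is handy, a rough check from the shell (assuming curl is installed) that the UI answers:

[hadoop@server1 hadoop]$ curl -s -o /dev/null -w '%{http_code}\n' http://172.25.60.1:9870/   # 200 means the NameNode UI is up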

5. Test: create a directory and upload files

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-05-23 02:41 input

Refresh the browser:

Click: Utilities --> Browse the file system

Click: user --> hadoop --> input
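
The same listing is available from the command line:

[hadoop@server1 hadoop]$ bin/hdfs dfs -ls /user/hadoop/input   # same files as shown in the web UI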

6. Delete the local input and output directories, then rerun the job

[hadoop@server1 hadoop]$ rm -fr input/ output/
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin

[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount input output
# counts how many times each word appears in the input files

This time input and output do not appear in the current local directory; they live in the distributed file system and can be seen in the web UI.

Refresh the browser.

Click: output --> _SUCCESS (to view the result, you can click to download it)

Checking from the command line:

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*

[hadoop@server1 hadoop]$ bin/hdfs dfs -get output   # fetch the output directory from the distributed file system
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat part-r-00000 

IV. Fully distributed mode

A truly distributed deployment: a cluster formed from three or more physical or virtual machines, with the Hadoop daemons running across the cluster.

1. Clear out the old data (HDFS keeps it under /tmp by default)

[hadoop@server1 hadoop]$ sbin/stop-dfs.sh 
Stopping namenodes on [server1]
Stopping datanodes
Stopping secondary namenodes [server1]
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop  hadoop-hadoop  hsperfdata_hadoop
[hadoop@server1 tmp]$ rm -fr *
[hadoop@server1 tmp]$ logout

2. Bring up two more virtual machines to act as the new nodes

[root@server2 ~]# useradd -u 1000 hadoop   # same uid/gid as on server1, so file ownership matches over NFS
[root@server3 ~]# useradd -u 1000 hadoop

3. Install NFS

[root@server1 ~]# yum install -y nfs-utils
[root@server2 ~]# yum install -y nfs-utils
[root@server3 ~]# yum install -y nfs-utils

[root@server1 ~]# systemctl start rpcbind
[root@server1 ~]# systemctl is-enabled rpcbind
indirect

[root@server2 ~]# systemctl start rpcbind
[root@server3 ~]# systemctl start rpcbind

4. On server1, start the NFS server and configure the export

[root@server1 ~]# systemctl start nfs-server
[root@server1 ~]# vim /etc/exports
/home/hadoop   *(rw,anonuid=1000,anongid=1000)   # map anonymous access to uid/gid 1000 (hadoop)

[root@server1 ~]# exportfs -rv
exporting *:/home/hadoop

[root@server1 ~]# exportfs -v
/home/hadoop  	<world>(rw,wdelay,root_squash,no_subtree_check,anonuid=1000,anongid=1000,sec=sys,rw,secure,root_squash,no_all_squash)

5. On server2/server3, mount the NFS share

[root@server2 ~]# showmount -e server1
Export list for server1:
/home/hadoop *

[root@server2 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
[root@server2 ~]# df
172.25.60.1:/home/hadoop  17811456 2817792  14993664  16% /home/hadoop

[root@server3 ~]# showmount -e server1
Export list for server1:
/home/hadoop *
[root@server3 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
[root@server3 ~]# df

6. Passwordless SSH between the nodes now works out of the box (because the home directory, including ~/.ssh, is the shared NFS mount)

[hadoop@server1 ~]$ ssh 172.25.60.2
Last login: Thu May 23 06:00:34 2019 from server1
[hadoop@server2 ~]$ ssh 172.25.60.3
Last login: Thu May 23 06:07:42 2019 from server1
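A quick way to confirm that the nodes really do share the same key material:

[hadoop@server2 ~]$ ls ~/.ssh   # same id_rsa / authorized_keys as on server1, via the shared mount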

7. Edit the configuration files again

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        output      sbin
etc  lib      LICENSE.txt  NOTICE.txt  README.txt  share
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim workers   # lists the hosts that run DataNodes

172.25.60.2
172.25.60.3

Change the replica count to match the two DataNodes:

[hadoop@server3 hadoop]$ vim hdfs-site.xml

        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>

Editing in one place is enough; every node sees the change, since the directory is shared over NFS:

8. Format the NameNode and start the services

[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]

On the worker nodes, jps shows the DataNode process:

[hadoop@server2 ~]$ jps
10834 Jps
10772 DataNode
[hadoop@server3 ~]$ jps
10842 Jps
10780 DataNode

9. Test: refresh the browser and click Datanodes; two nodes are listed. Then redo the upload and run the grep example:

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
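
To confirm from the shell that both DataNodes are serving the cluster, two standard checks (output elided here):

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep 'Live datanodes'   # expect: Live datanodes (2):
[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*   # grep results, as in the pseudo-distributed run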

