HBase 2.2.2 Source Build and Installation

Building

  • Download the HBase 2.2.2 source code

    https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/2.2.2/hbase-2.2.2-src.tar.gz

  • Install Maven and configure M2_HOME

JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home
M2_HOME=/Users/admin/Software/apache-maven-3.6.3
PATH=$PATH:$JAVA_HOME/bin:$M2_HOME/bin
CLASSPATH=.
export PATH
export JAVA_HOME
export M2_HOME
export CLASSPATH
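
A quick sanity check (paths follow the macOS build machine above; adjust for your own environment) confirms that Maven and the JDK are picked up:

localhost:~ jiangzz$ mvn -v        # should report Maven 3.6.3 and Java 1.8.0_221
localhost:~ jiangzz$ echo $M2_HOME
/Users/admin/Software/apache-maven-3.6.3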
  • Modify hbase-2.2.2/pom.xml and set the Hadoop 2 version:
 <hadoop-two.version>2.9.2</hadoop-two.version>
  • From the extracted HBase source directory, run:
localhost:hbase-2.2.2 jiangzz$ mvn clean package -DskipTests assembly:single
... (a long wait) ...
[INFO] Reactor Summary for Apache HBase 2.2.2:
[INFO] 
[INFO] Apache HBase ....................................... SUCCESS [  1.541 s]
[INFO] Apache HBase - Checkstyle .......................... SUCCESS [  0.375 s]
[INFO] Apache HBase - Annotations ......................... SUCCESS [  0.704 s]
[INFO] Apache HBase - Build Configuration ................. SUCCESS [  0.038 s]
[INFO] Apache HBase - Shaded Protocol ..................... SUCCESS [ 13.989 s]
[INFO] Apache HBase - Common .............................. SUCCESS [  4.878 s]
[INFO] Apache HBase - Metrics API ......................... SUCCESS [  0.590 s]
[INFO] Apache HBase - Hadoop Compatibility ................ SUCCESS [  0.768 s]
[INFO] Apache HBase - Metrics Implementation .............. SUCCESS [  0.695 s]
[INFO] Apache HBase - Hadoop Two Compatibility ............ SUCCESS [  1.237 s]
[INFO] Apache HBase - Protocol ............................ SUCCESS [  4.023 s]
[INFO] Apache HBase - Client .............................. SUCCESS [  3.999 s]
[INFO] Apache HBase - Zookeeper ........................... SUCCESS [  0.948 s]
[INFO] Apache HBase - Replication ......................... SUCCESS [  0.720 s]
[INFO] Apache HBase - Resource Bundle ..................... SUCCESS [  0.075 s]
[INFO] Apache HBase - HTTP ................................ SUCCESS [  2.180 s]
[INFO] Apache HBase - Procedure ........................... SUCCESS [  1.286 s]
[INFO] Apache HBase - Server .............................. SUCCESS [ 15.627 s]
[INFO] Apache HBase - MapReduce ........................... SUCCESS [  2.975 s]
[INFO] Apache HBase - Testing Util ........................ SUCCESS [  1.647 s]
[INFO] Apache HBase - Thrift .............................. SUCCESS [  4.826 s]
[INFO] Apache HBase - RSGroup ............................. SUCCESS [  2.041 s]
[INFO] Apache HBase - Shell ............................... SUCCESS [  1.708 s]
[INFO] Apache HBase - Coprocessor Endpoint ................ SUCCESS [  2.231 s]
[INFO] Apache HBase - Integration Tests ................... SUCCESS [  2.664 s]
[INFO] Apache HBase - Rest ................................ SUCCESS [  2.785 s]
[INFO] Apache HBase - Examples ............................ SUCCESS [  1.976 s]
[INFO] Apache HBase - Shaded .............................. SUCCESS [  0.111 s]
[INFO] Apache HBase - Shaded - Client (with Hadoop bundled) SUCCESS [ 10.319 s]
[INFO] Apache HBase - Shaded - Client ..................... SUCCESS [  6.292 s]
[INFO] Apache HBase - Shaded - MapReduce .................. SUCCESS [  8.319 s]
[INFO] Apache HBase - External Block Cache ................ SUCCESS [  0.847 s]
[INFO] Apache HBase - HBTop ............................... SUCCESS [  0.867 s]
[INFO] Apache HBase - Assembly ............................ SUCCESS [ 40.614 s]
[INFO] Apache HBase - Shaded - Testing Util ............... SUCCESS [ 24.383 s]
[INFO] Apache HBase - Shaded - Testing Util Tester ........ SUCCESS [  0.998 s]
[INFO] Apache HBase Shaded Packaging Invariants ........... SUCCESS [  0.764 s]
[INFO] Apache HBase Shaded Packaging Invariants (with Hadoop bundled) SUCCESS [  0.418 s]
[INFO] Apache HBase - Archetypes .......................... SUCCESS [  0.017 s]
[INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [  1.208 s]
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS [  1.061 s]
[INFO] Apache HBase - Archetype builder ................... SUCCESS [  0.342 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  02:54 min
[INFO] Finished at: 2020-01-06T22:01:37+08:00
[INFO] ------------------------------------------------------------------------
  • When the build completes, the binary distribution is at hbase-assembly/target/hbase-2.2.2-bin.tar.gz.
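
Before copying it to the server, you can confirm the artifact exists, for example:

localhost:hbase-2.2.2 jiangzz$ ls -lh hbase-assembly/target/hbase-2.2.2-bin.tar.gz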

HBase Installation

HDFS Base Environment (Storage)

1. Install the JDK and configure the JAVA_HOME environment variable

[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8-2000:1.8.0_171-fcs        ################################# [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
[root@CentOS ~]# vi .bashrc

JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export CLASSPATH
export PATH  

[root@CentOS ~]# source .bashrc 
[root@CentOS ~]# jps
1933 Jps
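
As an optional check, confirm the JDK version installed by the RPM and that JAVA_HOME resolves:

[root@CentOS ~]# java -version      # should report 1.8.0_171 for the RPM installed above
[root@CentOS ~]# echo $JAVA_HOME
/usr/java/latest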

2. Disable the firewall

[root@CentOS ~]# systemctl stop firewalld # stop the service
[root@CentOS ~]# systemctl disable firewalld # disable autostart at boot
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@CentOS ~]# firewall-cmd --state # check the firewall status
not running

3. Configure the hostname and IP mapping

[root@CentOS ~]# cat /etc/hostname 
CentOS
[root@CentOS ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.186.150 CentOS
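
The mapping can be verified by resolving the hostname, for example:

[root@CentOS ~]# ping -c 1 CentOS   # should resolve to 192.168.186.150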

4. Configure passwordless SSH login

[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6yYiypvclJAZLU2WHvzakxv6uNpsqpwk8kzsjLv3yJA root@CentOS
The key's randomart image is:
+---[RSA 2048]----+
|  .o.            |
|  =+             |
| o.oo            |
|  =. .           |
| +  o . S        |
| o...=   .       |
|E.oo. + .        |
|BXX+o....        |
|B#%O+o o.        |
+----[SHA256]-----+
[root@CentOS ~]# ssh-copy-id CentOS
[root@CentOS ~]# ssh CentOS
Last failed login: Mon Jan  6 14:30:49 CST 2020 from centos on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Mon Jan  6 14:20:27 2020 from 192.168.186.1

5. Upload the Hadoop package and extract it to the /usr directory

[root@CentOS ~]# tar -zxf  hadoop-2.9.2.tar.gz -C /usr/

6. Configure the HADOOP_HOME environment variable

[root@CentOS ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc    
[root@CentOS ~]# hadoop classpath # print the Hadoop classpath
/usr/hadoop-2.9.2/etc/hadoop:/usr/hadoop-2.9.2/share/hadoop/common/lib/*:/usr/hadoop-2.9.2/share/hadoop/common/*:/usr/hadoop-2.9.2/share/hadoop/hdfs:/usr/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/usr/hadoop-2.9.2/share/hadoop/hdfs/*:/usr/hadoop-2.9.2/share/hadoop/yarn:/usr/hadoop-2.9.2/share/hadoop/yarn/lib/*:/usr/hadoop-2.9.2/share/hadoop/yarn/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/*:/usr/hadoop-2.9.2/contrib/capacity-scheduler/*.jar

7. Modify core-site.xml (add the following properties inside the <configuration> element)

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access entry point -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>

8. Modify hdfs-site.xml

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml 
<!-- block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- host running the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- maximum number of files a DataNode can serve concurrently -->
<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
</property>
<!-- DataNode handler thread count -->
<property>
        <name>dfs.datanode.handler.count</name>
        <value>6</value>
</property>

9. Modify the slaves file

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/slaves 
CentOS

10. Format the NameNode to generate the initial fsimage

[root@CentOS ~]# hdfs namenode -format
[root@CentOS ~]# yum install -y tree
[root@CentOS ~]# tree /usr/hadoop-2.9.2/hadoop-root/
/usr/hadoop-2.9.2/hadoop-root/
└── dfs
    └── name
        └── current
            ├── fsimage_0000000000000000000
            ├── fsimage_0000000000000000000.md5
            ├── seen_txid
            └── VERSION

3 directories, 4 files

11. Start the HDFS service

[root@CentOS ~]# start-dfs.sh 
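
Once start-dfs.sh returns, jps should list NameNode, DataNode, and SecondaryNameNode, and a simple HDFS command confirms the NameNode is serving requests (the web UI is at http://CentOS:50070):

[root@CentOS ~]# jps            # expect NameNode, DataNode, SecondaryNameNode
[root@CentOS ~]# hdfs dfs -ls / # an empty listing (no error) means HDFS is reachable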

ZooKeeper Installation (Coordination)

1. Upload the ZooKeeper package and extract it to the /usr directory

[root@CentOS ~]# tar -zxf zookeeper-3.4.12.tar.gz -C /usr/

2. Configure ZooKeeper's zoo.cfg

[root@CentOS ~]# cd /usr/zookeeper-3.4.12/
[root@CentOS zookeeper-3.4.12]# cp conf/zoo_sample.cfg conf/zoo.cfg
[root@CentOS zookeeper-3.4.12]# vi conf/zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

3. Create the ZooKeeper data directory (matching dataDir in zoo.cfg)

[root@CentOS ~]# mkdir /root/zkdata

4. Start the ZooKeeper service

[root@CentOS ~]# cd /usr/zookeeper-3.4.12/
[root@CentOS zookeeper-3.4.12]# ./bin/zkServer.sh start zoo.cfg
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@CentOS zookeeper-3.4.12]# ./bin/zkServer.sh status zoo.cfg
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.12/bin/../conf/zoo.cfg
Mode: standalone
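
As an optional check, the bundled CLI can connect to the standalone server; a fresh instance should only contain the /zookeeper node.

[root@CentOS zookeeper-3.4.12]# ./bin/zkCli.sh -server CentOS:2181
[zk: CentOS:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: CentOS:2181(CONNECTED) 1] quit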

HBase Configuration and Installation (Database Service)

1. Upload the HBase binary package built above and extract it to the /usr directory

[root@CentOS ~]# tar -zxf hbase-2.2.2-bin.tar.gz -C /usr/

2. Configure the HBASE_HOME environment variable

[root@CentOS ~]# vi .bashrc 
HBASE_HOME=/usr/hbase-2.2.2
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
export HBASE_HOME

[root@CentOS ~]# source .bashrc 
[root@CentOS ~]# hbase classpath # verify that HBase picks up the Hadoop classpath
/usr/hbase-2.2.2/bin/../conf:/usr/java/latest/lib/tools.jar:/usr/hbase-2.2.2/bin/..:/usr/hbase-2.2.2/bin/../lib/shaded-clients/hbase-shaded-client-byo-hadoop-2.2.2.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/log4j-1.2.17.jar:/usr/hbase-2.2.2/bin/../lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/usr/hadoop-2.9.2/etc/hadoop:/usr/hadoop-2.9.2/share/hadoop/common/lib/*:/usr/hadoop-2.9.2/share/hadoop/common/*:/usr/hadoop-2.9.2/share/hadoop/hdfs:/usr/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/usr/hadoop-2.9.2/share/hadoop/hdfs/*:/usr/hadoop-2.9.2/share/hadoop/yarn:/usr/hadoop-2.9.2/share/hadoop/yarn/lib/*:/usr/hadoop-2.9.2/share/hadoop/yarn/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/usr/hadoop-2.9.2/share/hadoop/mapreduce/*:/usr/hadoop-2.9.2/contrib/capacity-scheduler/*.jar

3. Configure hbase-site.xml

[root@CentOS ~]# cd /usr/hbase-2.2.2/
[root@CentOS hbase-2.2.2]# vi conf/hbase-site.xml
<!-- HBase root directory on HDFS -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://CentOS:9000/hbase</value>
</property>
<!-- run HMaster and HRegionServer as a distributed (non-local) cluster -->
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<!-- external ZooKeeper quorum -->
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>CentOS</value>
</property>
<!-- ZooKeeper client port -->
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>
<!-- do not enforce hflush/hsync stream capabilities on the underlying filesystem -->
<property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
</property>

4. Modify hbase-env.sh and set HBASE_MANAGES_ZK to false

[root@CentOS ~]# cd /usr/hbase-2.2.2/
[root@CentOS hbase-2.2.2]# grep -i HBASE_MANAGES_ZK conf/hbase-env.sh 
# export HBASE_MANAGES_ZK=true
[root@CentOS hbase-2.2.2]# vi conf/hbase-env.sh 
export HBASE_MANAGES_ZK=false
[root@CentOS hbase-2.2.2]# grep -i HBASE_MANAGES_ZK conf/hbase-env.sh 
export HBASE_MANAGES_ZK=false

export HBASE_MANAGES_ZK=false tells HBase to use the external ZooKeeper started above instead of launching its own embedded instance.

5. Start HBase

[root@CentOS hbase-2.2.2]# ./bin/start-hbase.sh 
starting master, logging to /usr/hbase-2.2.2/logs/hbase-root-master-CentOS.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
starting regionserver, logging to /usr/hbase-2.2.2/logs/hbase-root-1-regionserver-CentOS.out
[root@CentOS hbase-2.2.2]# jps
3090 NameNode
5027 HMaster
3188 DataNode
5158 HRegionServer
3354 SecondaryNameNode
5274 Jps
3949 QuorumPeerMain
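
Both HMaster and HRegionServer are running. As a quick smoke test (the table name, column family, and values below are arbitrary examples), create and scan a table from the HBase shell; the master web UI is also reachable at http://CentOS:16010.

[root@CentOS hbase-2.2.2]# hbase shell
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> exit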