1. Extract the archive
[hdp01@hdp01 apps]$ tar zxvf zookeeper-3.4.10.tar.gz
Create a soft link, so that replacing the package later does not also require changing the configuration
[hdp01@hdp01 apps]$ ln -s zookeeper-3.4.10 zookeeper
[hdp01@hdp01 apps]$ ls
apache-hive-2.3.6-bin hadoop-2.7.7 hive mysql-rpm zookeeper zookeeper-3.4.10
2. Configure environment variables
[hdp01@hdp01 zookeeper]$ vi ~/.bash_profile
export ZK_HOME=/home/hdp01/apps/zookeeper
export PATH=$PATH:$ZK_HOME/bin
[hdp01@hdp01 zookeeper]$ source ~/.bash_profile
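The two export lines above can also be appended non-interactively. A minimal sketch, where PROFILE is a local stand-in (on the nodes, point it at ~/.bash_profile); the grep guard keeps the snippet from appending duplicates on a re-run:

```shell
# Append the ZooKeeper variables to the profile, but only once.
# PROFILE is a stand-in path for demonstration; use ~/.bash_profile on the nodes.
PROFILE=./bash_profile
touch "$PROFILE"
grep -q 'ZK_HOME' "$PROFILE" || cat >> "$PROFILE" <<'EOF'
export ZK_HOME=/home/hdp01/apps/zookeeper
export PATH=$PATH:$ZK_HOME/bin
EOF
cat "$PROFILE"
```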
3. Edit the ZooKeeper configuration file
1. First look at the default configuration and what each setting means
[hdp01@hdp01 conf]$ cp zoo_sample.cfg zoo.cfg
[hdp01@hdp01 conf]$ cat zoo.cfg
# The number of milliseconds of each tick
# Interval of each heartbeat tick: 2000 ms
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# Default number of ticks the initial synchronization phase may take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Number of ticks allowed between sending a request and receiving an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Snapshot storage directory; also where the crucial id file lives.
# Within a zk cluster, the nodes talk to each other using agreed-upon ids.
# An id is a number between 1 and 255, assigned by hand; ids must not collide
# among the nodes of the same cluster.
# The id file needs a storage directory, and the directory set below is it.
# Never leave this as /tmp.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
# Port that clients (e.g., the Java API) use to connect to zk
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
# Maximum number of concurrent client connections (per client IP)
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Append the cluster's nodes at the end of the file (one line per node):
server.id=hostname:2888:3888
server: marks a zk server entry
id: the zk node's id, 1-255
hostname: the host the id maps to
2888: peer-communication (heartbeat) port
3888: leader-election port
For example:
server.1=hdp01:2888:3888
server.2=hdp02:2888:3888
server.5=hdp03:2888:3888
2. Edit zoo.cfg
# Change
dataDir=/home/hdp01/zookeeperdata
## Append
server.1=hdp01:2888:3888
server.2=hdp02:2888:3888
server.3=hdp03:2888:3888
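The two edits above can also be scripted. A minimal sketch; the snippet builds a small stand-in config so it can be tried safely first — in practice, point ZK_CONF at $ZK_HOME/conf/zoo.cfg and drop the stand-in line:

```shell
# ZK_CONF is a local stand-in; use $ZK_HOME/conf/zoo.cfg on the real node.
ZK_CONF=./zoo.cfg
# Stand-in config for demonstration only (skip this line against the real file):
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > "$ZK_CONF"
# Change the snapshot/myid directory
sed -i 's|^dataDir=.*|dataDir=/home/hdp01/zookeeperdata|' "$ZK_CONF"
# Append the ensemble members
cat >> "$ZK_CONF" <<'EOF'
server.1=hdp01:2888:3888
server.2=hdp02:2888:3888
server.3=hdp03:2888:3888
EOF
grep -E '^(dataDir|server\.)' "$ZK_CONF"
```

Note that `sed -i` edits in place as used by GNU sed on Linux; on other platforms the flag behaves differently.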
3. Create the id file
Create a myid file on every node; the file must be named exactly myid
The myid file must contain only this node's zk id, with no extra spaces or newlines
# Node hdp01
[hdp01@hdp01 conf]$ mkdir ~/zookeeperdata
[hdp01@hdp01 conf]$ cd /home/hdp01/zookeeperdata
[hdp01@hdp01 zookeeperdata]$ vi myid
1
# Node hdp02
[hdp01@hdp02 ~]$ mkdir ~/zookeeperdata
[hdp01@hdp02 ~]$ cd zookeeperdata/
[hdp01@hdp02 zookeeperdata]$ vi myid
2
# Node hdp03
[hdp01@hdp03 ~]$ mkdir ~/zookeeperdata
[hdp01@hdp03 ~]$ cd zookeeperdata/
[hdp01@hdp03 zookeeperdata]$ vi myid
3
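The three vi sessions above can each be replaced with a one-liner; printf avoids the stray newline or whitespace that an editor can leave behind. A minimal sketch, where DATADIR is a local stand-in for /home/hdp01/zookeeperdata and MYID is the only value that changes per node:

```shell
# Create the data directory and write this node's id into myid.
# MYID is 1 on hdp01, 2 on hdp02, 3 on hdp03.
MYID=1
DATADIR=./zookeeperdata   # stand-in for /home/hdp01/zookeeperdata
mkdir -p "$DATADIR"
printf '%s' "$MYID" > "$DATADIR/myid"   # printf: no trailing newline or spaces
cat "$DATADIR/myid"
```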
4. Copy the already-configured installation package to the other nodes
Copy to the other nodes
[hdp01@hdp01 apps]$ scp -r zookeeper-3.4.10 hdp02:/home/hdp01/apps/
[hdp01@hdp01 apps]$ scp -r zookeeper-3.4.10 hdp03:/home/hdp01/apps/
Create the soft link on the other nodes
# Node hdp02
[hdp01@hdp02 apps]$ ln -s zookeeper-3.4.10 zookeeper
[hdp01@hdp02 apps]$ ls
hadoop-2.7.7 zookeeper zookeeper-3.4.10
# Node hdp03
[hdp01@hdp03 apps]$ ln -s zookeeper-3.4.10 zookeeper
[hdp01@hdp03 apps]$ ls
hadoop-2.7.7 zookeeper zookeeper-3.4.10
Update the environment variables on the other nodes
# Node hdp02
[hdp01@hdp02 ~]$ vi .bash_profile
export ZK_HOME=/home/hdp01/apps/zookeeper
export PATH=$PATH:$ZK_HOME/bin
[hdp01@hdp02 ~]$ source .bash_profile
# Node hdp03
[hdp01@hdp03 ~]$ vi .bash_profile
export ZK_HOME=/home/hdp01/apps/zookeeper
export PATH=$PATH:$ZK_HOME/bin
[hdp01@hdp03 ~]$ source .bash_profile
5. Start ZooKeeper
On every node, manually run zkServer.sh start
jps
Check the processes: every node should show a QuorumPeerMain process
zkServer.sh status
Check the zk status
# Check the processes
[hdp01@hdp01 ~]$ jps
8656 NodeManager
8369 DataNode
15220 QuorumPeerMain
8263 NameNode
15289 Jps
[hdp01@hdp01 ~]$
# Check zk status
[hdp01@hdp01 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hdp01/apps/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[hdp01@hdp02 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hdp01/apps/zookeeper/bin/../conf/zoo.cfg
Mode: leader
# If zkServer.sh start has been run on only one node, no leader has been elected yet, so you will see the following
[hdp01@hdp01 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hdp01/apps/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
# Stop the cluster
[hdp01@hdp02 ~]$ zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /home/hdp01/apps/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
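Starting and checking every node by hand gets tedious. Assuming passwordless SSH between the nodes (the same assumption the scp step above relies on), the sketch below only prints the per-node commands as a dry run; pipe its output to sh to actually start the ensemble:

```shell
# Dry run: print the start command for every node.
# Assumes passwordless SSH; pipe the output to sh to really execute it.
cmds=$(for host in hdp01 hdp02 hdp03; do
  printf "ssh %s 'source ~/.bash_profile; zkServer.sh start'\n" "$host"
done)
echo "$cmds"
```

The same loop with `zkServer.sh status` in place of `start` checks each node's role after startup.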
6. Leader election at cluster startup
The election process when the cluster starts:
Basis: the id in the myid file
Rule: a smaller id automatically votes for a larger id; a node that collects more than half the votes becomes the leader
For example:
hdp01 -> myid=1
hdp02 -> myid=2
hdp03 -> myid=3
With 3 machines, whichever node collects 2 votes becomes the leader
- When hdp01 starts, it looks for a leader in the cluster and finds none, so it initiates a vote and votes for itself; with 1 vote out of 3 there is no majority, no leader can be elected, and hdp01 stays in an undecided (looking) state
- When hdp02 starts, it also finds no leader and initiates a vote; the smaller id yields its vote to the larger id, so both hdp01 and hdp02 vote for hdp02; with 2 votes hdp02 has a majority and becomes the leader, and hdp01 switches itself to follower
- When hdp03 starts, it finds a leader already exists, so after starting it simply switches itself to follower
If the startup order is changed to hdp01 -> hdp03 -> hdp02, the leader will be hdp03
So for a brand-new cluster (i.e., at first cluster startup), leader election is decided by two factors: myid and startup order
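The two factors can be seen in a toy simulation of the simplified rule above (this is a sketch of the rule as described, not the real ZAB/fast-leader-election protocol): every undecided voter backs the highest id seen so far, and a leader is fixed once it holds more than half of the 3-node ensemble's votes. Change the start order in the for line to see the outcome change:

```shell
# Toy model of startup election for a 3-node ensemble.
# Start order hdp01 -> hdp03 -> hdp02, i.e. ids 1, 3, 2.
ensemble=3
leader=""
started=0
max_id=0
for id in 1 3 2; do
  started=$((started + 1))
  if [ -n "$leader" ]; then
    # A leader already exists: the newcomer just follows it.
    echo "server.$id starts, sees leader server.$leader, becomes follower"
    continue
  fi
  # All undecided voters back the highest id seen so far.
  [ "$id" -gt "$max_id" ] && max_id=$id
  if [ $((started * 2)) -gt "$ensemble" ]; then
    # Majority reached: the highest id wins.
    leader=$max_id
    echo "server.$id starts, server.$leader wins with $started votes -> leader"
  else
    echo "server.$id starts, votes for server.$max_id ($started vote(s), no quorum)"
  fi
done
```

With this order the winner is server.3 (hdp03), matching the point above; with the order 1, 2, 3 the winner is server.2 (hdp02).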