ZooKeeper Cluster Installation

1. Create a zookeeper directory at the filesystem root (on service1, service2, and service3):

[root@localhost /]# mkdir zookeeper

Upload the file to the service1 server via Xshell: put zookeeper-3.4.6.tar.gz into the /software directory.
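If you prefer the command line over Xshell, an scp from your workstation does the same job (a sketch; it assumes service1 is 192.168.2.211, as in the server list configured below, and that /software already exists there):

scp zookeeper-3.4.6.tar.gz [email protected]:/software/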

2. Remotely copy /software/zookeeper-3.4.6.tar.gz from service1 to service2 and service3:

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz [email protected]:/software/

[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz [email protected]:/software/
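The two copies above can also be written as a single loop (a sketch reusing the same IPs; -r is only required for directories, so a plain scp would also do for a single tarball):

[root@localhost software]# for h in 192.168.2.212 192.168.2.213; do scp /software/zookeeper-3.4.6.tar.gz root@$h:/software/; done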

3. Copy /software/zookeeper-3.4.6.tar.gz to the /zookeeper/ directory (run on service1, service2, and service3):

[root@localhost software]# cp /software/zookeeper-3.4.6.tar.gz /zookeeper/

4. Extract zookeeper-3.4.6.tar.gz (run on service1, service2, and service3):

[root@localhost /]# cd /zookeeper/

[root@localhost zookeeper]# tar -zxvf zookeeper-3.4.6.tar.gz

5. Create two directories under /zookeeper: zkdata and zkdatalog (on service1, service2, and service3):

[root@localhost zookeeper]# mkdir zkdata

[root@localhost zookeeper]# mkdir zkdatalog
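The two mkdir calls can be combined into one (a sketch; -p also makes the command safe to re-run):

[root@localhost zookeeper]# mkdir -p /zookeeper/{zkdata,zkdatalog}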

6. Enter the /zookeeper/zookeeper-3.4.6/conf/ directory:

[root@localhost zookeeper]# cd /zookeeper/zookeeper-3.4.6/conf/

[root@localhost conf]# ls

configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg

7. Modify the zoo.cfg file:

# The number of milliseconds of each tick
tickTime=2000

# The number of ticks that the initial
# synchronization phase can take
initLimit=10

# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog

# the port at which the clients will connect
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3

# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888
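Each server.N line follows the pattern below: N must match the myid written on that node (step 9), the first port (12888 here) is used for follower-to-leader communication within the quorum, and the second (13888) for leader election. Note also that with autopurge.purgeInterval left commented out, automatic snapshot purging stays disabled even though snapRetainCount is set.

server.<myid>=<host>:<quorum port>:<election port>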

8. Apply the same zoo.cfg changes on service2 and service3, for example by copying the file over as shown below.
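A sketch, reusing the IPs from the server list above (the configuration is identical on all three nodes, so a straight copy is safe here):

[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg [email protected]:/zookeeper/zookeeper-3.4.6/conf/

[root@localhost conf]# scp /zookeeper/zookeeper-3.4.6/conf/zoo.cfg [email protected]:/zookeeper/zookeeper-3.4.6/conf/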

9. Write the myid file (inside the /zookeeper/zkdata directory):

[root@localhost /]# cd /zookeeper/zkdata

[root@localhost zkdata]# echo 1 > myid

10. Write the myid files on service2 and service3:

echo 2 > myid # on service2

echo 3 > myid # on service3
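Before starting anything, it is worth verifying that each node's myid matches its server.N line in zoo.cfg (a quick check; it should print 1 on service1, 2 on service2, and 3 on service3):

[root@localhost zkdata]# cat /zookeeper/zkdata/myid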

11. Look at the ZooKeeper scripts:

[root@localhost ~]# cd /zookeeper/zookeeper-3.4.6/bin/

[root@localhost bin]# ls

README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zookeeper.out

12. Run zkServer.sh to see its detailed usage:

[root@localhost bin]# ./zkServer.sh

JMX enabled by default

Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg

Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

13. Start the zk service on service1, service2, and service3:

[root@localhost bin]# ./zkServer.sh start
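If a node does not come up, check the startup log first (a sketch; as the bin listing in step 11 shows, zookeeper.out is written to the directory the server was started from, here bin/):

[root@localhost bin]# tail -n 50 zookeeper.out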

14. Check the zk process with jps (QuorumPeerMain is the ZooKeeper server process):

[root@localhost bin]# jps

31483 QuorumPeerMain

31664 Jps

15. Check the zk status on service1, service2, and service3 (you can see which node is the leader and which are followers):

[root@localhost bin]# ./zkServer.sh status

JMX enabled by default

Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: follower

[root@localhost bin]# ./zkServer.sh status

JMX enabled by default

Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: leader

16. Seeing a leader and followers confirms the cluster was installed successfully.
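As a final smoke test, you can connect with the bundled CLI, create a znode, and read it back; repeating the get against another server (e.g. -server 192.168.2.212:2181) confirms replication across the cluster (a sketch; the /test path and its value are illustrative):

[root@localhost bin]# ./zkCli.sh -server 192.168.2.211:2181

[zk: 192.168.2.211:2181(CONNECTED) 0] create /test hello

[zk: 192.168.2.211:2181(CONNECTED) 1] get /test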

