K8s in Practice: Deploying a Zookeeper Cluster

1. Zookeeper Overview

ZooKeeper is an open-source distributed coordination service, originally created at the well-known Internet company Yahoo, and it is an open-source implementation of Google's Chubby. In other words, zookeeper is a typical solution for distributed data consistency: on top of it, distributed applications can implement data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, distributed locks, and distributed queues.

2. PV/PVC and Zookeeper

3. Building the Zookeeper Image

3.1 Download the Java base image, retag it with the local harbor address, and push it to harbor
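
A minimal sketch of this step, assuming the upstream image name (substitute whatever Java base image you actually use):

nerdctl pull elevy/slim_java:8                                       # hypothetical upstream image name
nerdctl tag elevy/slim_java:8 harbor.ik8s.cc/baseimages/slim_java:8  # retag with the local harbor address
nerdctl push harbor.ik8s.cc/baseimages/slim_java:8                   # push to harbor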

3.2 Build the zookeeper image on top of the Java base image

root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# ll
total 36900
drwxr-xr-x  4 root root     4096 Jun  4 11:03 ./
drwxr-xr-x 11 root root     4096 Aug  9  2022 ../
-rw-r--r--  1 root root     1954 Jun  4 10:57 Dockerfile
-rw-r--r--  1 root root    63587 Jun 22  2021 KEYS
drwxr-xr-x  2 root root     4096 Jun  4 10:11 bin/
-rwxr-xr-x  1 root root      251 Jun  4 10:58 build-command.sh*
drwxr-xr-x  2 root root     4096 Jun  4 10:11 conf/
-rwxr-xr-x  1 root root     1156 Jun  4 11:03 entrypoint.sh*
-rw-r--r--  1 root root       91 Jun 22  2021 repositories
-rw-r--r--  1 root root     2270 Jun 22  2021 zookeeper-3.12-Dockerfile.tar.gz
-rw-r--r--  1 root root 37676320 Jun 22  2021 zookeeper-3.4.14.tar.gz
-rw-r--r--  1 root root      836 Jun 22  2021 zookeeper-3.4.14.tar.gz.asc
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# cat Dockerfile 
FROM harbor.ik8s.cc/baseimages/slim_java:8 

ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories 
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"
# Copy the configuration files and scripts
COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010
# Start zookeeper. ENTRYPOINT and CMD are used together: the CMD is passed as arguments to the entrypoint script, so the entrypoint is typically used for environment initialization; here it generates the zk cluster configuration at startup.
ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# cat conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# cat entrypoint.sh 
#!/bin/bash
# Generate the zk cluster configuration
echo ${MYID:-1} > /zookeeper/data/myid # write the value of $MYID into the myid file, defaulting to 1 if the variable is empty; MYID is a container-level environment variable set in the zk deployment manifest

if [ -n "$SERVERS" ]; then # continue only if $SERVERS is non-empty; SERVERS is also a container-level environment variable set in the zk deployment manifest
 IFS=\, read -a servers <<<"$SERVERS"  # IFS is a bash built-in field separator; the comma-separated $SERVERS string is split into the servers array
 for i in "${!servers[@]}"; do # ${!servers[@]} expands to the indices of the servers array; the index is used to derive each ZK server id
  printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg # append one server.N entry per member to /zookeeper/conf/zoo.cfg; %i and %s are filled from "$((1 + $i))" and "${servers[$i]}"
 done
fi

cd /zookeeper
exec "$@" # "$@" expands to all arguments passed to the script; exec replaces the current process (keeping its PID) with the new command, i.e. CMD [ "zkServer.sh", "start-foreground" ]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# cat build-command.sh 
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/zookeeper:${TAG} .
#sleep 1
#docker push  harbor.ik8s.cc/magedu/zookeeper:${TAG}

nerdctl  build -t harbor.ik8s.cc/magedu/zookeeper:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/zookeeper:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/zookeeper# 
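
With these files in place, the image is built and pushed by passing a tag to build-command.sh, for example (the tag simply matches the version referenced in the manifests below):

bash build-command.sh v3.4.14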

4. Testing the Zookeeper Image

4.1 Verify that the zk image has been pushed to harbor
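
Besides browsing the magedu project in the Harbor web UI, a quick sanity check is to pull the image back from a node (assuming the v3.4.14 tag used during the build):

nerdctl pull harbor.ik8s.cc/magedu/zookeeper:v3.4.14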

4.2 Run the zk image as a container and verify that it runs normally

root@k8s-node01:~# nerdctl run -it --rm -p 2181:2181 harbor.ik8s.cc/magedu/zookeeper:v3.4.14
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc" 
harbor.ik8s.cc/magedu/zookeeper:v3.4.14:                                          resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:6d0b49e75ea87a67c283a182a6addd53e495ab12c1d5c5a4d981ae655934c4ae: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:f8bfac50f84cc3f77a58a8d5bb5c47cf3f2a05f543ad7774306d485f09edacdf:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:88286f41530e93dffd4b964e1db22ce4939fffa4a4c665dab8591fbab03d4926:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7faf2f47dd69ec7a0e44611703ec6c400c3ca9248b306dc8c4928fecdf81cf85:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7141511c4dad1bb64345a3cd38e009b1bcd876bba3e92be040ab2602e9de7d1e:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:c2ac2f7a5c66a805ac53df2ac25e747244602b7cde77d854b0ff00a47ec8640f:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:3cf9e52a32f5617781d616893614fd88d791040b7a78bddc833f54a7d4362ba5:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:03aad7531339cf92a0feff169cef562fa9aa62f4eb3c1090968f36280522485c:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:11620dde2c1fc91a436998729ae0f0f0bcff774f7c32c2ef0910ff4125761c2c:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:fd529fe251b34db45de24e46ae4d8f57c5b8bbcfb1b8d8c6fb7fa9fcdca8905e:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:551a566b3c61bcc1fbdfbd8f4e67afd7b63fa0b1651c461d419619f08df6599c:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bc2237b8828aac97e90ba3b47f00f153338890638e4cfc3742c22cb92f4fd1f9:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0221a34207fe67c83ac7b33fb5be169754666ab4841abb0c050d2ff0022c1d1a:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 7.9 s                                                                    total:  101.0  (12.8 MiB/s)                                      
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
2023-06-04 11:11:58,379 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2023-06-04 11:11:58,384 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2023-06-04 11:11:58,384 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2023-06-04 11:11:58,385 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running  in standalone mode
2023-06-04 11:11:58,385 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2023-06-04 11:11:58,387 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2023-06-04 11:11:58,387 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
2023-06-04 11:11:58,399 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2023-06-04 11:11:58,402 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2023-06-04 11:11:58,402 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=07fd37df3459
2023-06-04 11:11:58,403 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_144
2023-06-04 11:11:58,403 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2023-06-04 11:11:58,403 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-oracle
2023-06-04 11:11:58,403 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/zookeeper/bin/../lib/log4j-1.2.17.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.14.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/bin/../conf:
2023-06-04 11:11:58,404 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-06-04 11:11:58,404 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2023-06-04 11:11:58,405 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
2023-06-04 11:11:58,405 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
2023-06-04 11:11:58,406 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
2023-06-04 11:11:58,406 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=5.15.0-72-generic
2023-06-04 11:11:58,406 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
2023-06-04 11:11:58,406 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
2023-06-04 11:11:58,407 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper
2023-06-04 11:11:58,409 [myid:] - INFO  [main:ZooKeeperServer@836] - tickTime set to 2000
2023-06-04 11:11:58,409 [myid:] - INFO  [main:ZooKeeperServer@845] - minSessionTimeout set to -1
2023-06-04 11:11:58,409 [myid:] - INFO  [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
2023-06-04 11:11:58,417 [myid:] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2023-06-04 11:11:58,422 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

The container starts successfully and listens on port 2181, which shows the zk image was built correctly. Since no environment variables were passed in this test, the image runs in standalone mode.

5. Running the Zookeeper Cluster on K8s

5.1 Prepare the PV directories on the NFS server

root@harbor:~# mkdir -pv /data/k8sdata/magedu/zookeeper-datadir-{1,2,3}
mkdir: created directory '/data/k8sdata/magedu/zookeeper-datadir-1'
mkdir: created directory '/data/k8sdata/magedu/zookeeper-datadir-2'
mkdir: created directory '/data/k8sdata/magedu/zookeeper-datadir-3'
root@harbor:~# cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)

/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)


/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~# 

5.2 Create the PVs on k8s

root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper/pv# cat zookeeper-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce 
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/zookeeper-datadir-1 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/zookeeper-datadir-2 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42  
    path: /data/k8sdata/magedu/zookeeper-datadir-3 
root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper/pv# 
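
Apply the PV manifest and confirm the three PVs show up as Available (routine commands, not captured from the original session):

kubectl apply -f zookeeper-persistentvolume.yaml
kubectl get pv | grep zookeeper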

5.3 Create the PVCs on k8s

root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper/pv# cat zookeeper-persistentvolumeclaim.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: magedu
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper/pv# 
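
Apply the PVC manifest and check that each PVC binds to its PV (again, routine commands rather than captured output):

kubectl apply -f zookeeper-persistentvolumeclaim.yaml
kubectl get pvc -n magedu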

5.4 Deploy zk

root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper# cat zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: magedu
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
      containers:
        - name: server
          image: harbor.ik8s.cc/magedu/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-1 
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
      containers:
        - name: server
          image: harbor.ik8s.cc/magedu/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-2 
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3
      containers:
        - name: server
          image: harbor.ik8s.cc/magedu/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-3
root@k8s-master01:~/k8s-data/yaml/magedu/zookeeper# 

The manifest above creates four Services and three Deployment controllers. One Service (zookeeper) is meant for clients; the other three are mainly used for communication inside the cluster. Each Service is associated with its Deployment's pod through the label selectors app: zookeeper plus server-id "1"/"2"/"3", i.e. Service zookeeper1 maps to Deployment zookeeper1, zookeeper2 to zookeeper2, and zookeeper3 to zookeeper3. Each zk pod mounts its own PVC, keeping the members' data isolated from one another: zk1 mounts pvc-1, zk2 mounts pvc-2, and zk3 mounts pvc-3.
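
To roll the cluster out, apply the manifest and watch the pods and services come up (illustrative commands):

kubectl apply -f zookeeper.yaml
kubectl get pods -n magedu -o wide
kubectl get svc -n magedu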

6. Verifying the Zookeeper Cluster Status

6.1 Exec into any pod and check the cluster role and configuration file

The cluster configuration is generated correctly, and this node's role is follower.
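
These checks can be reproduced with kubectl exec; addressing the Deployment lets kubectl pick one of its pods (names follow the manifest above):

kubectl -n magedu exec deploy/zookeeper1 -- zkServer.sh status             # shows Mode: follower or leader
kubectl -n magedu exec deploy/zookeeper1 -- cat /zookeeper/conf/zoo.cfg    # shows the generated server.N entries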

6.2 Stop the harbor service, delete the leader pod, and check whether the zk cluster re-elects a leader

root@harbor:~# systemctl stop harbor
root@harbor:~# 

When a zk cluster is first initialized, none of the nodes holds any data, so leader election simply compares MYID values and the node with the largest id wins. As shown above, after the leader pod is deleted, the zk cluster elects a new leader.
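
A sketch of the test, assuming zookeeper3 happens to be the current leader (check each member first and substitute the real leader):

# find the current leader
for d in zookeeper1 zookeeper2 zookeeper3; do
  kubectl -n magedu exec deploy/$d -- zkServer.sh status
done
# delete the leader pod (server-id=3 is only an example) and re-check the remaining members' roles
kubectl -n magedu delete pod -l server-id=3
kubectl -n magedu exec deploy/zookeeper1 -- zkServer.sh status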

6.3 Restore the harbor service, delete zk3 again, and check whether zk3 regains the leader role

root@harbor:~# systemctl start harbor
root@harbor:~# systemctl status harbor
● harbor.service - Harbor
     Loaded: loaded (/lib/systemd/system/harbor.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-06-04 12:02:00 UTC; 5s ago
       Docs: http://github.com/vmware/harbor
   Main PID: 186256 (docker-compose)
      Tasks: 9 (limit: 4571)
     Memory: 10.8M
        CPU: 305ms
     CGroup: /system.slice/harbor.service
             └─186256 /usr/local/bin/docker-compose -f /app/harbor/docker-compose.yml up

Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023/06/04 12:02:06.283 [D]  init global config instance failed. If you do not use this, just >
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/native/adapter.go:36]: the factory for adapter d>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/aliacr/adapter.go:30]: the factory for adapter a>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/awsecr/adapter.go:44]: the factory for adapter a>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/azurecr/adapter.go:29]: Factory for adapter azur>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/dockerhub/adapter.go:26]: Factory for adapter do>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/dtr/adapter.go:22]: the factory of dtr adapter w>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/githubcr/adapter.go:29]: the factory for adapter>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/gitlab/adapter.go:18]: the factory for adapter g>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/googlegcr/adapter.go:37]: the factory for adapte>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/harbor/adaper.go:31]: the factory for adapter ha>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/huawei/huawei_adapter.go:40]: the factory of Hua>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/jfrog/adapter.go:42]: the factory of jfrog artif>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/quay/adapter.go:53]: the factory of Quay adapter>
Jun 04 12:02:06 harbor.ik8s.cc docker-compose[186256]: harbor-jobservice  | 2023-06-04T12:02:06Z [INFO] [/pkg/reg/adapter/tencentcr/adapter.go:41]: the factory for adapte>
root@harbor:~#

After zk3 rejoins the cluster it does not become the leader again, because the cluster already has an existing leader.
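
Once the recreated zk3 pod is running again, its role can be confirmed directly (illustrative):

kubectl -n magedu exec deploy/zookeeper3 -- zkServer.sh status   # expected: Mode: follower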
