Deploying a ZooKeeper cluster on Kubernetes with a StatefulSet
This follows the official Kubernetes ZooKeeper tutorial, with the data volumes changed to locally created PVs (hostPath): https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/
1. The ZooKeeper image
We use k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10. This image already bundles the ZooKeeper start-up scripts, so it can be used as-is.
2. Creating the PVs
We are deploying three ZooKeeper nodes, so create a corresponding PV for each of them in pv-zk.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything" # storage class name; must match the PVC annotation
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/data/zookeeper" # local host directory to mount
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/data/zookeeper" # local host directory to mount
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/data/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
Create the PVs with:
kubectl create -f pv-zk.yaml
Then verify that all three PVs exist with:
kubectl get pv
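The three PV definitions above differ only in their name, so pv-zk.yaml can also be generated with a short shell loop. This is a sketch; it assumes the same hostPath directory, /opt/data/zookeeper, already exists on each node:

```shell
#!/bin/sh
# Generate pv-zk.yaml: three near-identical hostPath PersistentVolumes.
out=pv-zk.yaml
: > "$out"                     # truncate/create the output file
for i in 1 2 3; do
  cat >> "$out" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk$i
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/data/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
EOF
done
grep -c 'kind: PersistentVolume' "$out"   # prints 3
```

Apply the generated file with kubectl create -f pv-zk.yaml as above.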
3. The Kubernetes service and StatefulSet configuration
k8s-zk.yaml:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady # note: OrderedReady starts the pods sequentially; Parallel may report connection errors
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything" # must match the storage class of the PVs created above
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
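The --tick_time, --init_limit and --sync_limit flags above work in ticks: a follower gets init_limit ticks to connect and sync with the leader, and may then lag by at most sync_limit ticks. A quick check of the timeouts this manifest implies:

```shell
# Timeouts implied by the start-zookeeper flags in the StatefulSet above.
tick_time=2000   # --tick_time, ms per tick
init_limit=10    # --init_limit, in ticks
sync_limit=5     # --sync_limit, in ticks

echo "init timeout: $((tick_time * init_limit)) ms"   # prints 20000 ms
echo "sync timeout: $((tick_time * sync_limit)) ms"   # prints 10000 ms
```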
Start the pods with:
kubectl create -f k8s-zk.yaml
and check them with:
kubectl get pods
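The maxUnavailable: 1 in the zk-pdb PodDisruptionBudget reflects ZooKeeper's quorum rule: a strict majority of servers must stay up for the ensemble to serve requests. With three replicas:

```shell
# Quorum math behind maxUnavailable: 1 in the zk-pdb PodDisruptionBudget.
servers=3
quorum=$((servers / 2 + 1))       # majority: 2 of 3 must stay up
tolerated=$((servers - quorum))   # failures the ensemble survives
echo "quorum=$quorum tolerated_failures=$tolerated"   # prints quorum=2 tolerated_failures=1
```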
4. Checking the ZooKeeper cluster status
for i in 0 1 2; do kubectl exec zk-$i zkServer.sh status; done
You should see one leader and two followers.
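The check can be scripted by counting the Mode: lines that zkServer.sh prints. This sketch simulates the three pods' output with a heredoc; against a real cluster, replace the heredoc with the kubectl exec loop above:

```shell
# Count leader/follower roles from `zkServer.sh status` output.
# Simulated output for three pods; on a real cluster use:
#   status=$(for i in 0 1 2; do kubectl exec zk-$i zkServer.sh status; done)
status=$(cat <<'EOF'
Mode: follower
Mode: leader
Mode: follower
EOF
)
leaders=$(printf '%s\n' "$status" | grep -c 'Mode: leader')
followers=$(printf '%s\n' "$status" | grep -c 'Mode: follower')
echo "leaders=$leaders followers=$followers"   # prints leaders=1 followers=2
```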