Installing a Stolon (PostgreSQL cluster) instance with Helm

For ease of deployment, we install with Helm here.

1. Push the image to the private Harbor registry so that later installs can pull it faster

docker login https://reg01.sky-mobi.com #log in to Harbor
docker pull sorintlab/stolon:v0.16.0-pg10 #pull the image from the public registry to the local host
docker tag sorintlab/stolon:v0.16.0-pg10 reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10 #retag the image
docker push reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10 #push to our own Harbor registry
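
To confirm the push worked and that the nodes can reach the private registry, you can pull the image back on a Kubernetes node (a quick sanity check, not strictly required):
docker pull reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10 #should succeed without contacting Docker Hub
docker images | grep stolon #verify the retagged image is present locally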

2. Install with Helm

helm fetch stable/stolon --untar   #download the chart locally
kubectl create namespace yunwei-database #create the namespace
helm install postgresql stable/stolon -f values.yaml -n yunwei-database #values.yaml must be customized; my version is included later in this post
helm list -n yunwei-database
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
postgresql      yunwei-database 2               2020-06-05 14:51:08.823385211 +0800 CST deployed        stolon-1.5.8    0.13.0   

If the installation goes wrong and needs to be redone, delete the release first:
helm delete postgresql -n yunwei-database
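
After installing (or after a later helm upgrade), the release and the values actually in effect can be inspected; a short sketch:
helm status postgresql -n yunwei-database #release state and deployed notes
helm get values postgresql -n yunwei-database #the values.yaml overrides applied to this release
helm upgrade postgresql stable/stolon -f values.yaml -n yunwei-database #re-apply after editing values.yaml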

3. I am using Ceph here, so a secret is required; ceph-admin-secret.yaml is the manifest

cat ceph-admin-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8sadmin-secret
  namespace: yunwei-database
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  
kubectl apply -f ceph-admin-secret.yaml #create the secret
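
To verify the secret holds the right key (it must be the base64 output of ceph auth get-key client.admin), decode it back; a quick check:
kubectl get secret ceph-k8sadmin-secret -n yunwei-database -o jsonpath='{.data.key}' | base64 -d #should print the raw client.admin key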

4. Deployment complete; check the status

Because I reconfigured the release with helm upgrade, an update-cluster-spec pod shows up below; on a fresh install with no upgrade, that pod will not exist.
#kubectl get pod -n yunwei-database
NAME                                          READY   STATUS      RESTARTS   AGE
postgresql-stolon-create-cluster-jwhg9        0/1     Completed   0          23h
postgresql-stolon-keeper-0                    1/1     Running     0          46m
postgresql-stolon-keeper-1                    1/1     Running     0          52m
postgresql-stolon-proxy-6c9dbcc8-hvqx7        1/1     Running     0          23h
postgresql-stolon-proxy-6c9dbcc8-v84cz        1/1     Running     0          23h
postgresql-stolon-sentinel-7d898946c4-bvtpz   1/1     Running     0          23h
postgresql-stolon-sentinel-7d898946c4-tlkl7   1/1     Running     0          23h
postgresql-stolon-update-cluster-spec-5sx25   0/1     Completed   0          52m
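
Beyond pod status, stolon's own view of the cluster (master/standby roles, keeper health) can be checked with stolonctl from inside a keeper pod. A hedged sketch; the cluster name is assumed to default to the release fullname postgresql-stolon, adjust it if your chart values set something else:
kubectl exec -n yunwei-database postgresql-stolon-keeper-0 -- \
  stolonctl status --cluster-name postgresql-stolon \
  --store-backend kubernetes --kube-resource-kind configmap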

5. The parts of values.yaml I changed; adjust them to your own needs. The full set of options is documented in the Configuration section of the stable/stolon chart, and the PostgreSQL parameters follow postgresql.conf.

Point the image repository at the private registry, i.e. the address we pushed to earlier:
image:
  repository: reg01.sky-mobi.com/stolon/stolon
  tag: v0.16.0-pg10

Change the persistent volume configuration; I use Ceph here, provisioned dynamically through a StorageClass:
kubectl get storageclass #list the available StorageClasses
NAME       PROVISIONER    AGE
ceph-k8s   ceph.com/rbd   80d
ceph-rbd   ceph.com/rbd   120d


persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClassName: "ceph-k8s"
  accessModes:
    - ReadWriteOnce
  size: 200Gi
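
With dynamic provisioning enabled, each keeper gets its own PVC; you can confirm they were bound by the ceph-k8s StorageClass (the exact PVC names depend on the chart's volume claim template, e.g. data-postgresql-stolon-keeper-0):
kubectl get pvc -n yunwei-database #expect one Bound 200Gi volume per keeper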

PostgreSQL parameter settings; add whatever parameters your workload needs here:
pgParameters:
  max_connections: "1000"
  shared_buffers: "8192MB"
  maintenance_work_mem: "2048MB"
  listen_addresses: "*"
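
Once the release has been upgraded with these values, you can confirm they took effect on the running instance (a quick sketch; replace the host with your proxy or keeper address):
psql -h <proxy-or-keeper-ip> -p 5432 -U postgres -c "SHOW max_connections;" #should return 1000
psql -h <proxy-or-keeper-ip> -p 5432 -U postgres -c "SHOW shared_buffers;" #should reflect the 8192MB setting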

The keeper is a StatefulSet, so I added some resource requests and limits here:
keeper:
  uid_prefix: "keeper"
  replicaCount: 2
  annotations: {}
  resources:
    requests:
      cpu: 8000m
      memory: 24000Mi
    limits:
      cpu: 16000m
      memory: 48000Mi

The proxy is exposed through a Service; pinning the Service IP via clusterIP means applications do not have to change the address they connect to when the proxy is restarted after a failure:
proxy:
  replicaCount: 2
  annotations: {}
  resources: {}
  priorityClassName: ""
  service:
    type: ClusterIP
#    loadBalancerIP: ""
    annotations: {}
    ports:
      proxy:
        port: 5432
        targetPort: 5432
        protocol: TCP
    clusterIP: 10.109.5.21
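
After a redeploy you can confirm the proxy Service kept the pinned address (the Service name is assumed to follow the chart's <release>-stolon-proxy pattern):
kubectl get svc postgresql-stolon-proxy -n yunwei-database #CLUSTER-IP should remain 10.109.5.21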

6. PostgreSQL failover test. Find a machine with a psql client; in my setup the Kubernetes pod network is routable from outside the cluster, so a physical server can connect directly to the pod IPs.

postgresql-stolon-keeper-0     10.254.99.24    standby node
postgresql-stolon-keeper-1     10.254.99.40    master node

Connect directly to the standby:
 psql -h 10.254.99.24 -p 5432 postgres -U postgres     

 postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)
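
The stolon proxy always routes connections to the current master, so applications should normally connect through the proxy's pinned ClusterIP rather than a keeper pod IP. A quick sketch using the address configured above:
psql -h 10.109.5.21 -p 5432 postgres -U postgres -c "select pg_is_in_recovery();" #expect f, since the proxy points at the master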

Delete the StatefulSet without cascading, then delete the master pod keeper-1, leaving only the standby postgresql-stolon-keeper-0:
kubectl delete statefulset postgresql-stolon-keeper --cascade=false -n yunwei-database 
kubectl delete pod postgresql-stolon-keeper-1 -n yunwei-database

As shown below, the standby has been promoted to master:
 psql -h 10.254.99.24 -p 5432 postgres -U postgres     

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 f
(1 row)
Note: with the steps above, postgresql-stolon-keeper-1 will not be restarted automatically; only a failover happens, moving the master role to keeper-0.
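
To bring the keeper StatefulSet (and keeper-1) back after this test, the simplest route is to re-apply the chart; a hedged sketch, assuming Helm 3 recreates the missing StatefulSet on upgrade:
helm upgrade postgresql stable/stolon -f values.yaml -n yunwei-database #recreates the deleted postgresql-stolon-keeper StatefulSet
kubectl get pod -n yunwei-database -w #watch keeper-1 come back and resync as a standby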

7. Another scenario: delete the keeper master pod directly without deleting the StatefulSet

kubectl delete pod postgresql-stolon-keeper-1 -n yunwei-database
In this case, a new keeper pod is started automatically after the deletion, keeping the replica count of 2 we configured; a failover occurs and the other keeper is promoted to master, as sketched below.
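
To observe this scenario as it happens, a sketch (keeper-0's pod IP from the table above is assumed unchanged):
kubectl get pod -n yunwei-database -w #the StatefulSet recreates the deleted keeper-1
psql -h 10.254.99.24 -p 5432 postgres -U postgres -c "select pg_is_in_recovery();" #keeper-0 should now return f (promoted to master)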

References:
https://github.com/helm/charts/tree/master/stable/stolon
https://github.com/sorintlab/stolon/tree/master/examples/kubernetes
