Full table of contents for the Kubernetes journal series: Kubernetes實錄-目錄
I. The GlusterFS + Heketi environment
GlusterFS + Heketi can be deployed in containers and integrated into Kubernetes itself; that approach suits small environments or converged compute-and-storage architectures (it requires the Kubernetes nodes to have spare disks or other raw block devices, e.g. FC LUNs).
This journal instead uses an independently deployed GlusterFS + Heketi environment. For the deployment itself, see: GlusterFS操作記錄(5) GlusterFS+Heketi配置(獨立部署)
Hostname | IP address | Role | Notes |
---|---|---|---|
gluster-server-1 | 10.99.7.11 | glusterfs, heketi | Heketi service node; authentication enabled (account=admin, key=admin_secret) |
gluster-server-2 | 10.99.7.12 | glusterfs | |
gluster-server-3 | 10.99.7.13 | glusterfs | |
Heketi service endpoint: 10.99.7.11:8080
In production it is best to reach Heketi through a domain name rather than a raw IP.
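With authentication enabled, every request to this endpoint must carry a JWT bearer token signed with the user's key. The following is only a rough sketch of what heketi-cli does under the hood, using the admin/admin_secret credentials from the table above; the claim names (iss, iat, exp, and qsh = SHA-256 of "METHOD&PATH") follow the Heketi REST API, and openssl is assumed to be available for the HMAC:

```shell
# Sketch: hand-build the JWT bearer token an authenticated Heketi accepts.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

user="admin"; key="admin_secret"        # credentials from the table above
method="GET"; path="/volumes"           # the request this token authorizes
now=$(date +%s); exp=$((now + 600))     # token valid for 10 minutes

# qsh guards against URL tampering: SHA-256 hex of "METHOD&PATH"
qsh=$(printf '%s&%s' "$method" "$path" | openssl dgst -sha256 | awk '{print $NF}')

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
claims=$(printf '{"iss":"%s","iat":%d,"exp":%d,"qsh":"%s"}' "$user" "$now" "$exp" "$qsh" | b64url)
sig=$(printf '%s.%s' "$header" "$claims" | openssl dgst -sha256 -hmac "$key" -binary | b64url)
token="$header.$claims.$sig"
echo "$token"
# e.g.: curl -H "Authorization: bearer $token" http://10.99.7.11:8080/volumes
```

In practice heketi-cli (or the kubernetes.io/glusterfs provisioner) builds this token for you; the sketch is only meant to show what "enabling authentication" actually means on the wire.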
II. Kubernetes persistent storage (PV) on GlusterFS + Heketi (automatic mode)
Create a StorageClass in Kubernetes to enable Dynamic Provisioning: the StorageClass connects to Heketi, which creates GlusterFS volumes on demand.
A StorageClass is created by the Kubernetes administrator and is reusable: multiple PVCs can share a single StorageClass.
1.2 Configuring the StorageClass
Operator: cluster administrator
Scope: a StorageClass is cluster-wide, i.e. visible and usable from every namespace
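The `data.key` value in the Secret below is simply the Heketi admin key, base64-encoded. It can be produced like this (the `-n` matters: a trailing newline would change the encoding and break authentication):

```shell
# Encode the Heketi admin key for the Secret's data.key field.
echo -n "admin_secret" | base64
# YWRtaW5fc2VjcmV0
```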
# cat glusterfs-storageclass.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/glusterfs
metadata:
  name: heketi-secret
  namespace: kube-system
data:
  # base64 encoded; key=admin_secret
  key: YWRtaW5fc2VjcmV0
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class-1
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
# reclaimPolicy defaults to Delete (so this line may be omitted): deleting the
# PVC deletes the PV, and Heketi cleans up the volume automatically
reclaimPolicy: Delete
parameters:
  resturl: "http://10.99.7.11:8080"
  clusterid: "77b24830f331f2e12ca064d7daab3e43"
  volumetype: "replicate:3"
  gidMax: "50000"
  gidMin: "40000"
  # The settings below are needed only when Heketi authentication is enabled
  restauthenabled: "true"
  restuser: "admin"
  #restuserkey: "admin_secret"
  # Official recommendation: keep the key in a Secret rather than in restuserkey
  secretNamespace: "kube-system"
  secretName: "heketi-secret"
kubectl apply -f glusterfs-storageclass.yaml
secret/heketi-secret created
storageclass.storage.k8s.io/glusterfs-storage-class-1 created
kubectl get sc
NAME PROVISIONER AGE
glusterfs-storage-class-1 kubernetes.io/glusterfs 57s
kubectl get secret -n kube-system
NAME TYPE DATA AGE
...
heketi-secret kubernetes.io/glusterfs 1 57s
2. Configuring a PVC and deploying an application on the GlusterFS-backed storage
2.1 PVC configuration
Operator: application deployer
# cat demo_mode3_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc02
  namespace: default
spec:
  storageClassName: glusterfs-storage-class-1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Note: this PVC uses apiVersion v1, and the StorageClass is selected via spec.storageClassName.
On older clusters, the StorageClass may instead have to be specified through the volume.beta.kubernetes.io/storage-class annotation under metadata.annotations.
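For reference, a sketch of that legacy form: the same PVC selecting the class through the beta annotation instead of spec.storageClassName (all values carried over from the example above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc02
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage-class-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```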
kubectl apply -f demo_mode3_pvc.yaml
persistentvolumeclaim/glusterfs-vol-pvc02 created
# A PVC is not a cluster-wide resource; it is bound to a namespace (default here)
kubectl get pvc -n default
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
...
glusterfs-vol-pvc02 Bound pvc-30848f17-1f86-11e9-8a0e-1418776411a1 10Gi RWX glusterfs-storage-class-1 10s
# A PV is cluster-wide and not tied to any namespace
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
...
pvc-30848f17-1f86-11e9-8a0e-1418776411a1 10Gi RWX Delete Bound default/glusterfs-vol-pvc02 glusterfs-storage-class-1 4s
# Check on the heketi-cli host (authentication is enabled, so heketi-cli is aliased with the credentials; see the deployment notes)
heketi-cli volume list
Id:d1b3fcfeb86fe7eaffdccd4eb26f2bd8 Cluster:77b24830f331f2e12ca064d7daab3e43 Name:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
heketi-cli volume info d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Name: vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Size: 10
Volume Id: d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Cluster Id: 77b24830f331f2e12ca064d7daab3e43
Mount: 10.99.7.12:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8
Mount Options: backup-volfile-servers=10.99.7.11,10.99.7.13
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
Snapshot Factor: 1.00
At this point, the GlusterFS volume is available for use.
2.2 Deploying an application that uses the GlusterFS volume through the PVC
# cat demo_mode3_nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-demo-mode3-nginx
  labels:
    name: demo-mode3-nginx
spec:
  type: NodePort
  ports:
    - name: nginx
      port: 80
      nodePort: 31280
  selector:
    name: demo-mode3-nginx
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: demo-mode3-nginx
  labels:
    name: demo-mode3-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      name: demo-mode3-nginx
  template:
    metadata:
      labels:
        name: demo-mode3-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: demo-mode3-nginx-vol
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: demo-mode3-nginx-vol
          persistentVolumeClaim:
            claimName: glusterfs-vol-pvc02
kubectl apply -f demo_mode3_nginx.yaml
service/svc-demo-mode3-nginx created
deployment.apps/demo-mode3-nginx created
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
svc-demo-mode3-nginx NodePort 10.98.156.236 <none> 80:31280/TCP 4m50s
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
...
demo-mode3-nginx 2/2 2 2 5m49s
kubectl get rs
NAME DESIRED CURRENT READY AGE
...
demo-mode3-nginx-56c4fcdb4c 2 2 2 6m5s
kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-mode3-nginx-56c4fcdb4c-95v8r 1/1 Running 0 6m25s
demo-mode3-nginx-56c4fcdb4c-txbk4 1/1 Running 0 6m25s
2.4 Verification
# 1. Place an index.html file on the shared storage volume
kubectl exec -it demo-mode3-nginx-56c4fcdb4c-95v8r /bin/bash
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/# df -h
Filesystem Size Used Avail Use% Mounted on
...
10.99.7.11:vol_d1b3fcfeb86fe7eaffdccd4eb26f2bd8 10G 136M 9.9G 2% /usr/share/nginx/html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/# cd /usr/share/nginx/html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html#
cat <<EOF > index.html
<html>
<title>demo3</title>
<body>Hello, world</body>
</html>
EOF
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html# ls -l
total 1
-rw-r--r-- 1 root 40000 62 Jan 24 03:25 index.html
root@demo-mode3-nginx-56c4fcdb4c-95v8r:/usr/share/nginx/html# exit
# 2. Verify that the nginx service serves index.html (note the file's group above, 40000: a GID allocated from the StorageClass gidMin/gidMax range)
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
svc-demo-mode3-nginx NodePort 10.98.156.236 <none> 80:31280/TCP 13m
curl 10.99.12.201:31280 # nodeIP:NodePort
curl 10.98.156.236 # svc clusterIP
curl 192.168.3.10   # or 192.168.4.10 (nginx pod IPs)
<html>
<title>demo3</title>
<body>Hello, world</body>
</html>
All three access paths return the page, so the GlusterFS-backed volume is serving the application normally.