Introduction to dynamic PVs
Storage plugins with dynamic provisioning support in Kubernetes:
https://kubernetes.io/docs/concepts/storage/storage-classes/
The core of the dynamic provisioning mechanism is the StorageClass API object. A StorageClass declares a storage plugin and is used to create PVs automatically. It provides a way to describe "classes" of storage; different classes might map to different quality-of-service levels, backup policies, or other site-specific policies. Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
Attributes of a StorageClass
Provisioner: determines which volume plugin is used to provision PVs; this field is required. Both internal and external provisioners can be specified. The code for external provisioners lives in the kubernetes-incubator/external-storage repository, which includes NFS, Ceph, and others.
Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes the class creates, either Delete or Retain; if unspecified, it defaults to Delete.
More attributes: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
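As a minimal sketch of how these fields fit together (the name example-class, the provisioner example.com/nfs, and all values here are placeholders, not part of this article's setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-class
provisioner: example.com/nfs   # required: which volume plugin provisions PVs
parameters:                    # plugin-specific options
  archiveOnDelete: "true"
reclaimPolicy: Retain          # Delete (the default) or Retain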
The NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage and automatically creates PVs for matching PVCs. It does not provide NFS storage itself; an NFS server must already exist.
PVs are backed by directories named ${namespace}-${pvcName}-${pvName} on the NFS server.
When a PV is reclaimed, its directory is renamed to archived-${namespace}-${pvcName}-${pvName} on the NFS server.
nfs-client-provisioner source: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
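For example, assuming the NFS export is /nfsdata (a placeholder path), the export might look like this after a PV is provisioned and later reclaimed with archiving enabled:

/nfsdata/default-nfs-pv1-pvc-<uid>/            # while the PVC exists
/nfsdata/archived-default-nfs-pv1-pvc-<uid>/   # after the PVC is deleted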
NFS dynamic PV provisioning example
1. Configure authorization (RBAC):
$ vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f rbac.yaml
2. Deploy the NFS Client Provisioner (the NFS server address 172.25.0.1 and export path /nfsdata below are placeholders; substitute your own):
$ vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: westos.org/nfs   # must match the provisioner in the StorageClass
            - name: NFS_SERVER
              value: 172.25.0.1       # placeholder: your NFS server address
            - name: NFS_PATH
              value: /nfsdata         # placeholder: your NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.0.1        # placeholder: same as NFS_SERVER
            path: /nfsdata            # placeholder: same as NFS_PATH
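Apply the manifest and check that the provisioner Pod comes up (the Pod name suffix is generated, shown here as a placeholder):

$ kubectl create -f deployment.yaml
$ kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-<hash>-<id>        1/1     Running   0          15s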
3. Create the NFS StorageClass:
$ vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "false"
$ kubectl create -f class.yaml
$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage westos.org/nfs Delete Immediate false 70m
4. Create a PVC:
$ vim test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv1
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
$ kubectl create -f test-claim.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv1 Bound pvc-394e57eb-1467-4857-84fd-509266effdbd 100Mi RWX managed-nfs-storage 6s
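Behind the scenes the provisioner also created a matching PV, which kubectl get pv should show bound to the claim (output abbreviated):

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS
pvc-394e57eb-1467-4857-84fd-509266effdbd   100Mi      RWX            Delete           Bound    default/nfs-pv1   managed-nfs-storage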
To change this behavior, edit the parameters of the StorageClass (for example, archiveOnDelete).
5. Create a test Pod:
$ vim test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-pv1
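Create the Pod; it should run to completion and leave a SUCCESS file in the PV's backing directory on the NFS server (the export path /nfsdata is an assumption):

$ kubectl create -f test-pod.yaml
$ kubectl get pod test-pod    # expect STATUS Completed
# on the NFS server:
$ ls /nfsdata/default-nfs-pv1-pvc-394e57eb-1467-4857-84fd-509266effdbd/
SUCCESS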
Adding another PVC in the same way provisions another, independent PV.
A default StorageClass is used to dynamically provision storage for PersistentVolumeClaims that do not request a particular storage class (there can be only one default StorageClass).
If there is no default StorageClass and a PVC does not set storageClassName either, the PVC can only bind to PVs whose storageClassName is also "".
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
global (default) westos.org/nfs Delete Immediate false 5s
managed-nfs-storage westos.org/nfs Delete Immediate false 3h7m
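With a default class in place, a PVC that omits storageClassName is provisioned by it. A minimal sketch (the claim name default-class-claim is hypothetical):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: default-class-claim   # hypothetical name
spec:
  # no storageClassName here, so the default class ("global" above) is used
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi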
How a StatefulSet maintains Pod topology state through a Headless Service
1. Create a Headless Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
2. Create the StatefulSet controller:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
3. Create the resources and verify:
$ kubectl create -f statefulset.yaml
service/nginx-svc created
statefulset.apps/web created
$ kubectl get statefulsets.apps
NAME READY AGE
web 3/3 2m56s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
nginx-svc ClusterIP None <none> 80/TCP 14s
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
web-0 1/1 Running 0 7s 10.244.1.122 server2
web-1 1/1 Running 0 6s 10.244.2.113 server3
web-2 1/1 Running 0 5s 10.244.0.62 server1
A StatefulSet abstracts application state into two categories:
Topology state: application instances must start in a specific order, and a newly created Pod must have the same network identity as the Pod it replaces.
Storage state: multiple instances of the application are each bound to different storage data.
A StatefulSet numbers all of its Pods with ordinal indexes starting from 0. When a Pod is deleted and recreated, its network identity stays the same: the topology state is pinned by the fixed "name + ordinal" pattern, and every Pod gets a fixed, unique access point, namely its own DNS record:
$ dig -t A web-0.nginx-svc.default.svc.cluster.local @10.96.0.10
...
;; ANSWER SECTION:
web-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.122
The StatefulSet controller and storage state
The design of PVs and PVCs is what makes it possible for a StatefulSet to manage storage state:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: managed-nfs-storage   # must reference an existing StorageClass
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Pods are created strictly in ordinal order as well: for example, web-1 stays Pending until web-0 is Running and its Conditions report Ready.
The StatefulSet also allocates and creates a PVC with the same ordinal for each Pod. Kubernetes can then bind a PV to that PVC through the PersistentVolume mechanism, guaranteeing that every Pod owns an independent volume.
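Each claim is named <volumeClaimTemplate name>-<pod name>, so for the manifest above you would expect something like (output abbreviated, volume names elided):

$ kubectl get pvc
NAME        STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
www-web-0   Bound    pvc-...   1Gi        RWO            managed-nfs-storage
www-web-1   Bound    pvc-...   1Gi        RWO            managed-nfs-storage
www-web-2   Bound    pvc-...   1Gi        RWO            managed-nfs-storage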
Scaling with kubectl
First, identify the StatefulSet you want to scale, and make sure the application can tolerate being scaled:
$ kubectl get statefulsets <stateful-set-name>
Change the number of replicas of the StatefulSet:
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
If the StatefulSet was originally created with kubectl apply or kubectl create --save-config, update .spec.replicas in the StatefulSet manifest and then run kubectl apply:
$ kubectl apply -f <stateful-set-file-updated>
The field can also be edited in place with kubectl edit:
$ kubectl edit statefulsets <stateful-set-name>
Or use kubectl patch:
$ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
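For example, to scale the web StatefulSet from this article down to two replicas (2 is just an illustrative target):

$ kubectl patch statefulsets web -p '{"spec":{"replicas":2}}'
statefulset.apps/web patched

When scaling down, Pods are removed in reverse ordinal order, so web-2 is terminated first.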