k8s (13) --- Kubernetes Storage: Dynamic PV Provisioning

1. StorageClass

Overview and Attributes

A StorageClass provides a way to describe a "class" of storage. Different classes may map to different quality-of-service levels, backup policies, or other policies.

Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass needs to dynamically provision a PersistentVolume.

StorageClass attributes

  • Provisioner: determines which volume plugin is used to provision PVs; this field is required. Either an internal (in-tree) provisioner or an external one can be specified. External provisioners live at kubernetes-incubator/external-storage, which includes NFS, Ceph, and others.
  • Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the PersistentVolumes this class creates, either Delete or Retain; if unspecified, it defaults to Delete.
  • More attributes: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
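Putting these fields together, a minimal StorageClass sketch looks like the following (the provisioner value example.com/nfs here is a hypothetical placeholder, not a real plugin name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc              # name that PVCs reference
provisioner: example.com/nfs    # required: which volume plugin provisions PVs
reclaimPolicy: Retain           # Delete (the default) or Retain
```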

NFS Client Provisioner

The NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage and creates PVs automatically in response to PVCs. It does not provide NFS storage itself; an existing NFS service must already be available.

  • On the NFS server, each PV is provisioned as a directory named ${namespace}-${pvcName}-${pvName}
  • When a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName} (on the NFS server)

nfs-client-provisioner source code: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

2. NFS Dynamic PV Provisioning Example

Preparation

First, make sure the NFS server is running normally:

[root@server1 ~]# showmount -e
Export list for server1:
/nfs *

RBAC Configuration

Next, configure role-based access control for the provisioner:

[root@server1 pv]# mkdir nfsclass
[root@server1 pv]# cd nfsclass/
[root@server1 nfsclass]# vim rbac.yaml 
[root@server1 nfsclass]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Deploying the NFS Client Provisioner

[root@server1 nfsclass]# vim deployment.yaml 
[root@server1 nfsclass]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default                  # use the default namespace
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: nfs-client-provisioner:latest    # image name
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: westos.org/nfs               # provisioner name
            - name: NFS_SERVER
              value: 172.25.63.1                  # NFS server address
            - name: NFS_PATH
              value: /nfs                         # NFS export path
      volumes:
        - name: nfs-client-root                   # NFS volume
          nfs:
            server: 172.25.63.1
            path: /nfs

The required image, nfs-client-provisioner:latest, can be pulled in advance and pushed to a private registry so that deployment is faster.

Creating the NFS StorageClass

[root@server1 nfsclass]# vim class.yaml 
[root@server1 nfsclass]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage           # StorageClass name
provisioner: westos.org/nfs           # provisioner name, matching PROVISIONER_NAME above
parameters:
  archiveOnDelete: "false"

Here archiveOnDelete: "false" means the data is not archived when the PV is deleted; when set to "true", the data is archived on deletion.

Applying the Manifests

Before applying, delete all existing PVs and PVCs in the environment.

[root@server1 nfsclass]# kubectl get pv
No resources found in default namespace.
[root@server1 nfsclass]# kubectl get pvc
No resources found in default namespace.

Then all the manifests in the directory (rbac.yaml, deployment.yaml, class.yaml) can be applied at once:

[root@server1 nfsclass]# kubectl apply -f .

Check the status:

[root@server1 nfsclass]# kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-6b66ddf664-zjvtv   1/1     Running   0          62s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   18d
service/myservice    ClusterIP   10.101.31.155   <none>        80/TCP    14d

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           63s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-6b66ddf664   1         1         1       63s

At this point no PV is created yet, but a StorageClass (sc) is created for us:

[root@server1 nfsclass]# kubectl get pv
No resources found in default namespace.
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   westos.org/nfs   Delete          Immediate           false                  76s

Creating a Test PVC

[root@server1 nfsclass]# vim pvc.yaml 
[root@server1 nfsclass]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
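Note that the volume.beta.kubernetes.io/storage-class annotation used above is a legacy form; on current clusters the equivalent claim can be written with spec.storageClassName instead (a sketch assuming the same class name):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # replaces the beta annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```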

Create it:

[root@server1 nfsclass]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/test-claim created

A PV is now provisioned for us:

[root@server1 nfsclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-2a8ecd7a-a181-42dd-bae6-53ead083dcbc   100Mi      RWX            Delete           Bound    default/test-claim   managed-nfs-storage            7s

After the PVC is created, a directory named ${namespace}-${pvcName}-${pvName} appears in the NFS server's export:

[root@server1 nfsclass]# ls /nfs/
default-test-claim-pvc-2a8ecd7a-a181-42dd-bae6-53ead083dcbc

And when we delete the PVC, the PV is deleted along with it:

[root@server1 nfsclass]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "test-claim" deleted
[root@server1 nfsclass]# kubectl get pvc
No resources found in default namespace.
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl get pv
No resources found in default namespace.

Next, set the archive-on-delete policy to true:

[root@server1 nfsclass]# vim class.yaml 
[root@server1 nfsclass]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "true"

Re-create the StorageClass:

[root@server1 nfsclass]# kubectl delete -f class.yaml 
storageclass.storage.k8s.io "managed-nfs-storage" deleted
[root@server1 nfsclass]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/managed-nfs-storage created
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   westos.org/nfs   Delete          Immediate           false                  15s

Then create two PVCs:

[root@server1 nfsclass]# vim pvc.yaml 
[root@server1 nfsclass]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-2
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl apply -f pvc.yaml 

Check the PV status:

[root@server1 nfsclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
pvc-093922b7-048a-4c1f-97e9-d77afe0ec37b   200Mi      RWX            Delete           Bound    default/test-claim-2   managed-nfs-storage            12s
pvc-34761774-0a1f-4519-9d44-35c3736ce550   100Mi      RWX            Delete           Bound    default/test-claim     managed-nfs-storage            2m9s

Delete these PVCs and look at the NFS export again:

[root@server1 nfsclass]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "test-claim-2" deleted
[root@server1 nfsclass]# kubectl get pv
No resources found in default namespace.

[root@server1 nfsclass]# ls /nfs/
archived-default-test-claim-2-pvc-093922b7-048a-4c1f-97e9-d77afe0ec37b
archived-default-test-claim-pvc-34761774-0a1f-4519-9d44-35c3736ce550

As shown, the data is archived under the name archived- plus the original directory name.

Creating a Test Pod

Create a test pod and the matching PVC:

[root@server1 nfsclass]# vim pod.yaml
[root@server1 nfsclass]# cat pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: nginx
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@server1 nfsclass]# 
[root@server1 nfsclass]# vim pvc.yaml 
[root@server1 nfsclass]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
[root@server1 nfsclass]# 
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/test-claim created
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl apply -f pod.yaml 
pod/test-pod created

Check the pod status:

[root@server1 nfsclass]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6b66ddf664-zjvtv   1/1     Running   0          15m   10.244.1.81   server2   <none>           <none>
test-pod                                  1/1     Running   0          33s   10.244.2.70   server3   <none>           <none>

At this point accessing the pod fails with 403 Forbidden, because nginx has no index page yet.

To create a default page, write it directly into the NFS export directory:

[root@server1 nfsclass]# ls /nfs/
default-test-claim-pvc-17f37558-e8da-4b92-9d11-3a0adab97d8b
[root@server1 nfsclass]# 
[root@server1 nfsclass]# echo redhat > /nfs/default-test-claim-pvc-17f37558-e8da-4b92-9d11-3a0adab97d8b/index.html

Accessing the pod again now returns the page:

[root@server1 nfsclass]# curl 10.244.2.70
redhat

Inspect the pod's mounts:

[root@server1 nfsclass]# kubectl describe pod test-pod 

Clean up after the experiment:

[root@server1 nfsclass]# kubectl delete -f pod.yaml 
[root@server1 nfsclass]# kubectl delete -f pvc.yaml 

3. Default StorageClass

The default StorageClass is used to dynamically provision storage for PersistentVolumeClaims that do not request a specific storage class (there can be only one default StorageClass).
If there is no default StorageClass and the PVC does not specify a storageClassName either, the PVC can only bind to PVs whose storageClassName is also "".
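A cluster marks its default class with the storageclass.kubernetes.io/is-default-class annotation. This can also be set declaratively in the manifest; a sketch reusing the class from this example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "true"
```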

As in the example above, if the PVC does not specify a storage class, it stays in the Pending state:

[root@server1 nfsclass]# vim pvc.yaml 
[root@server1 nfsclass]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

[root@server1 nfsclass]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   westos.org/nfs   Delete          Immediate           false                  12m
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/test-claim created
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl get pvc
NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Pending                                                     10s
[root@server1 nfsclass]# kubectl get pvc
NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Pending                                                     12s

In this case, the previously created StorageClass can be set as the default with the following command:

[root@server1 nfsclass]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

After the change, kubectl get sc shows the class marked as the default. Now re-apply the PVC manifest:

[root@server1 nfsclass]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "test-claim" deleted
[root@server1 nfsclass]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/test-claim created
[root@server1 nfsclass]# 
[root@server1 nfsclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-95e11d69-4199-43e3-9084-72e3ad0c36a9   100Mi      RWX            managed-nfs-storage   9s

As shown, the PVC is now bound as expected.
