Alibaba Cloud Kubernetes CSI in Practice - Topology-Aware Cloud Disk Volumes

Topology Aware

A cloud disk can only be attached to ECS instances in the same zone, so in a multi-zone cluster the PV, PVC, and Pod must be scheduled to a consistent zone.

1. Traditional approach:

The pod is scheduled to the appropriate zone based on the zone ID recorded on the PV/PVC;

That is, the zone ID is defined on the PV; when a pod uses this PV, the scheduler places the pod in the corresponding zone according to the PV's zone ID.

Example configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    failure-domain.beta.kubernetes.io/region: cn-beijing
    failure-domain.beta.kubernetes.io/zone: cn-beijing-e
  name: d-2ze6wbcvihjeaw5bae0c
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  csi:
    driver: diskplugin.csi.alibabacloud.com
    volumeHandle: d-2ze6wbcvihjeaw5bae0c
  persistentVolumeReclaimPolicy: Delete
  storageClassName: alicloud-disk-ssd

Scheduling is driven by adding the following labels to the PV:

  labels:
    failure-domain.beta.kubernetes.io/region: cn-beijing
    failure-domain.beta.kubernetes.io/zone: cn-beijing-e
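
For completeness, here is a minimal sketch of a PVC and Pod that consume this statically defined PV (the PVC and Pod names are illustrative). Because the PV carries the zone labels above, the scheduler places the Pod on a node in cn-beijing-e:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc-static               # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: alicloud-disk-ssd
  volumeName: d-2ze6wbcvihjeaw5bae0c  # bind explicitly to the PV defined above
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static                  # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    volumeMounts:
    - name: disk
      mountPath: /data
  volumes:
  - name: disk
    persistentVolumeClaim:
      claimName: disk-pvc-static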

2. The Topology Aware approach

The approach above requires the zone of the cloud disk to be determined first; the pod is then scheduled to match it.

With the Topology Aware approach, the cloud disk is instead created in whichever zone the pod is scheduled to.

With the StorageClass below, the WaitForFirstConsumer binding mode delays PV provisioning until a pod that uses the PVC is scheduled; the zone of the PV (and of the cloud disk behind it) is then derived from the zone of the node the pod lands on.

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-aware
provisioner: diskplugin.csi.alibabacloud.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: cloud_ssd
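
If provisioning should additionally be restricted to specific zones rather than simply following the pod, a StorageClass can also declare allowedTopologies. A minimal sketch, assuming the driver exposes the topology key topology.diskplugin.csi.alibabacloud.com/zone (the key name is an assumption and may differ between driver versions):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-aware-zoned        # illustrative name
provisioner: diskplugin.csi.alibabacloud.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: cloud_ssd
allowedTopologies:
- matchLabelExpressions:
  # the topology key is an assumption; check the labels reported by the CSI node plugin
  - key: topology.diskplugin.csi.alibabacloud.com/zone
    values:
    - cn-beijing-e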

Environment Preparation

For cluster creation, dependency configuration, and CSI plugin deployment, see: CSI Deployment Guide

Create the StorageClass

WaitForFirstConsumer: the provisioner only creates the PV once a pod actually consumes the PVC;

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-aware
provisioner: diskplugin.csi.alibabacloud.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: cloud_ssd
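
Assuming the manifest above is saved as storageclass.yaml (the file name is illustrative), create and check it with:

# kubectl create -f storageclass.yaml
# kubectl get storageclass alicloud-disk-aware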

Create a Dynamic PVC and PV

Create the PVC from the template below; the PV will be provisioned dynamically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
  storageClassName: alicloud-disk-aware
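
Assuming the claim above is saved as pvc.yaml (the file name is illustrative), create it with:

# kubectl create -f pvc.yaml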

Because no pod is consuming the PVC yet (the binding mode is WaitForFirstConsumer), the PVC stays in the Pending state:
# kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
disk-pvc   Pending                                      alicloud-disk-aware   23m

Create the Application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-disk
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
          - name: disk-pvc
            mountPath: "/data"
      volumes:
        - name: disk-pvc
          persistentVolumeClaim:
            claimName: disk-pvc
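
Assuming the Deployment above is saved as deploy.yaml (the file name is illustrative), create it with the command below. Once the pod is scheduled to a node, the provisioner creates the cloud disk in that node's zone and the PVC binds:

# kubectl create -f deploy.yaml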

Verify the Mount and High Availability

Check that the PVC has been bound:
# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
disk-pvc   Bound    pvc-bf7732a6-a7cf-11e9-8dec-00163e0a6ecc   25Gi       RWO            alicloud-disk-aware   18s
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS          REASON   AGE
pvc-bf7732a6-a7cf-11e9-8dec-00163e0a6ecc   25Gi       RWO            Delete           Bound    default/disk-pvc   alicloud-disk-aware            7s

Inspect the PVC details; the volume.kubernetes.io/selected-node annotation records the node chosen by the scheduler, which is what determines the PV's zone:
# kubectl describe pvc disk-pvc
Name:          disk-pvc
Namespace:     default
StorageClass:  alicloud-disk-aware
Status:        Bound
Volume:        pvc-bf7732a6-a7cf-11e9-8dec-00163e0a6ecc
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: diskplugin.csi.alibabacloud.com
               volume.kubernetes.io/selected-node: cn-hangzhou.192.168.6.149
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      25Gi
Access Modes:  RWO
VolumeMode:    Filesystem
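
The dynamically provisioned PV also records the zone as a nodeAffinity constraint, so the pod can only ever be rescheduled onto nodes in that zone. A rough sketch of what that part of the PV spec typically looks like (the topology key and zone value are assumptions; inspect the real object with kubectl get pv -o yaml):

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        # key and value below are illustrative assumptions
        - key: topology.diskplugin.csi.alibabacloud.com/zone
          operator: In
          values:
          - cn-hangzhou-b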

Check the pod, verify that the cloud disk is mounted, and create a test file:
# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
nginx-disk-6d5659d745-2ts26   1/1     Running   0          2m1s
# kubectl exec nginx-disk-6d5659d745-2ts26 ls /data
lost+found
# kubectl exec nginx-disk-6d5659d745-2ts26 touch /data/test
# kubectl exec nginx-disk-6d5659d745-2ts26 ls /data
lost+found
test

Delete the pod and check whether the data is still intact in the recreated pod:
# kubectl delete pod nginx-disk-6d5659d745-2ts26
pod "nginx-disk-6d5659d745-2ts26" deleted

# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
nginx-disk-6d5659d745-m5z9p   1/1     Running   0          11s
# kubectl exec nginx-disk-6d5659d745-m5z9p ls /data
lost+found
test