Deploying a Ceph Cluster with Rook on an Existing Kubernetes Cluster

Preface:
This article walks through deploying a Ceph storage cluster with Rook on top of an existing Kubernetes cluster.
It also demonstrates the use of block storage and shared file storage, as well as creating and restoring PVC snapshots.
Based on the official Rook documentation: https://rook.github.io/docs/rook/v1.6/
Before deploying, attach an extra storage device to three cluster nodes (here a new 20G raw disk was added to k8s-master03, k8s-node01 and k8s-node02); see the quick check below.
Kubernetes version: 1.21.0, Rook version: v1.6
Node sizing: at least 5G of memory and 2 CPU cores per node
The clocks of all Kubernetes nodes must be kept in sync
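Before starting, it is worth confirming on each of the three nodes that the newly attached disk is visible and carries no filesystem or partitions, since Rook only consumes clean raw devices. A minimal check, assuming the new disk shows up as /dev/vdb (the device name may differ in your environment):

# lsblk -f

If the FSTYPE column for the device is empty, it can be used for an OSD; if the disk previously held data and you are certain it can be wiped, clean it first with wipefs -a /dev/vdb.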

1. Pull the official Rook source code
#git clone --single-branch --branch v1.6.11 https://github.com/rook/rook.git
#cd rook/cluster/examples/kubernetes/ceph
Create the RBAC permissions and the Rook CRD components, which are used to manage and control the Ceph cluster:
#kubectl create -f crds.yaml -f common.yaml
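A quick way to confirm that the Rook CRDs were registered before moving on (the exact list depends on the Rook version):

# kubectl get crd | grep ceph.rook.io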
Create the ConfigMap and the operator Deployment (by default this starts the discover, operator and related pods):
[root@k8s-master01 rook]# vim cluster/examples/kubernetes/ceph/operator.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  # should be in the namespace of the operator
  namespace: rook-ceph # namespace:operator
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"
  # Enable the default version of the Rook CSI RBD driver
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "false"
  # Set logging level for csi containers.
  # Supported values are 0-5; 0 is general useful logs, 5 is trace-level verbose logging.
  CSI_LOG_LEVEL: "0"
  # Whether to enable the CephFS snapshot feature
  CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
  # Whether to enable the RBD block-storage snapshot feature
  CSI_ENABLE_RBD_SNAPSHOTTER: "true"
  # Use the CephFS kernel driver
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  CSI_RBD_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  CSI_CEPHFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  # (Optional) Allow starting unsupported ceph-csi image
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # Default CSI images supported by Rook (mirror these into an internal registry if possible,
  # otherwise downloads may fail because the upstream images are hosted abroad)
  ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar:v2.0.1"
  ROOK_CSI_RESIZER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer:v1.0.1"
  ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter:v4.0.0"
  ROOK_CSI_ATTACHER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher:v3.0.2"
  ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
  ROOK_ENABLE_FLEX_DRIVER: "false"
  # Whether to run the discovery daemon that watches for storage devices on the cluster nodes
  # and automatically adds new disks to the cluster
  ROOK_ENABLE_DISCOVERY_DAEMON: "true"
  # Enable volume replication controller
  CSI_ENABLE_VOLUME_REPLICATION: "false"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph # namespace:operator
  labels:
    operator: rook
    storage-backend: ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
        - name: rook-ceph-operator
          image: rook/ceph:v1.6.3
          args: ["ceph", "operator"]
          volumeMounts:
            - mountPath: /var/lib/rook
              name: rook-config
            - mountPath: /etc/ceph
              name: default-config-dir
          env:
            - name: ROOK_CURRENT_NAMESPACE_ONLY
              value: "false"
            - name: ROOK_LOG_LEVEL
              value: "INFO"
            # The duration between discovering devices in the rook-discover daemonset.
            - name: ROOK_DISCOVER_DEVICES_INTERVAL
              value: "60m"
            - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
              value: "false"
            - name: ROOK_ENABLE_SELINUX_RELABELING
              value: "true"
            - name: ROOK_ENABLE_FSGROUP
              value: "true"
            # Disable automatic orchestration when new devices are discovered
            - name: ROOK_DISABLE_DEVICE_HOTPLUG
              value: "false"
            - name: DISCOVER_DAEMON_UDEV_BLACKLIST
              value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
            - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
              value: "5"
            # The name of the node to pass with the downward API
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The pod name to pass with the downward API
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # The pod namespace to pass with the downward API
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      #hostNetwork: true
      volumes:
        - name: rook-config
          emptyDir: {}
        - name: default-config-dir
          emptyDir: {}
[root@k8s-master01 rook]# kubectl create -f  cluster/examples/kubernetes/ceph/operator.yaml
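Before creating the Ceph cluster, wait until the operator pod and the discover pods are Running (the discover pods only appear because ROOK_ENABLE_DISCOVERY_DAEMON is set to "true" above):

# kubectl -n rook-ceph get pod -o wide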

2. Create the Ceph cluster (PS: start this step only after operator.yaml, crds.yaml and common.yaml above have all been created)

[root@k8s-master01 rook]# vim  cluster/examples/kubernetes/ceph/cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  cephVersion:
    image: ceph/ceph:v15.2.11
    allowUnsupported: false
  # Host path where Ceph cluster data is stored
  dataDirHostPath: /var/lib/rook
  # Whether to skip upgrade checks if a Ceph cluster upgrade fails
  skipUpgradeChecks: true
  # Whether or not continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # Skip the OSD health check if it takes longer than 10 minutes during an upgrade
  waitTimeoutForHealthyOSDInMinutes: 10
  mon:
    # Number of mon pod replicas; use an odd number, 3 is recommended
    count: 3
    # The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
    # Mons should only be allowed on the same node for test environments where data loss is acceptable.
    # Whether all mon replicas may be scheduled on the same node
    allowMultiplePerNode: false
  mgr:
    # Number of mgr replicas; increase it if higher mgr performance is needed
    count: 1
    modules:
      # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
      # are already enabled by other settings in the cluster CR.
      - name: pg_autoscaler
        enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    ssl: false
  # enable prometheus alerting for cluster monitoring
  monitoring:
    # requires Prometheus to be pre-installed
    enabled: false
    # namespace in which the prometheus rules are created
    rulesNamespace: rook-ceph
  network:
  crashCollector:
    disable: false
  cleanupPolicy:
    confirmation: ""
    # sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
    allowUninstallWithVolumes: false
  annotations:
  labels:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
    # OSD storage nodes: "name" must not be an IP; it must be the value of the node's kubernetes.io/hostname label
    nodes:
      - name: "k8s-master03"
        devices: # specific devices to use for storage can be specified for each node
          - name: "vdb1"
      - name: "k8s-node01"
        devices:
          - name: "vdb1"
      - name: "k8s-node02"
        devices:
          - name: "vdb1"
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
    pgHealthCheckTimeout: 0
    manageMachineDisruptionBudgets: false
    # Namespace in which to watch for the MachineDisruptionBudgets.
    machineDisruptionBudgetNamespace: openshift-machine-api
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe, it works for all mon,mgr,osd daemons
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
[root@k8s-master01 rook]# kubectl create -f cluster/examples/kubernetes/ceph/cluster.yaml
[root@k8s-master01 ~]# kubectl get cephcluster -n rook-ceph
NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE         MESSAGE                 HEALTH        EXTERNAL
rook-ceph   /var/lib/rook     3          95m   Progressing   Configuring Ceph OSDs   HEALTH_WARN
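The cluster stays in the Progressing phase while the OSDs are being prepared. You can follow the mon, mgr, osd-prepare and osd pods until everything is Running and the PHASE column turns to Ready (this usually takes a few minutes):

# kubectl -n rook-ceph get pod -w
# kubectl -n rook-ceph get cephcluster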
 

3. Install the Ceph snapshot controller

The snapshot controller provides the PVC snapshot functionality; a branch switch is needed here.

[root@k8s-master01 ~]# cd /root/k8s-ha-install

[root@k8s-master01 k8s-ha-install]# git remote -v
origin https://github.com/dotbalo/k8s-ha-install.git (fetch)
origin https://github.com/dotbalo/k8s-ha-install.git (push)

[root@k8s-master01 k8s-ha-install]# git checkout manual-installation-v1.21.x

[root@k8s-master01 k8s-ha-install]# git branch
* manual-installation-v1.21.x
master

[root@k8s-master01 k8s-ha-install]# kubectl create -f snapshotter/ -n kube-system

[root@k8s-master01 k8s-ha-install]# kubectl get pod -n kube-system -l app=snapshot-controller

Reference: https://rook.io/docs/rook/v1.6/ceph-csi-snapshot.html

4. Deploy the Ceph client tools

Once the Ceph cluster is deployed, it can be inspected and operated with the Ceph client tools. These can be installed with yum, or hosted on Kubernetes as done here.

[root@k8s-master01 rook]# cd cluster/examples/kubernetes/
[root@k8s-master01 kubernetes]# cd ceph/
[root@k8s-master01 ceph]# ls
ceph-client.yaml                  create-external-cluster-resources.py  filesystem.yaml                   object-multisite.yaml    pool.yaml
cluster-external-management.yaml  create-external-cluster-resources.sh  flex                              object-openshift.yaml    pre-k8s-1.16
cluster-external.yaml             csi                                   import-external-cluster.sh        object-test.yaml         rbdmirror.yaml
cluster-on-pvc.yaml               dashboard-external-https.yaml         monitoring                        object-user.yaml         rgw-external.yaml
cluster-stretched.yaml            dashboard-external-http.yaml          nfs-test.yaml                     object.yaml              scc.yaml
cluster-test.yaml                 dashboard-ingress-https.yaml          nfs.yaml                          operator-openshift.yaml  storageclass-bucket-delete.yaml
cluster.yaml                      dashboard-loadbalancer.yaml           object-bucket-claim-delete.yaml   operator.yaml            storageclass-bucket-retain.yaml
common-external.yaml              direct-mount.yaml                     object-bucket-claim-retain.yaml   operator.yaml-back       test-data
common-second-cluster.yaml        filesystem-ec.yaml                    object-ec.yaml                    osd-purge.yaml           toolbox-job.yaml
common.yaml                       filesystem-mirror.yaml                object-external.yaml              pool-ec.yaml             toolbox.yaml
crds.yaml                         filesystem-test.yaml                  object-multisite-pull-realm.yaml  pool-test.yaml
Create the toolbox client pod
[root@k8s-master01 ceph]# kubectl create -f toolbox.yaml -n rook-ceph
deployment.apps/rook-ceph-tools created
Check that the tools container started correctly

[root@k8s-master01 ceph]# kubectl get pod -n rook-ceph -l app=rook-ceph-tools
NAME READY STATUS RESTARTS AGE
rook-ceph-tools-fc5f9586c-mql8d 1/1 Running 0 66s

Enter the tools container

[root@k8s-master01 ceph]# kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- bash

Check whether the Ceph cluster status is healthy
[root@rook-ceph-tools-fc5f9586c-mql8d /]# ceph status
cluster:
id: d6b21555-bfc5-4aa8-a5bc-2e9ab0cfc3ec
health: HEALTH_WARN  # these warnings can be ignored for now and cleared later; this one means the mons are allowing clients to reclaim global_id insecurely
mons are allowing insecure global_id reclaim
Reduced data availability: 1 pg inactive
OSD count 0 < osd_pool_default_size 3

services:
mon: 3 daemons, quorum a,b,c (age 2h)
mgr: a(active, since 2h)
osd: 3 osds: 3 up (since 2m), 3 in (since 2m)

data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs: 1 active+clean

Check the status of the cluster's OSD disks

[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph osd status
ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE
0 k8s-master03 1030M 18.9G 0 0 0 0 exists,up
1 k8s-node01 1030M 18.9G 0 0 0 0 exists,up
2 k8s-node02 1030M 18.9G 0 0 0 0 exists,up


[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph df

 

5. Configure the Ceph Dashboard

PS: By default Ceph already creates a dashboard Service of type ClusterIP, which cannot be reached from hosts outside the Kubernetes nodes. Rather than changing that Service to NodePort directly (not recommended), create a new Service of type NodePort to expose the dashboard.

[root@k8s-master01 ceph]# vim dashboard-ceph.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  name: rook-ceph-mgr-dashboard-np
  namespace: rook-ceph
spec:
  ports:
  - name: http-dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
[root@k8s-master01 ceph]# kubectl create -f dashboard-ceph.yaml

View the secret in the rook-ceph namespace and decode it from base64 with a jsonpath query to get the password for logging in to the Ceph dashboard:

[root@k8s-master01 ceph]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
QW=@c}Xn2Ol4.14B}4Tu

6. Log in and verify

The dashboard can now be reached directly at any node IP plus the exposed NodePort (see below).
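The NodePort assigned to the new Service can be read from the Service itself (the port number will differ per cluster), and the default dashboard user is admin with the password decoded above:

# kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-np

The dashboard is then reachable at http://<any-node-IP>:<NodePort>.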

PS: After logging in, the cluster status still shows a warning, which can be cleared by following the official Ceph documentation.

Clear the current warning shown in the Ceph dashboard

[root@k8s-master01 ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph config set mon auth_allow_insecure_global_id_reclaim false

7. Using Ceph block storage

Block storage is typically mounted by a single Pod, i.e. one block device dedicated to one application.

Official documentation: https://rook.io/docs/rook/v1.6/ceph-block.html

7.1 Check the available CSI drivers

[root@k8s-master01 ceph]# kubectl get csidriver
NAME                            ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
rook-ceph.cephfs.csi.ceph.com   true             false            false             <unset>         false               Persistent   8h
rook-ceph.rbd.csi.ceph.com      true             false            false             <unset>         false               Persistent   8h

7.2 Create the StorageClass and the Ceph block pool

#vim csi/rbd/storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the Rook cluster is running
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
[root@k8s-master01 ceph]# kubectl create -f csi/rbd/storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

 

 

Check the block StorageClass

At this point the block pool is also visible in the Ceph dashboard; it can be verified from the command line as well (see below).
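The StorageClass should be visible to Kubernetes and the pool should exist in Ceph (the second command runs inside the toolbox pod):

# kubectl get storageclass rook-ceph-block
# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls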

 

 

7.3 Mount test

7.3.1 First create the PVC

# vim /opt/pvc/mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # PVC name
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block # name of the block StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@k8s-master01 kubernetes]# kubectl create -f /opt/pvc/mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created
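Check that the claim was bound by the rook-ceph-block StorageClass before using it:

# kubectl get pvc mysql-pv-claim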

7.3.2 Create the wordpress-mysql Service and Deployment

[root@k8s-master01 kubernetes]# vim mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage # name of the volume mounted by mysql
          persistentVolumeClaim:
            claimName: mysql-pv-claim # name of the PVC created above
[root@k8s-master01 kubernetes]# kubectl create -f mysql.yaml

At this point the mysql pod has the Ceph block volume mounted.
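A quick way to confirm the mount from inside the container (df should show an /dev/rbdX device mounted at /var/lib/mysql):

# kubectl get pod -l tier=mysql
# kubectl exec -it deploy/wordpress-mysql -- df -h /var/lib/mysql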

Extended example

For a StatefulSet, the StorageClass name can be given directly in "volumeClaimTemplates"; there is no need to create a PVC in advance, because one PVC is created and mounted automatically for each replica, as the example below shows.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block" # reference the block StorageClass directly
      resources:
        requests:
          storage: 1Gi
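After applying this StatefulSet, Kubernetes creates one PVC per replica, named after the claim template and the pod ordinal (www-web-0, www-web-1, www-web-2), and each is provisioned as a separate RBD image:

# kubectl get pvc | grep www-web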
8. Shared filesystem storage
Shared filesystem storage is normally used when multiple Pods need to share the same volume.
Official documentation: https://rook.io/docs/rook/v1.6/ceph-filesystem.html
A CephFilesystem is created from a CRD that specifies the desired metadata pool, data pools and metadata server settings. In this example a metadata pool and a single data pool, each with three replicas, are created. Details: https://rook.io/docs/rook/v1.6/ceph-filesystem-crd.html
8.1 Create the shared filesystem

[root@k8s-master01 ceph]# pwd
/root/rook/rook/cluster/examples/kubernetes/ceph

[root@k8s-master01 ceph]# egrep -v "#|^$" filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    # Settings for the metadata pool of the filesystem; replication must be used
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  dataPools:
    # Settings for the data pools. If multiple pools are specified, Rook adds all of them to the
    # filesystem and spreads users and files across the pools.
    - failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode: none
  # If "true", the filesystem is preserved when the CephFilesystem resource is deleted;
  # this is a safety measure against accidental data loss.
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              topologyKey: topology.kubernetes.io/zone
    annotations:
    labels:
    resources:
  mirroring:
    enabled: false
[root@k8s-master01 ceph]# kubectl create -f filesystem.yaml
[root@k8s-master01 ceph]# kubectl get pod  -n rook-ceph | grep mds
rook-ceph-mds-myfs-a-667cfddcd4-hntmk                    1/1     Running     0          4m3s
rook-ceph-mds-myfs-b-79d97d8686-l48s8                    1/1     Running     0          4m2s
The Ceph dashboard now shows that the CephFS pools have been created: myfs-data0 holds the file data and myfs-metadata holds the metadata, which differs from the block pool. They can also be checked from the toolbox (see below).
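The filesystem, its pools and the MDS state can be inspected from the toolbox pod:

# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs ls
# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph mds stat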

8.2 Create the StorageClass for the shared filesystem

[root@k8s-master01 kubernetes]# vim   ceph/csi/cephfs/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph # namespace:cluster
  # CephFS filesystem name into which the volume shall be created
  fsName: myfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
  # or by setting the default mounter explicitly via --volumemounter command-line argument.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug
[root@k8s-master01 kubernetes]# kubectl create -f ceph/csi/cephfs/storageclass.yaml
storageclass.storage.k8s.io/rook-cephfs created

 

8.3 Create the PVC

[root@k8s-master01 kubernetes]# vim cephfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs # name of the shared filesystem StorageClass
[root@k8s-master01 kubernetes]# kubectl create -f cephfs-pvc.yaml
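Confirm that the claim is Bound before mounting it (the StorageClass uses the default Immediate binding mode, as the later listing in section 9 shows):

# kubectl get pvc cephfs-pvc -n kube-system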

8.4 Mount test 1

Create a registry Deployment and mount the shared-filesystem PVC created above.


[root@k8s-master01 kubernetes]# vim  kube-registry.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false
[root@k8s-master01 kubernetes]# kubectl create -f  kube-registry.yaml
deployment.apps/kube-registry created

[root@k8s-master01 kubernetes]# kubectl get pod -n kube-system | grep registry
kube-registry-5d6d8877f7-5r29t 1/1 Running 0 6m30s
kube-registry-5d6d8877f7-fbgmv 1/1 Running 0 6m30s
kube-registry-5d6d8877f7-wpz89 1/1 Running 0 6m30s

8.5 Verify that the pods share data

All of the kube-registry pods mount the same shared filesystem, so a test file created in the mount directory of one pod will also appear in the other pods.

Create a 1.txt file in any one pod, then verify that the other pods can see it, as sketched below.
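A minimal test (the pod names below are taken from the output above; substitute the names reported by kubectl get pod in your environment):

# kubectl -n kube-system exec -it kube-registry-5d6d8877f7-5r29t -- touch /var/lib/registry/1.txt
# kubectl -n kube-system exec -it kube-registry-5d6d8877f7-fbgmv -- ls /var/lib/registry

If the second command lists 1.txt, the pods really are sharing the same CephFS volume.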

8.6 Mount test 2

Create a CephFS shared-storage PVC

#vim nginx-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-share-pvc
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create an nginx example and mount the PVC in it.

#vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc
[root@k8s-master01 kubernetes]# kubectl create -f nginx.yaml
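As in the registry example, all three nginx replicas mount the same CephFS volume. A quick sketch for seeding a test page through one pod and reading it from another (the later snapshot section assumes an index.html like this already exists; pod names come from the first command):

# kubectl get pod -l app=nginx
# kubectl exec deploy/web -- sh -c 'echo "The is ok" > /usr/share/nginx/html/index.html'
# kubectl exec -it <any-other-web-pod> -- cat /usr/share/nginx/html/index.html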

 

9. CephFS snapshots

This section demonstrates creating a snapshot of the shared filesystem storage.

9.1 Create the VolumeSnapshotClass (the CephFS snapshot class), pointing it at the rook-ceph namespace

[root@k8s-master01 kubernetes]# vim ceph/csi/cephfs/snapshotclass.yaml

---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass  # name of the CephFS snapshot class
driver: rook-ceph.cephfs.csi.ceph.com # the CSI driver to use, here the CephFS driver (list the available drivers with kubectl get csidriver)
parameters:
  clusterID: rook-ceph # namespace of the Rook cluster
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace of the snapshotter secret
deletionPolicy: Delete # deletion policy

[root@k8s-master01 kubernetes]# kubectl create -f  ceph/csi/cephfs/snapshotclass.yaml

After creating it, verify that the CephFS snapshot class exists.

kubectl get volumesnapshotclass lists the current snapshot classes, including the one just created.

9.2 Create a volume snapshot

 [root@k8s-master01 kubernetes]# vim ceph/csi/cephfs/snapshot.yaml

---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-pvc-snapshot  # name of the PVC snapshot
spec:
  volumeSnapshotClassName: csi-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: nginx-share-pvc # the source PVC to snapshot (list PVCs with kubectl get pvc)

[root@k8s-master01 kubernetes]# kubectl create -f  ceph/csi/cephfs/snapshot.yaml

Verify that the snapshot created from the CephFS PVC is complete.

Column reference:

NAME: snapshot name

READYTOUSE: snapshot status, true means it is ready to use

SOURCEPVC: source PVC (the PVC the snapshot was taken from)

SOURCESNAPSHOTCONTENT: source snapshot content

RESTORESIZE: size required to restore the snapshot

SNAPSHOTCLASS: name of the VolumeSnapshotClass

SNAPSHOTCONTENT: the bound VolumeSnapshotContent

CREATIONTIME: creation time

The snapshot can now also be seen in the Ceph dashboard.

How do you confirm the binding between the Ceph snapshot and the VolumeSnapshot?

Snapshot creation order: storageclass -----> PVC ----> volumesnapshotclass (snapshot class) ----> volumesnapshot (volume snapshot) ----> volumesnapshotcontent (snapshot content) ----> snapshotHandle (snapshot handle)

Once the snapshot is created, a volumesnapshotcontent object exists that records the PVC snapshot data; the snapshot handle can be read from it with the commands below.

#kubectl get storageclass rook-cephfs
NAME          PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   4d19h
#kubectl get pvc nginx-share-pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-share-pvc   Bound    pvc-d2f89295-6304-4ee9-b8f3-f70a6c9225ef   2Gi        RWX            rook-cephfs    4d5h
#kubectl get volumesnapshotclass
NAME                         DRIVER                          DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   rook-ceph.cephfs.csi.ceph.com   Delete           4d5h
rook-ceph-block              rook-ceph.rbd.csi.ceph.com      Delete           4d21h
rook-cephfs                  rook-ceph.cephfs.csi.ceph.com   Delete           4d19h
#kubectl get volumesnapshotclass csi-cephfsplugin-snapclass
NAME                         DRIVER                          DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   rook-ceph.cephfs.csi.ceph.com   Delete           4d5h

#kubectl get volumesnapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
cephfs-pvc-snapshot true nginx-share-pvc 2Gi csi-cephfsplugin-snapclass snapcontent-68f6da76-e63d-4ae1-a467-4d4ab961d1a6 4d4h 4d4h

#kubectl get volumesnapshotcontent snapcontent-68f6da76-e63d-4ae1-a467-4d4ab961d1a6 -o jsonpath="{['status'] ['snapshotHandle']}" && echo

The handle printed above corresponds to the snapshot shown in the Ceph dashboard.

  

9.3 Roll back PVC data from a snapshot

Manually delete the data inside one pod to simulate a data-loss scenario.

[root@k8s-master01 kubernetes]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   1667       70d
web-0                              1/1     Running   0          15h
web-1                              1/1     Running   0          15h
web-2                              1/1     Running   0          15h
web-7bf54cbc8d-qn6ql               1/1     Running   0          70m
web-7bf54cbc8d-spnsz               1/1     Running   0          70m
web-7bf54cbc8d-t5656               1/1     Running   0          70m
wordpress-7b989dbf57-kvzg4         1/1     Running   0          15h
wordpress-mysql-6965fc8cc8-k6tjj   1/1     Running   0          16h
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- ls /usr/share/nginx/html
index.html
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- cat /usr/share/nginx/html/index.html
The is ok
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- rm -f  /usr/share/nginx/html/index.html
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- ls /usr/share/nginx/html/
Data rollback
Rolling back simply means creating a new PVC whose data source is the VolumeSnapshot taken earlier, and then mounting a Pod on that new PVC.

Create a PVC from the snapshot


[root@k8s-master01 cephfs]# vim  pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-restore
spec:
  storageClassName: rook-cephfs
  dataSource:
    name: cephfs-pvc-snapshot # name of the snapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
[root@k8s-master01 cephfs]# kubectl create -f   pvc-restore.yaml
Create a pod and point it at the restored PVC
[root@k8s-master01 cephfs]# vim  pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: cephfs-pvc-restore
        readOnly: false
[root@k8s-master01 cephfs]# kubectl create -f pod.yaml

 

The new pod's mount directory already contains the snapshot data, which can now be copied back into the pod where the file was deleted, as shown below.

[root@k8s-master01 cephfs]# kubectl exec csicephfs-demo-pod -- tar cPf - /var/lib/www/html/index.html | sudo tar xf - -C  .
[root@k8s-master01 cephfs]# cat  var/lib/www/html/index.html
The is ok
[root@k8s-master01 cephfs]# kubectl cp var/lib/www/html/index.html  web-7bf54cbc8d-qn6ql:/usr/share/nginx/html/

[root@k8s-master01 cephfs]# kubectl exec -it web-7bf54cbc8d-qn6ql -- cat /usr/share/nginx/html/index.html
The is ok
[root@k8s-master01 cephfs]# kubectl exec -it web-7bf54cbc8d-t5656 -- cat /usr/share/nginx/html/index.html
The is ok
