Deploying a Ceph Cluster with Rook on an Existing Kubernetes Cluster

Preface:
This article walks through deploying a Ceph storage cluster with Rook on an existing Kubernetes cluster,
and then demonstrates block storage, shared file storage, and creating and restoring PVC snapshots.
Based on the official Rook documentation: https://rook.github.io/docs/rook/v1.6/
Before deploying, attach an extra storage device to three of the cluster nodes (here a new 20G raw disk was added to k8s-master03, k8s-node01, and k8s-node02).
Kubernetes version: 1.21.0, Rook version: v1.6
Node sizing: at least 5G of memory and 2 CPU cores
The clocks on all Kubernetes nodes must be kept in sync.

1. Pull the official Rook source
# git clone --single-branch --branch v1.6.11 https://github.com/rook/rook.git
# cd rook/cluster/examples/kubernetes/ceph
Create the RBAC permissions and the Rook CRDs that are used to manage and control the Ceph cluster:
# kubectl create -f crds.yaml -f common.yaml
Create the operator ConfigMap and Deployment (by default this starts the discover, operator, and related pods):
[root@k8s-master01 rook]# vim cluster/examples/kubernetes/ceph/operator.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  # should be in the namespace of the operator
  namespace: rook-ceph # namespace:operator
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"
  # whether to enable the Rook CSI RBD driver
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "false"
  # Set logging level for csi containers.
  # Supported values from 0 to 5; 0 for generally useful logs, 5 for trace-level verbosity
  CSI_LOG_LEVEL: "0"
  # whether to enable CephFS snapshots
  CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
  # whether to enable RBD (block storage) snapshots
  CSI_ENABLE_RBD_SNAPSHOTTER: "true"
  # use the CephFS kernel driver
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  CSI_RBD_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  CSI_CEPHFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"
  # (Optional) Allow starting unsupported ceph-csi image
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # default CSI images used by Rook (usually mirrored into an internal registry, since the
  # upstream images are hosted abroad and may fail to pull)
  ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar:v2.0.1"
  ROOK_CSI_RESIZER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer:v1.0.1"
  ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter:v4.0.0"
  ROOK_CSI_ATTACHER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher:v3.0.2"
  ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
  ROOK_ENABLE_FLEX_DRIVER: "false"
  # whether to run the discovery daemon that watches for storage devices and adds new disks to the cluster automatically
  ROOK_ENABLE_DISCOVERY_DAEMON: "true"
  # Enable volume replication controller
  CSI_ENABLE_VOLUME_REPLICATION: "false"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph # namespace:operator
  labels:
    operator: rook
    storage-backend: ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
        - name: rook-ceph-operator
          image: rook/ceph:v1.6.3
          args: ["ceph", "operator"]
          volumeMounts:
            - mountPath: /var/lib/rook
              name: rook-config
            - mountPath: /etc/ceph
              name: default-config-dir
          env:
            - name: ROOK_CURRENT_NAMESPACE_ONLY
              value: "false"
            - name: ROOK_LOG_LEVEL
              value: "INFO"
            # The duration between discovering devices in the rook-discover daemonset.
            - name: ROOK_DISCOVER_DEVICES_INTERVAL
              value: "60m"
            - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
              value: "false"
            - name: ROOK_ENABLE_SELINUX_RELABELING
              value: "true"
            - name: ROOK_ENABLE_FSGROUP
              value: "true"
            # Disable automatic orchestration when new devices are discovered
            - name: ROOK_DISABLE_DEVICE_HOTPLUG
              value: "false"
            - name: DISCOVER_DAEMON_UDEV_BLACKLIST
              value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
            - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
              value: "5"
            # The name of the node to pass with the downward API
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The pod name to pass with the downward API
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # The pod namespace to pass with the downward API
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      #hostNetwork: true
      volumes:
        - name: rook-config
          emptyDir: {}
        - name: default-config-dir
          emptyDir: {}
[root@k8s-master01 rook]# kubectl create -f  cluster/examples/kubernetes/ceph/operator.yaml
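Optionally, confirm that the operator and the discovery daemon pods have come up before continuing (pod names and ages will differ in your environment):
# kubectl -n rook-ceph get pod -l app=rook-ceph-operator
# kubectl -n rook-ceph get pod | grep rook-discover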

2. Create the Ceph cluster (note: only start this step after operator.yaml, crds.yaml, and common.yaml from the previous step have all been created)

[root@k8s-master01 rook]# vim  cluster/examples/kubernetes/ceph/cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  cephVersion:
    image: ceph/ceph:v15.2.11
    allowUnsupported: false
  # host path where the Ceph cluster data is stored
  dataDirHostPath: /var/lib/rook
  # whether to skip the upgrade checks if a Ceph cluster upgrade fails
  skipUpgradeChecks: true
  # Whether or not continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # give up waiting for healthy OSDs after 10 minutes during an upgrade
  waitTimeoutForHealthyOSDInMinutes: 10
  mon:
    # number of mon pods; use an odd number, 3 is recommended
    count: 3
    # The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
    # Mons should only be allowed on the same node for test environments where data loss is acceptable.
    # whether multiple mon replicas may be scheduled on the same node
    allowMultiplePerNode: false
  mgr:
    # number of mgr replicas; increase the count if more mgr capacity is needed
    count: 1
    modules:
      # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
      # are already enabled by other settings in the cluster CR.
      - name: pg_autoscaler
        enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    ssl: false
  # enable prometheus alerting for cluster
  monitoring:
    # requires Prometheus to be pre-installed
    enabled: false
    # namespace that the Prometheus rules are deployed into
    rulesNamespace: rook-ceph
  network:
  crashCollector:
    disable: false
  cleanupPolicy:
    confirmation: ""
    # sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
    allowUninstallWithVolumes: false
  annotations:
  labels:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
    # OSD nodes; "name" must be the value of the node's kubernetes.io/hostname label, not an IP address
    nodes:
      - name: "k8s-master03"
        devices: # specific devices to use for storage can be specified for each node
          - name: "vdb1"
      - name: "k8s-node01"
        devices:
          - name: "vdb1"
      - name: "k8s-node02"
        devices:
          - name: "vdb1"
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
    pgHealthCheckTimeout: 0
    manageMachineDisruptionBudgets: false
    # Namespace in which to watch for the MachineDisruptionBudgets.
    machineDisruptionBudgetNamespace: openshift-machine-api
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe, it works for all mon,mgr,osd daemons
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
[root@k8s-master01 rook]# kubectl create -f cluster/examples/kubernetes/ceph/cluster.yaml
[root@k8s-master01 ~]# kubectl get cephcluster -n rook-ceph
NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE         MESSAGE                 HEALTH        EXTERNAL
rook-ceph   /var/lib/rook     3          95m   Progressing   Configuring Ceph OSDs   HEALTH_WARN
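The cluster takes several minutes to converge. One way to follow the progress (an optional check) is to watch the pods until the osd-prepare jobs complete and the three rook-ceph-osd pods are Running, and then re-check the CephCluster status:
# kubectl -n rook-ceph get pod -w
# kubectl get cephcluster -n rook-ceph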
 

3. Install the Ceph snapshot controller

The snapshot controller is what drives the PVC snapshot feature; switch the repository to the matching branch first.

[root@k8s-master01 ~]# cd /root/k8s-ha-install

[root@k8s-master01 k8s-ha-install]# git remote -v
origin https://github.com/dotbalo/k8s-ha-install.git (fetch)
origin https://github.com/dotbalo/k8s-ha-install.git (push)

[root@k8s-master01 k8s-ha-install]# git checkout manual-installation-v1.21.x

[root@k8s-master01 k8s-ha-install]# git branch
* manual-installation-v1.21.x
master

[root@k8s-master01 k8s-ha-install]# kubectl create -f snapshotter/ -n kube-system

[root@k8s-master01 k8s-ha-install]# kubectl get pod -n kube-system -l app=snapshot-controller

Reference: https://rook.io/docs/rook/v1.6/ceph-csi-snapshot.html

4. Deploy the Ceph client tools

Once the Ceph cluster is up, its command-line tools can be used to inspect and operate it; they can be installed with yum, or hosted on Kubernetes as a toolbox pod.

[root@k8s-master01 rook]# cd cluster/examples/kubernetes/
[root@k8s-master01 kubernetes]# cd ceph/
[root@k8s-master01 ceph]# ls
ceph-client.yaml                  create-external-cluster-resources.py  filesystem.yaml                   object-multisite.yaml    pool.yaml
cluster-external-management.yaml  create-external-cluster-resources.sh  flex                              object-openshift.yaml    pre-k8s-1.16
cluster-external.yaml             csi                                   import-external-cluster.sh        object-test.yaml         rbdmirror.yaml
cluster-on-pvc.yaml               dashboard-external-https.yaml         monitoring                        object-user.yaml         rgw-external.yaml
cluster-stretched.yaml            dashboard-external-http.yaml          nfs-test.yaml                     object.yaml              scc.yaml
cluster-test.yaml                 dashboard-ingress-https.yaml          nfs.yaml                          operator-openshift.yaml  storageclass-bucket-delete.yaml
cluster.yaml                      dashboard-loadbalancer.yaml           object-bucket-claim-delete.yaml   operator.yaml            storageclass-bucket-retain.yaml
common-external.yaml              direct-mount.yaml                     object-bucket-claim-retain.yaml   operator.yaml-back       test-data
common-second-cluster.yaml        filesystem-ec.yaml                    object-ec.yaml                    osd-purge.yaml           toolbox-job.yaml
common.yaml                       filesystem-mirror.yaml                object-external.yaml              pool-ec.yaml             toolbox.yaml
crds.yaml                         filesystem-test.yaml                  object-multisite-pull-realm.yaml  pool-test.yaml
Create the toolbox client pod
[root@k8s-master01 ceph]# kubectl create -f toolbox.yaml -n rook-ceph
deployment.apps/rook-ceph-tools created
Check that the tools container started correctly

[root@k8s-master01 ceph]# kubectl get pod -n rook-ceph -l app=rook-ceph-tools
NAME READY STATUS RESTARTS AGE
rook-ceph-tools-fc5f9586c-mql8d 1/1 Running 0 66s

Exec into the tools container

[root@k8s-master01 ceph]# kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- bash

Check that the Ceph cluster status is healthy
[root@rook-ceph-tools-fc5f9586c-mql8d /]# ceph status
cluster:
id: d6b21555-bfc5-4aa8-a5bc-2e9ab0cfc3ec
health: HEALTH_WARN  # this warning can be ignored for now and is cleared later; it means the mons allow insecure global_id reclaim
mons are allowing insecure global_id reclaim
Reduced data availability: 1 pg inactive
OSD count 0 < osd_pool_default_size 3

services:
mon: 3 daemons, quorum a,b,c (age 2h)
mgr: a(active, since 2h)
osd: 3 osds: 3 up (since 2m), 3 in (since 2m)

data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs: 1 active+clean

Check the OSD status of the Ceph cluster

[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph osd status
ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE
0 k8s-master03 1030M 18.9G 0 0 0 0 exists,up
1 k8s-node01 1030M 18.9G 0 0 0 0 exists,up
2 k8s-node02 1030M 18.9G 0 0 0 0 exists,up


[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph df

 

5. Configure the Ceph Dashboard

Note: Rook already creates a Service for the dashboard, but it is of type ClusterIP and cannot be reached from outside the Kubernetes nodes. Rather than changing that Service to NodePort directly (not recommended), create a new Service of type NodePort to expose the dashboard.

[root@k8s-master01 ceph]# vim dashboard-ceph.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  name: rook-ceph-mgr-dashboard-np
  namespace: rook-ceph
spec:
  ports:
  - name: http-dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    ceph_daemon_id: a
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
[root@k8s-master01 ceph]# kubectl create -f dashboard-ceph.yaml
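The NodePort assigned to the new Service can be read back as follows (the port is allocated randomly from the 30000-32767 range):
# kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-np
# kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-np -o jsonpath='{.spec.ports[0].nodePort}' && echo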

Read the dashboard password from the secret in the rook-ceph namespace: extract it with a JSONPath query and base64-decode it to get the password for logging in to the Ceph dashboard

[root@k8s-master01 ceph]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
QW=@c}Xn2Ol4.14B}4Tu

6. Log in and verify

Browse to any node IP plus the exposed NodePort to reach the Ceph dashboard.

Note: after logging in, the cluster status shows a warning, which can be cleared by following the Ceph documentation.

Clear the current dashboard warning

[root@k8s-master01 ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-fc5f9586c-f6wgr /]# ceph config set mon auth_allow_insecure_global_id_reclaim false

7. Using Ceph block storage

Block storage is normally mounted by a single pod, i.e. one volume dedicated to one application

Official documentation: https://rook.io/docs/rook/v1.6/ceph-block.html

7.1 List the CSI drivers

[root@k8s-master01 ceph]# kubectl get csidriver
NAME                            ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
rook-ceph.cephfs.csi.ceph.com   true             false            false             <unset>         false               Persistent   8h
rook-ceph.rbd.csi.ceph.com      true             false            false             <unset>         false               Persistent   8h

7.2 Create the StorageClass and the Ceph block pool

#vim csi/rbd/storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the cluster is deployed
  clusterID: rook-ceph # namespace:cluster
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
[root@k8s-master01 ceph]# kubectl create -f csi/rbd/storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
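Both objects can be verified right away (an optional check):
# kubectl get storageclass rook-ceph-block
# kubectl -n rook-ceph get cephblockpool replicapool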

 

 

The block StorageClass is now available, and the replicapool block pool is visible in the Ceph dashboard

7.3 Mount test

7.3.1 Create the PVC

# vim /opt/pvc/mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim # PVC name
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block # name of the block StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@k8s-master01 kubernetes]# kubectl create -f /opt/pvc/mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created
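The PVC should become Bound within a few seconds; if it stays Pending, describing it shows the provisioning events (an optional check):
# kubectl get pvc mysql-pv-claim
# kubectl describe pvc mysql-pv-claim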

7.3.2 Create the wordpress-mysql Service and Deployment

[root@k8s-master01 kubernetes]# vim mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage # name of the volume mounted by mysql
          persistentVolumeClaim:
            claimName: mysql-pv-claim # name of the PVC created above
[root@k8s-master01 kubernetes]# kubectl create -f mysql.yaml

The mysql pod now has the Ceph block volume mounted
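One way to see the mounted volume from inside the container (an illustrative check; the deployment name comes from mysql.yaml above):
# kubectl exec deploy/wordpress-mysql -- df -h /var/lib/mysql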

Extended example

For a StatefulSet, the volumeClaimTemplates field can reference the StorageClass directly; there is no need to create a PVC by hand, because one PVC is created and mounted automatically for each replica (the resulting PVCs can be listed as shown after the manifest)

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block" #直接指定storageclass 块存储名称
      resources:
        requests:
          storage: 1Gi
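After applying a manifest like this, each replica gets its own PVC named after the volumeClaimTemplate and the pod ordinal, i.e. www-web-0, www-web-1 and www-web-2, which can be listed with:
# kubectl get pvc | grep www-web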
8. Shared filesystem storage
Shared file storage is typically used when several pods need to mount the same volume
Official documentation: https://rook.io/docs/rook/v1.6/ceph-filesystem.html
A CephFilesystem is created by specifying the desired metadata pool, data pools, and metadata server in the CRD. In this example a metadata pool and a single data pool are created, each with three replicas. CRD reference: https://rook.io/docs/rook/v1.6/ceph-filesystem-crd.html
8.1 Create the shared filesystem

[root@k8s-master01 ceph]# pwd
/root/rook/rook/cluster/examples/kubernetes/ceph

[root@k8s-master01 ceph]# egrep -v "#|^$" filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
spec:
  metadataPool:
    # settings for the filesystem metadata pool; it must be replicated
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none
  dataPools:
    # settings for the filesystem data pools; if several pools are listed, Rook adds them all
    # to the filesystem and users/files are mapped onto them
    - failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode: none
  # if "true", the filesystem is preserved when the CephFilesystem resource is deleted;
  # a safety measure against accidentally deleting data
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              topologyKey: topology.kubernetes.io/zone
    annotations:
    labels:
    resources:
  mirroring:
    enabled: false
[root@k8s-master01 ceph]# kubectl create -f filesystem.yaml
[root@k8s-master01 ceph]# kubectl get pod  -n rook-ceph | grep mds
rook-ceph-mds-myfs-a-667cfddcd4-hntmk                    1/1     Running     0          4m3s
rook-ceph-mds-myfs-b-79d97d8686-l48s8                    1/1     Running     0          4m2s
In the Ceph dashboard you can now see that the CephFS pools have been created: myfs-data0 holds the file data and myfs-metadata holds the filesystem metadata, unlike the single pool used for block storage
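The filesystem and its pools can also be verified from the toolbox pod deployed earlier (an optional check):
# kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls
# kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls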

8.2 Create the StorageClass for the shared filesystem

[root@k8s-master01 kubernetes]# vim   ceph/csi/cephfs/storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph # namespace:cluster
  # CephFS filesystem name into which the volume shall be created
  fsName: myfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
  # or by setting the default mounter explicitly via --volumemounter command-line argument.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug
[root@k8s-master01 kubernetes]# kubectl create -f ceph/csi/cephfs/storageclass.yaml
storageclass.storage.k8s.io/rook-cephfs created

 

8.3 Create the PVC

[root@k8s-master01 kubernetes]# vim cephfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs # name of the shared-filesystem StorageClass
[root@k8s-master01 kubernetes]# kubectl create -f cephfs-pvc.yaml
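Since this PVC lives in kube-system, check its binding there (optional):
# kubectl -n kube-system get pvc cephfs-pvc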

8.4 Mount test 1

Create a registry Deployment and mount it on the shared-filesystem PVC created above


[root@k8s-master01 kubernetes]# vim  kube-registry.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false
[root@k8s-master01 kubernetes]# kubectl create -f  kube-registry.yaml
deployment.apps/kube-registry created

[root@k8s-master01 kubernetes]# kubectl get pod -n kube-system | grep registry
kube-registry-5d6d8877f7-5r29t 1/1 Running 0 6m30s
kube-registry-5d6d8877f7-fbgmv 1/1 Running 0 6m30s
kube-registry-5d6d8877f7-wpz89 1/1 Running 0 6m30s

8.5 Verify that the pods share data

All of the kube-registry pods mount the same shared filesystem, so a test file created in the mount directory of one pod shows up in the others as well, as shown below

Create a file named 1.txt in any one of the pods, then check that the other pods can see it
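A minimal way to run this check from the command line, using the pod names listed above (your pod names will differ):
# kubectl -n kube-system exec kube-registry-5d6d8877f7-5r29t -- touch /var/lib/registry/1.txt
# kubectl -n kube-system exec kube-registry-5d6d8877f7-fbgmv -- ls /var/lib/registry
# kubectl -n kube-system exec kube-registry-5d6d8877f7-wpz89 -- ls /var/lib/registry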

8.6 Mount test 2

Create another CephFS shared-storage PVC

#vim nginx-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-share-pvc
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create an Nginx example and mount it on the PVC

#vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc
[root@k8s-master01 kubernetes]# kubectl create -f nginx.yaml
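The snapshot section below assumes the shared volume already contains an index.html with the text "The is ok". One way to create it (an illustrative command, not from the original steps) is to write it through any of the nginx replicas via the Deployment:
# kubectl exec deploy/web -- bash -c 'echo "The is ok" > /usr/share/nginx/html/index.html'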

 

9. CephFS snapshots

This section demonstrates taking a snapshot of the shared-filesystem storage

9.1 Create a VolumeSnapshotClass (the CephFS snapshot class), pointing it at the rook-ceph namespace

[root@k8s-master01 kubernetes]# vim ceph/csi/cephfs/snapshotclass.yaml

---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass  # name of the CephFS snapshot class
driver: rook-ceph.cephfs.csi.ceph.com # the CSI driver to use, here the CephFS driver (kubectl get csidriver lists the available drivers)
parameters:
  clusterID: rook-ceph # the rook-ceph namespace
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace of the snapshotter secret
deletionPolicy: Delete # deletion policy

[root@k8s-master01 kubernetes]# kubectl create -f  ceph/csi/cephfs/snapshotclass.yaml

After creating it, verify that the CephFS snapshot class exists

kubectl get volumesnapshotclass lists the current VolumeSnapshotClasses, including the one just created

9.2 Create a volume snapshot

 [root@k8s-master01 kubernetes]# vim ceph/csi/cephfs/snapshot.yaml

---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-pvc-snapshot  # name of the PVC snapshot
spec:
  volumeSnapshotClassName: csi-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: nginx-share-pvc # the source PVC to snapshot (kubectl get pvc shows the available PVCs)

[root@k8s-master01 kubernetes]# kubectl create -f  ceph/csi/cephfs/snapshot.yaml

Verify that the snapshot created from the CephFS PVC is ready
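For example, with the command whose columns are explained in the field reference below:
# kubectl get volumesnapshot cephfs-pvc-snapshot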

Field reference:

NAME: snapshot name

READYTOUSE: snapshot state; true means the snapshot is ready to use

SOURCEPVC: the source PVC the snapshot was taken from

SOURCESNAPSHOTCONTENT: the source snapshot content

RESTORESIZE: minimum size required to restore the snapshot

SNAPSHOTCLASS: name of the VolumeSnapshotClass

SNAPSHOTCONTENT: the bound VolumeSnapshotContent object

CREATIONTIME: creation time

The snapshot can also be seen in the Ceph dashboard

How do you confirm which Ceph snapshot a VolumeSnapshot is bound to?

Object chain for a snapshot: storageclass -> PVC -> volumesnapshotclass (snapshot class) -> volumesnapshot (volume snapshot) -> volumesnapshotcontent (snapshot content) -> snapshotHandle (snapshot handle)

After the snapshot is created there is a volumesnapshotcontent object that records the snapshot of the PVC; the commands below walk that chain and print the snapshot handle

#kubectl get storageclass rook-cephfs
NAME          PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   4d19h
#kubectl get pvc nginx-share-pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-share-pvc   Bound    pvc-d2f89295-6304-4ee9-b8f3-f70a6c9225ef   2Gi        RWX            rook-cephfs    4d5h
#kubectl get volumesnapshotclass
NAME                         DRIVER                          DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   rook-ceph.cephfs.csi.ceph.com   Delete           4d5h
rook-ceph-block              rook-ceph.rbd.csi.ceph.com      Delete           4d21h
rook-cephfs                  rook-ceph.cephfs.csi.ceph.com   Delete           4d19h
#kubectl get volumesnapshotclass csi-cephfsplugin-snapclass
NAME                         DRIVER                          DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   rook-ceph.cephfs.csi.ceph.com   Delete           4d5h

#kubectl get volumesnapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
cephfs-pvc-snapshot true nginx-share-pvc 2Gi csi-cephfsplugin-snapclass snapcontent-68f6da76-e63d-4ae1-a467-4d4ab961d1a6 4d4h 4d4h

#kubectl get volumesnapshotcontent snapcontent-68f6da76-e63d-4ae1-a467-4d4ab961d1a6 -o jsonpath="{['status']['snapshotHandle']}" && echo

The snapshot handle matches the snapshot shown in the Ceph dashboard

  

9.3 Restoring PVC data from a snapshot

Manually delete data from one of the pods to simulate data loss

[root@k8s-master01 kubernetes]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   1667       70d
web-0                              1/1     Running   0          15h
web-1                              1/1     Running   0          15h
web-2                              1/1     Running   0          15h
web-7bf54cbc8d-qn6ql               1/1     Running   0          70m
web-7bf54cbc8d-spnsz               1/1     Running   0          70m
web-7bf54cbc8d-t5656               1/1     Running   0          70m
wordpress-7b989dbf57-kvzg4         1/1     Running   0          15h
wordpress-mysql-6965fc8cc8-k6tjj   1/1     Running   0          16h
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- ls /usr/share/nginx/html
index.html
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- cat /usr/share/nginx/html/index.html
The is ok
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- rm -f  /usr/share/nginx/html/index.html
[root@k8s-master01 kubernetes]# kubectl exec -it web-7bf54cbc8d-qn6ql -- ls /usr/share/nginx/html/
Data restore
Restoring data simply means creating a new PVC whose dataSource points at the VolumeSnapshot taken earlier, and then mounting a pod on the new PVC

Create a PVC from the snapshot


[root@k8s-master01 cephfs]# vim  pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-restore
spec:
  storageClassName: rook-cephfs
  dataSource:
    name: cephfs-pvc-snapshot # name of the snapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
[root@k8s-master01 cephfs]# kubectl create -f   pvc-restore.yaml
Create a pod and mount the restored PVC
[root@k8s-master01 cephfs]# vim  pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csicephfs-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: cephfs-pvc-restore
        readOnly: false
[root@k8s-master01 cephfs]# kubectl create -f pod.yaml

 

The new pod's mount directory already contains the snapshot data; it can now be copied back into the pod where the file was deleted, as shown below

[root@k8s-master01 cephfs]# kubectl exec csicephfs-demo-pod -- tar cPf - /var/lib/www/html/index.html | sudo tar xf - -C  .
[root@k8s-master01 cephfs]# cat  var/lib/www/html/index.html
The is ok
[root@k8s-master01 cephfs]# kubectl cp var/lib/www/html/index.html  web-7bf54cbc8d-qn6ql:/usr/share/nginx/html/

[root@k8s-master01 cephfs]# kubectl exec -it web-7bf54cbc8d-qn6ql -- cat /usr/share/nginx/html/index.html
The is ok
[root@k8s-master01 cephfs]# kubectl exec -it web-7bf54cbc8d-t5656 -- cat /usr/share/nginx/html/index.html
The is ok
