K8S Practice VI (Storage Volumes)

I. EmptyDir

With emptyDir, the volume is created when the Pod is assigned to a Node and exists for as long as the Pod keeps running on that Node. When the Pod is removed from the Node, the emptyDir is deleted along with it and the stored data is lost permanently.

1. Create an example

# cat pod-emptydir.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: my-demo
  namespace: default
  labels:
    name: myapp
    tier: appfront
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: empty1
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: empty1
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $(date) >> /data/index.html; sleep 10; done"
  volumes:
  - name: empty1
    emptyDir: {}

This example creates two containers. One of them appends the date to index.html; we then verify that requesting the nginx page returns those dates, confirming that the two containers share data through the mounted emptyDir.

2. Verify

# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
my-demo   2/2     Running   0          2m    10.42.0.1   k8s-5   <none>           <none>
# curl 10.42.0.1
Tue Jul 2 16:49:39 UTC 2019
Tue Jul 2 16:49:49 UTC 2019
Tue Jul 2 16:49:59 UTC 2019
Tue Jul 2 16:50:09 UTC 2019
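
To confirm that emptyDir data does not outlive the Pod, the Pod can be deleted and recreated; the new page will then contain only timestamps written after the re-creation. A minimal sketch of that check (the new Pod IP has to be looked up again):

# kubectl delete -f pod-emptydir.yaml
# kubectl apply -f pod-emptydir.yaml
# kubectl get pod my-demo -o wide      # note the new Pod IP
# curl <new-pod-ip>                    # only timestamps written after re-creation appear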

II. HostPath

hostPath mounts a file or directory from the Node's filesystem into the Pod. If a Pod needs to use files on the Node, hostPath can be used; when the Pod is deleted, the stored data is not lost.

1. Create an example

# cat pod-hostpath.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: volume2
      mountPath: /usr/share/nginx/html
  volumes:
    - name: volume2
      hostPath:
        path: /data/pod/volume2
        type: DirectoryOrCreate

type: DirectoryOrCreate creates the directory if it does not already exist. Other types include Directory, FileOrCreate, File, BlockDevice, and so on.
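
For instance, a hostPath volume of type FileOrCreate mounts a single file rather than a directory; a minimal sketch (the volume name and path are illustrative):

  volumes:
    - name: logfile
      hostPath:
        path: /data/pod/app.log
        type: FileOrCreate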

# kubectl get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP          NODE    NOMINATED NODE   READINESS GATES
pod-hostpath   1/1     Running   0          5m49s   10.40.0.1   k8s-4   <none>           <none>

2. Switch to the node where the Pod is running and write a file into the local directory

[root@K8S-4 ~]# touch /data/pod/volume2/index.html
[root@K8S-4 ~]# echo $(date) >> /data/pod/volume2/index.html

3. Verify the data

# curl 10.40.0.1
Wed Jul 3 01:10:34 CST 2019

4. Delete the Pod and check whether the data is still there

# kubectl delete -f pod-hostpath.yaml 
pod "pod-hostpath" deleted
[root@K8S-4 ~]# cat /data/pod/volume2/index.html
Wed Jul 3 01:10:34 CST 2019

III. NFS

By mounting NFS into a Pod, data written to NFS is stored permanently, and NFS supports concurrent writes. When the Pod is deleted, the content is not removed; the volume is simply unmounted. This means data can be prepared on NFS in advance and passed between Pods, and the same NFS export can be mounted read-write by multiple Pods at the same time (a multi-replica sketch is given at the end of this section).

1. Install and configure NFS on any node

[root@K8S-5 ~]# yum install -y nfs-utils
[root@K8S-5 ~]# mkdir -p /data/nfs
[root@K8S-5 ~]# vim /etc/exports
/data/nfs 20.0.20.0/24(rw,no_root_squash)
[root@K8S-5 ~]# systemctl start nfs
[root@K8S-5 ~]# showmount -e
Export list for K8S-5:
/data/nfs 20.0.20.0/24

2. Test the mount from another node

[root@K8S-4 ~]#  yum install -y nfs-utils
[root@K8S-4 ~]# mount -t nfs K8S-5:/data/nfs /mnt
[root@K8S-4 ~]# df -h
...
K8S-5:/data/nfs           50G  2.3G   48G   5% /mnt

3. Create the example

# cat pod-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: nfs
      mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs
      nfs:
        path: /data/nfs
        server: K8S-5
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod-nfs   1/1     Running   0          8s    10.42.0.1   k8s-5   <none>           <none>

Note: nfs-utils must be installed on the node where the Pod is scheduled, otherwise the mount will fail.

4. Create a test file on the NFS server and test

[root@K8S-5 ~]# echo $(date) >> /data/nfs/index.html
# curl 10.42.0.1
Wed Jul 3 01:52:11 CST 2019
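
Because NFS allows concurrent mounts, the same export can back several Pod replicas at once, as mentioned above. A minimal sketch using a Deployment (the Deployment name and labels are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-nfs
  template:
    metadata:
      labels:
        app: myapp-nfs
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: nfs
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs
        nfs:
          path: /data/nfs
          server: K8S-5

All replicas then serve the same index.html from the NFS export.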

IV. PV & PVC

  • A PersistentVolume (PV) is a piece of cluster storage provisioned through a volume plugin. It is a cluster-level resource whose lifecycle is independent of any Pod (it is not deleted when a Pod is deleted), and it does not belong to a Namespace.
  • A PersistentVolumeClaim (PVC) is a user's request for storage. A PVC consumes PV resources and can request a specific size and access modes. It must belong to a Namespace, and only Pods in the same Namespace can reference it.

Create an example that uses a PV and PVC backed by NFS:

1. Configure the NFS storage

[root@K8S-5 nfs]# mkdir v{1,2,3}
[root@K8S-5 nfs]# ls
v1  v2  v3
[root@K8S-5 nfs]# vim /etc/exports
/data/nfs/v1 20.0.20.0/24(rw,no_root_squash)
/data/nfs/v2 20.0.20.0/24(rw,no_root_squash)
/data/nfs/v3 20.0.20.0/24(rw,no_root_squash)
[root@K8S-5 nfs]# exportfs -arv
exporting 20.0.20.0/24:/data/nfs/v3
exporting 20.0.20.0/24:/data/nfs/v2
exporting 20.0.20.0/24:/data/nfs/v1
[root@K8S-5 nfs]# showmount -e
Export list for K8S-5:
/data/nfs/v3 20.0.20.0/24
/data/nfs/v2 20.0.20.0/24
/data/nfs/v1 20.0.20.0/24

2. Define the PVs

# cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/nfs/v1
    server: K8S-5
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/nfs/v2
    server: K8S-5
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/nfs/v3
    server: K8S-5
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 3Gi
# kubectl apply -f pv-nfs.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   19s
pv002   2Gi        RWO            Retain           Available                                   19s
pv003   3Gi        RWO,RWX        Retain           Available                                   19s
  • Access modes:
    ReadWriteOnce – the volume can be mounted read-write by a single node
    ReadOnlyMany – the volume can be mounted read-only by many nodes
    ReadWriteMany – the volume can be mounted read-write by many nodes
  • On the command line the access modes are abbreviated as:
    RWO - ReadWriteOnce
    ROX - ReadOnlyMany
    RWX - ReadWriteMany
  • A volume can only be mounted with one access mode at a time, even if it supports several.

  • A PV's reclaim policy determines what happens to the volume after its PVC is released; the current policies are Retain, Recycle, and Delete.
  • To change the reclaim policy of an existing PV, run the following command (it can also be set in the PV spec, as sketched below):
    kubectl patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
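
The policy can also be set declaratively when defining the PV, via spec.persistentVolumeReclaimPolicy; a minimal sketch (pv004 and its path are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
spec:
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/nfs/v1
    server: K8S-5
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi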

3. Define the PVC

# cat pod-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: nfsvolume
      mountPath: /usr/share/nginx/html
  volumes:
    - name: nfsvolume
      persistentVolumeClaim:
        claimName: mypvc
# kubectl apply -f pod-pvc.yaml 
persistentvolumeclaim/mypvc created
pod/pod-pvc created
# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                           6m34s
pv002   2Gi        RWO            Retain           Available                                           6m34s
pv003   3Gi        RWO,RWX        Retain           Bound       default/mypvc                           6m34s
# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv003    3Gi        RWO,RWX                       14s
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod-pvc   1/1     Running   0          4s    10.42.0.1   k8s-5   <none>           <none>

4. Access test

Create index.html on the NFS server and write some data into it

[root@K8S-5 nfs]# echo $(date) >> /data/nfs/v3/index.html
# curl 10.42.0.1
Wed Jul 3 18:42:33 CST 2019

V. StorageClass

A StorageClass is an abstraction over storage resources. It hides the backend details from the PVCs that users create, reduces the manual work of managing PVs, and lets the system create and bind PVs automatically, i.e. dynamic provisioning.
For example, suppose a 1 TB pool in the storage system is made available to Kubernetes. When a user requests a 10 Gi PVC, a request is immediately sent to the storage backend over its REST API to create a 10 Gi image, which is then defined in the cluster as a 10 Gi PV and bound to that PVC for mounting.

Example configuration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
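
A PVC then triggers dynamic provisioning simply by referencing the class by name; a rough sketch (assuming the "standard" class above; the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi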

VI. Configure a GlusterFS dynamic provisioning example

Reference: gluster-kubernetes (https://github.com/gluster/gluster-kubernetes)

1. Environment preparation
① Node information

# kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-1   Ready    master   34d   v1.14.2   20.0.20.101   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-2   Ready    <none>   33d   v1.14.2   20.0.20.102   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-3   Ready    <none>   33d   v1.14.2   20.0.20.103   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-4   Ready    <none>   15h   v1.14.2   20.0.20.104   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-5   Ready    <none>   15h   v1.14.2   20.0.20.105   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6

② Set labels

Nodes k8s-3, k8s-4, and k8s-5 will serve as the three GlusterFS nodes. Label these three nodes so that the GlusterFS pods are scheduled onto them.

[root@K8S-1 ~]# kubectl label node k8s-3 storagenode=glusterfs
node/k8s-3 labeled
[root@K8S-1 ~]# kubectl label node k8s-4 storagenode=glusterfs
node/k8s-4 labeled
[root@K8S-1 ~]# kubectl label node k8s-5 storagenode=glusterfs
node/k8s-5 labeled
[root@K8S-1 ~]# kubectl get node -L storagenode
NAME    STATUS   ROLES    AGE   VERSION   STORAGENODE
k8s-1   Ready    master   34d   v1.14.2   
k8s-2   Ready    <none>   33d   v1.14.2   
k8s-3   Ready    <none>   33d   v1.14.2   glusterfs
k8s-4   Ready    <none>   16h   v1.14.2   glusterfs
k8s-5   Ready    <none>   16h   v1.14.2   glusterfs

③ Install the GlusterFS client

To use or mount GlusterFS normally in the Kubernetes cluster, the relevant cluster nodes must have glusterfs-fuse installed.

# yum install -y glusterfs glusterfs-fuse

④ Create a simulated disk

Managing GlusterFS with Heketi requires a blank disk on each of the corresponding nodes; here a loop device is used to simulate one. Note that the loop device may be detached after the operating system reboots, leaving GlusterFS unable to work properly.

[root@K8S-2 ~]# mkdir -p /home/glusterfs
[root@K8S-2 ~]# cd /home/glusterfs/
[root@K8S-2 glusterfs]# dd if=/dev/zero of=gluster.disk bs=1024 count=$(( 1024 * 1024 * 20 ))
20971520+0 records in
20971520+0 records out
21474836480 bytes (21 GB) copied, 1264.7 s, 17.0 MB/s
[root@K8S-2 glusterfs]# losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /home/glusterfs/gluster.disk
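
Because the loop device disappears after a reboot, it has to be re-attached before GlusterFS can use the disk again. A sketch of the re-attach, and one way to run it at boot on CentOS 7 (assuming rc.local is enabled on the node):

[root@K8S-2 glusterfs]# losetup /dev/loop0 /home/glusterfs/gluster.disk
[root@K8S-2 glusterfs]# echo 'losetup /dev/loop0 /home/glusterfs/gluster.disk' >> /etc/rc.d/rc.local
[root@K8S-2 glusterfs]# chmod +x /etc/rc.d/rc.local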

⑤ Download gluster-kubernetes

git clone https://github.com/gluster/gluster-kubernetes.git

2. Deploy GlusterFS

[root@K8S-1 ~]# cd gluster-kubernetes/deploy/kube-templates/
[root@K8S-1 kube-templates]# cat glusterfs-daemonset.yaml 
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/lib/modules"
[root@K8S-1 kube-templates]# kubectl create -f glusterfs-daemonset.yaml 
daemonset.extensions/glusterfs created
[root@K8S-1 kube-templates]# kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-l9cvv   0/1     Running   0          2m33s
glusterfs-mxnmd   1/1     Running   0          2m33s
glusterfs-w9gwt   0/1     Running   0          2m33s

3. Deploy Heketi

Heketi is a framework that provides a RESTful API for managing GlusterFS volumes and supports managing multiple GlusterFS clusters.

① Create the topology.json file

Before Heketi can manage the GlusterFS cluster, it must be given the cluster topology. topology.json defines each GlusterFS node and its devices.

[root@K8S-1 kube-templates]# cd ..
[root@K8S-1 deploy]# cp topology.json.sample topology.json
[root@K8S-1 deploy]# vim topology.json
[root@K8S-1 deploy]# cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-3"
              ],
              "storage": [
                "20.0.20.103"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-4"
              ],
              "storage": [
                "20.0.20.104"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-5"
              ],
              "storage": [
                "20.0.20.105"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        }
      ]
    }
  ]
}

② Deploy Heketi with the gk-deploy script from the gluster-kubernetes project

The script uses the kubectl get pod --show-all option in one place; current kubectl versions have removed --show-all, so it needs to be deleted from the script (see the sketch below).
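
One way to strip the flag from the script (a sketch; check the result before running gk-deploy):

[root@K8S-1 deploy]# sed -i 's/ --show-all//g' gk-deploy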

[root@K8S-1 deploy]# ./gk-deploy
......
[Y]es, [N]o? [Default: Y]: 
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount/heketi-service-account created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
secret/heketi-config-secret created
secret/heketi-config-secret labeled
service/deploy-heketi created
deployment.extensions/deploy-heketi created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: f93b7411594f33075a762c1f11c48b9e
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-3 ... ID: 80faa4b3ab2ad0af09161151e02fce7c
Adding device /dev/loop0 ... OK
Creating node k8s-4 ... ID: 50351cd51945c4094c4cbfb6dbd9ed0c
Adding device /dev/loop0 ... OK
Creating node k8s-5 ... ID: 71a2df81f463f7733d8af18ff0e38a7d
Adding device /dev/loop0 ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
service/heketi-storage-endpoints labeled
pod "deploy-heketi-865f55765-j6ghj" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-865f55765" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
service/heketi created
deployment.extensions/heketi created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://10.40.0.1:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://10.40.0.1:8080 --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /usr/bin/kubectl -n default exec -i heketi-85dbbbb55-cvfsp -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.40.0.1:8080"

Deployment complete!

③ Enter the Heketi container and check the topology

[root@K8S-1 deploy]# kubectl exec -ti heketi-85dbbbb55-cvfsp -- /bin/bash
[root@heketi-85dbbbb55-cvfsp /]# heketi-cli topology info

Cluster Id: f93b7411594f33075a762c1f11c48b9e

    File:  true
    Block: true

    Volumes:

    Name: heketidbstorage
    Size: 2
    Id: d9ea60134a1f2ab16f0973034ab31110
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Mount: 20.0.20.104:heketidbstorage
    Mount Options: backup-volfile-servers=20.0.20.105,20.0.20.103
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled

        Bricks:
            Id: 40719676ab880160ce872e3812775bdd
            Path: /var/lib/heketi/mounts/vg_aeb3d8cd14f81464471a9b8bb4a29b99/brick_40719676ab880160ce872e3812775bdd/brick
            Size (GiB): 2
            Node: 80faa4b3ab2ad0af09161151e02fce7c
            Device: aeb3d8cd14f81464471a9b8bb4a29b99

            Id: 9e6f91cfab59db365567cf6a394f3393
            Path: /var/lib/heketi/mounts/vg_5625e95ac4aa99f058e1953aafd74426/brick_9e6f91cfab59db365567cf6a394f3393/brick
            Size (GiB): 2
            Node: 50351cd51945c4094c4cbfb6dbd9ed0c
            Device: 5625e95ac4aa99f058e1953aafd74426

            Id: ff85ed6fc05c6dffbd6cae6609aff9b4
            Path: /var/lib/heketi/mounts/vg_ba5eb17eee4105a54764a6dcc0323f39/brick_ff85ed6fc05c6dffbd6cae6609aff9b4/brick
            Size (GiB): 2
            Node: 71a2df81f463f7733d8af18ff0e38a7d
            Device: ba5eb17eee4105a54764a6dcc0323f39

    Nodes:

    Node Id: 50351cd51945c4094c4cbfb6dbd9ed0c
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-4
    Storage Hostnames: 20.0.20.104
    Devices:
        Id:5625e95ac4aa99f058e1953aafd74426   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:9e6f91cfab59db365567cf6a394f3393   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5625e95ac4aa99f058e1953aafd74426/brick_9e6f91cfab59db365567cf6a394f3393/brick

    Node Id: 71a2df81f463f7733d8af18ff0e38a7d
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-5
    Storage Hostnames: 20.0.20.105
    Devices:
        Id:ba5eb17eee4105a54764a6dcc0323f39   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:ff85ed6fc05c6dffbd6cae6609aff9b4   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_ba5eb17eee4105a54764a6dcc0323f39/brick_ff85ed6fc05c6dffbd6cae6609aff9b4/brick

    Node Id: 80faa4b3ab2ad0af09161151e02fce7c
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-3
    Storage Hostnames: 20.0.20.103
    Devices:
        Id:aeb3d8cd14f81464471a9b8bb4a29b99   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:40719676ab880160ce872e3812775bdd   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_aeb3d8cd14f81464471a9b8bb4a29b99/brick_40719676ab880160ce872e3812775bdd/brick

4. Define a StorageClass

# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-volume
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.40.0.1:8080"
  restauthenabled: "false"
# kubectl get storageclass
NAME             PROVISIONER               AGE
gluster-volume   kubernetes.io/glusterfs   13s
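
Note that resturl above points at the Heketi pod IP, which changes if the pod is rescheduled. The ClusterIP of the heketi Service created by gk-deploy may be a more stable choice; it can be looked up with (a sketch):

# kubectl get svc heketi -o jsonpath='{.spec.clusterIP}'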

5. Define a PVC

# cat glusterfs-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: gluster-volume
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

After the PVC is created, we can see that the system automatically provisioned a PV:

# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS     REASON   AGE
persistentvolume/pvc-db9abc87-9e0a-11e9-a2f3-00505694834d   1Gi        RWX            Delete           Bound    default/glusterfs-pvc   gluster-volume            9s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/glusterfs-pvc   Bound    pvc-db9abc87-9e0a-11e9-a2f3-00505694834d   1Gi        RWX            gluster-volume   26s

6. Define a Pod that uses the PVC

# cat pod-usepvc.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-pvc
# kubectl exec -ti busybox -- /bin/sh
/ # df -h
shm                      64.0M         0     64.0M   0% /dev/shm
20.0.20.103:vol_7852b88167b5f961d1f7869674851490
                       1020.1M     42.8M    977.3M   4% /usr/share/busybox
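
As a quick write test, a file created inside the mount from within the Pod lands on the GlusterFS volume (a sketch):

/ # echo $(date) > /usr/share/busybox/index.html
/ # cat /usr/share/busybox/index.html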