Kubernetes Practice (10): Integrated deployment of GlusterFS + Heketi on Kubernetes to provide persistent storage PVs (integrated mode)

For the complete table of contents of the Kubernetes Practice series, see: Kubernetes Practice - Table of Contents

Related links:

The previous two posts, "Kubernetes using GlusterFS to provide persistent storage PVs (manual mode)" and "Kubernetes using GlusterFS + Heketi to provide persistent storage PVs (automatic mode)", deployed GlusterFS and Heketi independently of Kubernetes. This post documents deploying them in containers, integrated with the Kubernetes cluster.

1. Kubernetes Environment

1.1 Kubernetes environment information

Hostname             IP address     OS          Role           Software    Notes
ejucsmaster-shqs-1   10.99.12.201   CentOS 7.5  proxy, master  glusterfs   /dev/sd{b…e}
ejucsmaster-shqs-2   10.99.12.202   CentOS 7.5  proxy, master  glusterfs   /dev/sd{b…e}
ejucsmaster-shqs-3   10.99.12.203   CentOS 7.5  proxy, master  glusterfs   /dev/sd{b…e}
ejucsnode-shqs-1     10.99.12.204   CentOS 7.5  worker
ejucsnode-shqs-2     10.99.12.205   CentOS 7.5  worker
ejucsnode-shqs-3     10.99.12.206   CentOS 7.5  worker

For deploying Kubernetes itself, refer to the other posts in this series (follow the links above).

1.2 Raw disks used by GlusterFS

Scope: Kubernetes master nodes

fdisk -l
	Disk /dev/sdb: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdc: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdd: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sde: 599.6 GB, 599550590976 bytes, 1170997248 sectors

1.3 GlusterFS client preparation on the Kubernetes nodes

1.3.1 Load kernel modules

Scope: all Kubernetes nodes

touch /etc/sysconfig/modules/custom.modules
vi /etc/sysconfig/modules/custom.modules
# content of /etc/sysconfig/modules/custom.modules:
modprobe dm_thin_pool
modprobe dm_snapshot
modprobe dm_mirror

chmod +x /etc/sysconfig/modules/custom.modules
source /etc/sysconfig/modules/custom.modules
lsmod |grep dm

1.3.2 Install glusterfs-fuse on the Kubernetes nodes for mounting volumes

Here the latest version (5.3) is installed directly; a specific version can also be pinned (a sketch follows the install commands).

Scope: all Kubernetes nodes

yum install centos-release-gluster -y
yum install -y glusterfs-fuse
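
A sketch of pinning a version instead (the exact package release string below is illustrative; check what the repository actually offers first):

# list the versions available in the centos-release-gluster repository
yum list --showduplicates glusterfs-fuse
# install a pinned version (release string is only an example)
yum install -y glusterfs-fuse-5.3-1.el7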

1.3.3 Install the heketi-cli tool on the Kubernetes master nodes

heketi-cli can be installed anywhere. To make it convenient for administrators to manage and configure GlusterFS + Heketi in Kubernetes, it is installed on the master nodes where kubectl is normally used against the cluster (for convenience it is installed on all master nodes).
Scope: Kubernetes master nodes
The Heketi server deployed into the Kubernetes cluster later is version 8.0.0, so heketi-cli here also uses version 8.0.

yum install -y heketi-client 
rpm -qa |grep heketi-client
	heketi-client-8.0.0-1.el7.x86_64

1.4 Namespace convention for glusterfs + heketi in Kubernetes

Kubernetes supports multiple namespaces. There is a special namespace named kube-system, where cluster services usually live.
The convention here is that all cluster service components, as well as third-party components or services that manage the cluster or serve the whole cluster, are created and managed in that namespace.

The official GlusterFS + Heketi on Kubernetes documents on GitHub do not specify a namespace, and the default namespace (default) does not meet our requirement. There are two ways to handle this: switch the active namespace, or explicitly specify the namespace in the configuration files. This post uniformly edits the downloaded files to explicitly set the namespace to kube-system.
Add the following to the metadata section of every resource object:

"kind": "xxxx",
"apiVersion": "xxxxxxxxxx",
"metadata": {
        "namespace": "kube-system",
         ...
},
...
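
Alternatively, since the upstream files do not declare a namespace themselves, the namespace can also be supplied on the command line when applying a file (this is what section 3.6 does for the generated heketi-storage.json), for example:

kubectl apply -f heketi-service-account.json -n kube-system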

1.5 Download the deployment files

GitHub location: https://github.com/heketi/heketi/blob/master/extras/kubernetes/

wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/glusterfs-daemonset.json
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/heketi-bootstrap.json
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/heketi-deployment.json
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/heketi-service-account.json
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/heketi-start.sh
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/heketi.json
wget https://raw.githubusercontent.com/heketi/heketi/v8.0.0/extras/kubernetes/topology-sample.json

2. Deploying GlusterFS integrated within the Kubernetes cluster

2.1 Modify glusterfs-daemonset.json to meet our requirements

Requirement 1: use the kube-system namespace.
Requirement 2: run the pods as a DaemonSet on the 3 master nodes. Because master nodes do not accept scheduling by default (this can be changed), the matching tolerations must be set; a quick way to check the taint is shown below.
Requirement 3: use hostNetwork networking.
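
To confirm the master taint that the toleration must match (assuming the default kubeadm taint is in place):

kubectl describe node ejucsmaster-shqs-1 | grep -i taints
	# expected on a default kubeadm master: node-role.kubernetes.io/master:NoSchedule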

kubectl label node ejucsmaster-shqs-1 storagenode=glusterfs
kubectl label node ejucsmaster-shqs-2 storagenode=glusterfs
kubectl label node ejucsmaster-shqs-3 storagenode=glusterfs

kubectl get node --show-labels
	NAME                 STATUS   ROLES    AGE   VERSION   LABELS
	ejucsmaster-shqs-1   Ready    master   61d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsmaster-shqs-1,node-role.kubernetes.io/master=,storagenode=glusterfs
	ejucsmaster-shqs-2   Ready    master   60d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsmaster-shqs-2,node-role.kubernetes.io/master=,storagenode=glusterfs
	ejucsmaster-shqs-3   Ready    master   60d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsmaster-shqs-3,node-role.kubernetes.io/master=,storagenode=glusterfs
	ejucsnode-shqs-1     Ready    node     60d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsnode-shqs-1,node-role.kubernetes.io/node=
	ejucsnode-shqs-2     Ready    node     60d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsnode-shqs-2,node-role.kubernetes.io/node=
	ejucsnode-shqs-3     Ready    node     60d   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ejucsnode-shqs-3,node-role.kubernetes.io/node=
{
    "kind": "DaemonSet",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
        "namespace": "kube-system",
        ...
    },
    "spec": {
        "template": {
            "metadata": {
             ...
            },
            "spec": {
                "tolerations": [{
                    "key": "node-role.kubernetes.io/master",
                    "operator": "Equal",
                    "value": "",
                    "effect": "NoSchedule"
                }],
                "nodeSelector": {
                    "storagenode" : "glusterfs"
                },
                "hostNetwork": true,

Requirement 4: the default list of volumes mounted into the glusterfs pod is missing /run/udev, which caused pvcreate to fail here (not sure whether others have hit this), so an extra volume needs to be added:

...
"containers": [
                    {
                        "image": "gluster/gluster-centos:latest",
                        "imagePullPolicy": "Always",
                        "name": "glusterfs",
                        "volumeMounts": [
                            {
                                "name": "glusterfs-lvm",
                                "mountPath": "/run/lvm"
                            },
                            {
                                "name": "glusterfs-udev",
                                "mountPath": "/run/udev"
                            },
                            ...
                        ],

                "volumes": [
                    {
                        "name": "glusterfs-udev",
                        "hostPath": {
                            "path": "/run/udev"
                        }
                    },
                   ...
                    }
               ]

2.2 Deploy the glusterfs server inside the Kubernetes cluster

kubectl apply -f glusterfs-daemonset.json 

kubectl get ds -n kube-system
	NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
	glusterfs                    3         3         3       3            3           storagenode=glusterfs             16m

kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
glusterfs-5dr6g                              1/1     Running   0          17m
glusterfs-772ng                              1/1     Running   0          17m
glusterfs-7vljr                              1/1     Running   0          17m

kubectl exec -it glusterfs-5dr6g  bash -n kube-system
[root@ejucsmaster-shqs-3 /]# ip add
	# hostNetwork mode is in use: the network seen here is identical to the host's.
	...
	6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
	    link/ether 14:18:77:63:60:a9 brd ff:ff:ff:ff:ff:ff
	    inet 10.99.12.203/24 brd 10.99.12.255 scope global bond0
	       valid_lft forever preferred_lft forever
	7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
	    link/ether 14:18:77:63:60:aa brd ff:ff:ff:ff:ff:ff
	    inet 10.99.13.203/24 brd 10.99.13.255 scope global bond1
	       valid_lft forever preferred_lft forever

[root@ejucsmaster-shqs-3 /]# fdisk -l
	Disk /dev/sdb: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdc: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdd: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sde: 599.6 GB, 599550590976 bytes, 1170997248 sectors

3. Deploying Heketi (server) inside the Kubernetes cluster

3.1 Create the Heketi service account

  • Explicitly specify the namespace in the configuration file

    {
      "apiVersion": "v1",
      "kind": "ServiceAccount",
      "metadata": {
        "name": "heketi-service-account",
        "namespace": "kube-system"
      }
    }
    
  • Create the ServiceAccount

    kubectl apply -f heketi-service-account.json
    	serviceaccount/heketi-service-account created
    
    kubectl get serviceaccount  -n kube-system
    	NAME                                 SECRETS   AGE
    	heketi-service-account               1         28s
    
  • Grant permissions to the service account
    Bind the permissions needed to control the gluster pods to this service account by creating a cluster role binding for it (a quick check follows the command below).

    # The namespace granted here is kube-system; the glusterfs pods that Heketi operates on live in this namespace, otherwise this role cannot reach glusterfs
    kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=kube-system:heketi-service-account
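
    A quick check that the binding was created:

    kubectl get clusterrolebinding heketi-gluster-admin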
    

3.2 Create a secret holding the Heketi service configuration

The heketi.json file downloaded from GitHub does not enable authentication by default. If authentication is needed, edit that file; see: GlusterFS Notes (5) GlusterFS + Heketi configuration (standalone deployment). The default configuration is used here.

  • In heketi.json, the value of glusterfs/executor must be kubernetes (see the fragment below)
  • The Secret must be in the same namespace as the glusterfs pods, i.e. kube-system
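
The relevant fragment of heketi.json (the file under extras/kubernetes should already ship with this value; verify it before creating the secret):

"glusterfs": {
    "executor": "kubernetes",
    ...
},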
kubectl create secret generic heketi-config-secret --from-file=./heketi.json  -n kube-system
	secret/heketi-config-secret created

kubectl get secrets -n kube-system
	heketi-config-secret                             Opaque                                1      22m

3.3 Deploy heketi-bootstrap

3.3.1 Modify heketi-bootstrap.json to make sure the namespace is kube-system

There are two places to modify: the Service resource and the Deployment resource.

  "metadata": {
    "namespace": "kube-system",
    ...
    },

3.3.2 Deploy

kubectl apply -f heketi-bootstrap.json
	service/deploy-heketi created
	deployment.extensions/deploy-heketi created

kubectl get svc -n kube-system
	NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
	deploy-heketi             ClusterIP   10.109.97.157    <none>        8080/TCP          38s

kubectl get deployment -n kube-system
	NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
	deploy-heketi          1/1     1            1           64s

# verify
curl http://10.109.97.157:8080/hello
	Hello from Heketi

# Use the heketi-cli command-line tool to feed Heketi the information about the GlusterFS cluster it should manage; heketi-cli finds its server via the HEKETI_CLI_SERVER variable (run this inside the Kubernetes cluster, e.g. on a master node)
export HEKETI_CLI_SERVER=http://10.109.97.157:8080 

# heketi-cli cluster list
	Clusters:

3.4 Configure the GlusterFS cluster through heketi-cli using a topology file

The topology-sample.json downloaded from GitHub must be adapted to your own cluster; the main changes are:

  • manage: the hostname of the corresponding node
  • storage: the IP address used for GlusterFS storage traffic (in my setup this is a dedicated network, separate from the management network)
  • devices: the device list, according to the actual hardware

Content of the modified file: cat topology-sample.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-1"
              ],
              "storage": [
                "10.99.13.201"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc",
            "/dev/sdd",
            "/dev/sde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-2"
              ],
              "storage": [
                "10.99.13.202"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc",
            "/dev/sdd",
            "/dev/sde"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-3"
              ],
              "storage": [
                "10.99.13.203"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc",
            "/dev/sdd",
            "/dev/sde"
          ]
        }
      ]
    }
  ]
}
heketi-cli topology load --json=topology-sample.json 
	Creating cluster ... ID: 6050fe71956fdbdfcef8743eb42f3200
	        Allowing file volumes on cluster.
	        Allowing block volumes on cluster.
	        Creating node ejucsmaster-shqs-1 ... ID: fbfd9284b8418e6c86c7a0e32e6f1d7c
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsmaster-shqs-2 ... ID: 5a0f0526f958fe195c82a90f7bb519fc
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsmaster-shqs-3 ... ID: 4ea6f899f0c5ea069df79b1b3aeefd6d
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK

heketi-cli topology info
	Cluster Id: 6050fe71956fdbdfcef8743eb42f3200
	
	    File:  true
	    Block: true
	
	    Volumes:
	
	    Nodes:
	
	        Node Id: 4ea6f899f0c5ea069df79b1b3aeefd6d
	        State: online
	        Cluster Id: 6050fe71956fdbdfcef8743eb42f3200
	        Zone: 1
	        Management Hostnames: ejucsmaster-shqs-3
	        Storage Hostnames: 10.99.13.203
	        Devices:
	                Id:7fce4e421b0e1ad73e4e99c1089329b4   Name:/dev/sdb            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:d05f35a269ce9069da26bd43d6ff664a   Name:/dev/sdc            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:da9d6d13c14e4d7c29ba02c27bce2d4b   Name:/dev/sde            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:ec257a94685f9151d11f4e15a7f76d25   Name:/dev/sdd            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	
	        Node Id: 5a0f0526f958fe195c82a90f7bb519fc
	        State: online
	        Cluster Id: 6050fe71956fdbdfcef8743eb42f3200
	        Zone: 1
	        Management Hostnames: ejucsmaster-shqs-2
	        Storage Hostnames: 10.99.13.202
	        Devices:
	                Id:5f70488181b2ad19d35361da086d2987   Name:/dev/sde            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:80dfed0036c35e2fd0cc97b32d47a114   Name:/dev/sdb            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:9ed6993402976fde2d0809af568f4766   Name:/dev/sdc            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:d8eec2ad379ecb8a494a5f85e5e73b0f   Name:/dev/sdd            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	
	        Node Id: fbfd9284b8418e6c86c7a0e32e6f1d7c
	        State: online
	        Cluster Id: 6050fe71956fdbdfcef8743eb42f3200
	        Zone: 1
	        Management Hostnames: ejucsmaster-shqs-1
	        Storage Hostnames: 10.99.13.201
	        Devices:
	                Id:2e772b63815f3997d4ff875dfa650dc3   Name:/dev/sde            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:49f4f0cb47b0d8b6b813c6443ba9a6a3   Name:/dev/sdd            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:b1e297a4b97f7c3f59a1fe2af323ca87   Name:/dev/sdc            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:
	                Id:f15aab69406e5585c8acfb360ee08504   Name:/dev/sdb            State:online    Size (GiB):558     Used (GiB):0       Free (GiB):558     
	                        Bricks:

3.5 Verification (1)

At this point GlusterFS + Heketi is already usable, but the Heketi database is stored ephemerally and disappears with the pod. We verify the setup first, and afterwards configure a GlusterFS volume to store the Heketi database.

  • Create a StorageClass
    Since authentication is not enabled, all authentication-related parameters are commented out
    cat glusterfs-storageclass.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage-class-2
    provisioner: kubernetes.io/glusterfs
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    parameters:
      resturl: "http://10.109.97.157:8080"
      clusterid: "6050fe71956fdbdfcef8743eb42f3200"
      volumetype: "replicate:3"
      gidMax: "50000"
      gidMin: "40000"
      #restauthenabled: "true"
      #restuser: "admin"
      #restuserkey: "admin_secret"
      #secretNamespace: "kube-system"
      #secretName: "heketi-secret"
    
    kubectl apply -f glusterfs-storageclass.yaml 
    	storageclass.storage.k8s.io/glusterfs-storage-class-2 created
    
    kubectl get sc 
    	NAME                        PROVISIONER               AGE
    	glusterfs-storage-class-2   kubernetes.io/glusterfs   31s
    
  • Create a PVC
    cat glusterfs-pvc.yaml 
    
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: gluster1
      namespace: default
    spec:
      storageClassName: glusterfs-storage-class-2
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
    
    kubectl apply -f glusterfs-pvc.yaml
    	persistentvolumeclaim/gluster1 created
    
    kubectl get pvc
    	NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
    	gluster1   Bound    pvc-b52ffe6f-2eb5-11e9-8a0e-1418776411a1   2Gi        RWX            glusterfs-storage-class-2   23s
    
    kubectl get pv
    	NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS                REASON   AGE
    	pvc-b52ffe6f-2eb5-11e9-8a0e-1418776411a1   2Gi        RWX            Delete           Bound    default/gluster1   glusterfs-storage-class-2            31s
    
  • Create a pod that uses the storage volume through this PVC
cat nginx.yaml 

kind: Pod
apiVersion: v1
metadata:
  name: nginx-glusterfs
  labels:
    name: nginx-glusterfs
spec:
  containers:
  - name: nginx-glusterfs
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
kubectl apply -f nginx.yaml 
	pod/nginx-glusterfs created

kubectl get pods
	NAME              READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
	nginx-glusterfs   1/1     Running   0          2m28s   192.168.4.16   ejucsnode-shqs-2   <none>           <none>

kubectl exec -it nginx-glusterfs bash
root@nginx-glusterfs:/# df -h
	Filesystem                                         Size  Used Avail Use% Mounted on
	overlay                                            526G  7.4G  519G   2% /
	tmpfs                                               64M     0   64M   0% /dev
	tmpfs                                               63G     0   63G   0% /sys/fs/cgroup
	/dev/sda5                                          526G  7.4G  519G   2% /etc/hosts
	shm                                                 64M     0   64M   0% /dev/shm
	10.99.13.201:vol_43db1521d836d1a1720f3809e0c5dd0a  2.0G   53M  2.0G   3% /usr/share/nginx/html
	tmpfs                                               63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
	tmpfs                                               63G     0   63G   0% /proc/acpi
	tmpfs                                               63G     0   63G   0% /proc/scsi
	tmpfs                                               63G     0   63G   0% /sys/firmware

root@nginx-glusterfs:/# echo "hello GlusterFS" >> /usr/share/nginx/html/index.html
root@nginx-glusterfs:/# exit

curl http://192.168.4.16 
hello GlusterFS
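
The dynamically provisioned volume is also visible from the Heketi side; its name matches the GlusterFS mount shown by df above (the volume Id differs per environment):

heketi-cli volume list
	# expect an entry with Name:vol_43db1521d836d1a1720f3809e0c5dd0a in cluster 6050fe71956fdbdfcef8743eb42f3200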

kubectl exec -it glusterfs-7vljr bash -n kube-system
	[root@ejucsmaster-shqs-2 /]# ls /var/lib/heketi/mounts/vg_d8eec2ad379ecb8a494a5f85e5e73b0f/brick_54ea5fa3dda9c9634578ffb37399816d/brick/ 
	index.html
	[root@ejucsmaster-shqs-2 /]# cat /var/lib/heketi/mounts/vg_d8eec2ad379ecb8a494a5f85e5e73b0f/brick_54ea5fa3dda9c9634578ffb37399816d/brick/index.html 
	hello GlusterFS
	[root@ejucsmaster-shqs-2 /]# 

Delete the test resources

kubectl delete -f nginx.yaml
kubectl delete -f glusterfs-pvc.yaml
kubectl delete -f glusterfs-storageclass.yaml

3.6 Create the Heketi storage volume

heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json

# apply explicitly in the kube-system namespace
kubectl apply -f heketi-storage.json -n kube-system
	secret/heketi-storage-secret created
	endpoints/heketi-storage-endpoints created
	service/heketi-storage-endpoints created
	job.batch/heketi-storage-copy-job created

# wait a moment, then check the job
kubectl get jobs -n kube-system
NAME                      COMPLETIONS   DURATION   AGE
heketi-storage-copy-job   1/1           2s         78s

After the job completes, delete the components related to the bootstrap Heketi instance:

kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi" -n kube-system
	pod "deploy-heketi-77745f8c4-24wl5" deleted
	service "deploy-heketi" deleted
	deployment.apps "deploy-heketi" deleted
	replicaset.apps "deploy-heketi-77745f8c4" deleted
	job.batch "heketi-storage-copy-job" deleted
	secret "heketi-storage-secret" deleted

3.7 Create the persistent Heketi service inside the Kubernetes cluster

kubectl apply -f heketi-deployment.json -n kube-system
	secret/heketi-db-backup created
	service/heketi created
	deployment.extensions/heketi created

kubectl get deploy -n  kube-system
	NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
	heketi                 1/1     1            1           28s


kubectl get svc -n kube-system
	NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
	heketi                     ClusterIP   10.103.10.202    <none>        8080/TCP          3m33s

export HEKETI_CLI_SERVER=http://10.103.10.202:8080
heketi-cli volume list
	Id:60befae4b18d9344b6039d1899cdf440    Cluster:6050fe71956fdbdfcef8743eb42f3200    Name:heketidbstorage

The Heketi database now lives on a GlusterFS volume and is not reset when the heketi pod restarts.
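
A quick sketch to confirm persistence: delete the heketi pod and, once the Deployment has recreated it, list the volumes again (take the real pod name from kubectl get pods -n kube-system; <heketi-pod-name> is a placeholder):

kubectl delete pod <heketi-pod-name> -n kube-system
# wait until the replacement pod is Running, then:
heketi-cli volume list
	# heketidbstorage is still listed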

3.8 Verification (2)

Similar to 3.5 Verification (1): create a StorageClass, a PVC, and a pod that uses the volume. The only difference is that the StorageClass resturl now points at the persistent heketi Service, as in the sketch below.
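
A minimal sketch of the new StorageClass, reusing the parameters from 3.5 but pointing resturl at the persistent heketi Service (the name glusterfs-storage-class is arbitrary):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  resturl: "http://10.103.10.202:8080"
  clusterid: "6050fe71956fdbdfcef8743eb42f3200"
  volumetype: "replicate:3"
  gidMax: "50000"
  gidMin: "40000"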

kubectl get sc
kubectl get pvc
kubectl get pv
kubectl get pods -o wide
	NAME              READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
	nginx-glusterfs   1/1     Running   0          2m25s   192.168.4.18   ejucsnode-shqs-2   <none>           <none>
kubectl exec -it nginx-glusterfs  bash
	echo "hello Glusterfs" >> /usr/share/nginx/html/index.html

curl 192.168.4.18
	hello Glusterfs