Kubernetes Persistent Storage with GlusterFS

GlusterFS is an open-source distributed file system with strong horizontal scalability: it can support petabytes of storage and thousands of clients, with nodes interconnected over the network into a single parallel network file system. Its main characteristics are scalability, high performance, and high availability.

Prerequisite: a GlusterFS cluster must already be deployed in the lab environment. This article assumes a volume named gv0 has been created on it.

1. Create the Endpoints object, in a file named glusterfs_ep.yaml

$ vi glusterfs_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
# add the IP address of each GlusterFS cluster node
- addresses:
  - ip: 10.0.0.41
  - ip: 10.0.0.42
  ports:
  # add the GlusterFS port
  - port: 49152
    protocol: TCP

Apply the YAML:

$ kubectl create -f  glusterfs_ep.yaml
endpoints/glusterfs created

# view the created Endpoints
[root@k8s-master01 ~]# kubectl get ep
NAME                 ENDPOINTS                                    AGE
glusterfs            10.0.0.41:49152,10.0.0.42:49152       15s

2. Create a Service for the Endpoints
The Endpoints object lists the GlusterFS cluster nodes; for Pods to reach those nodes, a Service with the same name must be created.

$ vi glusterfs_svc.yaml
apiVersion: v1
kind: Service
metadata:
# this name must match the name of the Endpoints object
  name: glusterfs
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP

Apply the YAML:

$ kubectl create -f  glusterfs_svc.yaml
service/glusterfs created

$ kubectl get svc
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
glusterfs            ClusterIP   10.1.104.145   <none>        49152/TCP   20s

3. Create a PV for GlusterFS

$ vi glusterfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    # specify the capacity of this PV
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    # the name of the GlusterFS Endpoints object
    endpoints: "glusterfs"
    # path is the name of the volume created in GlusterFS
    # log in to the GlusterFS cluster and run "gluster volume list" to see existing volumes
    path: "gv0"
    readOnly: false

Apply the YAML:

$ kubectl create -f  glusterfs_pv.yaml
persistentvolume/gluster created

$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
gluster   50Gi       RWX            Retain           Available                                   10s
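The RECLAIM POLICY column shows Retain, which is the default for statically created PVs: after the claim is deleted, the volume and its data are kept and must be cleaned up manually. To make that explicit (or change it later), the policy can be spelled out in the PV spec; a minimal sketch of the extra field to add under spec: in glusterfs_pv.yaml:

```yaml
spec:
  # keep the volume and its data after the PVC is released (default for static PVs)
  persistentVolumeReclaimPolicy: Retain
```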

4. Create a PVC for GlusterFS

$ vi glusterfs_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the PVC name; note that binding to the PV is by matching capacity and
  # access modes, not by name — the matching names here are just a convention
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # the amount of the PV's capacity requested by this PVC
      storage: 20Gi

Apply the YAML:

$ kubectl  create -f glusterfs_pvc.yaml
persistentvolumeclaim/gluster created

$ kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gluster   Bound    gluster   50Gi       RWX                           83s
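Note that the PVC reports the full 50Gi of the bound PV even though only 20Gi was requested: a claim always binds to a whole PV. Since the PVC above has no selector, Kubernetes would bind it to any Available PV satisfying the request. To pin the claim to this specific PV, a label selector can be added using the type: glusterfs label already defined on the PV; a sketch of glusterfs_pvc.yaml with the selector:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  # only bind to PVs carrying the label defined on the gluster PV above
  selector:
    matchLabels:
      type: glusterfs
  resources:
    requests:
      storage: 20Gi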

5. Create an nginx Pod and mount the GlusterFS PVC, in a file named nginx-demo.yaml

$ vim nginx-demo.yaml
---
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
    env: test
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data-gv0
          mountPath: /usr/share/nginx/html
  volumes:
  - name: data-gv0
    persistentVolumeClaim:
      # reference the PVC created above
      claimName: gluster

Apply the YAML:

$ kubectl  create -f nginx-demo.yaml
pod/nginx created

[root@k8s-master01 ~]# kubectl get pods -o wide | grep "nginx"
nginx  1/1     Running     0          2m     10.244.1.222   k8s-node01   <none>           <none>
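Because the PV is ReadWriteMany, the same claim can be mounted by several Pods at once. A hypothetical Deployment (the name web-gv0 is illustrative, not from this article) sharing gv0 across two replicas might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # illustrative name, not used elsewhere in this article
  name: web-gv0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-gv0
  template:
    metadata:
      labels:
        app: web-gv0
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data-gv0
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data-gv0
        persistentVolumeClaim:
          # both replicas mount the same RWX claim
          claimName: gluster
```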

On any client, mount the GlusterFS volume at /mnt, then create an index.html file:

$ mount -t glusterfs k8s-store01:/gv0 /mnt/
$ cd /mnt && echo "this nginx store used gluterfs cluster" >index.html

Access the Pod from the master node with curl:

$ curl  10.244.1.222/index.html
this nginx store used gluterfs cluster