Using GlusterFS for dynamic persistent storage in Kubernetes

Introduction

This article shows how to use GlusterFS to provide dynamically provisioned PVs for Kubernetes. GlusterFS supplies the underlying storage, while Heketi exposes a RESTful API on top of GlusterFS that makes it easy to manage. All three Kubernetes PV access modes are supported: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Access modes describe a capability; they are not enforced. If a PV is used in a way that does not match what the PVC declared, the storage provider is responsible for any runtime errors during access. For example, even if a PVC's access mode is set to ReadOnlyMany, a pod can still write to the volume after mounting it. To make the volume genuinely read-only, specify readOnly: true where the pod references the claim.
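
For example, the following pod spec is a minimal sketch of where readOnly: true goes (the pod name is illustrative; the claim name matches the PVC created in the test section below). Note that the flag lives on the pod's reference to the claim, not on the PVC object itself:

cat >readonly-pod-example.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true           # mount read-only inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: gluster1      # illustrative; created in the test section below
      readOnly: true           # request read-only access to the claim
EOF
kubectl apply -f readonly-pod-example.yaml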

Installation

The Vagrantfile used for this experiment:

# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
    (1..3).each do |i|
      config.vm.define "lab#{i}" do |node|
        node.vm.box = "centos-7.4-docker-17"
        node.ssh.insert_key = false
        node.vm.hostname = "lab#{i}"
        node.vm.network "private_network", ip: "11.11.11.11#{i}"
        node.vm.provision "shell",
          inline: "echo hello from node #{i}"
        node.vm.provider "virtualbox" do |v|
          v.cpus = 2
          v.customize ["modifyvm", :id, "--name", "lab#{i}", "--memory", "3096"]
          file_to_disk = "lab#{i}_vdb.vdi"
          unless File.exist?(file_to_disk)
            # 50GB
            v.customize ['createhd', '--filename', file_to_disk, '--size', 50 * 1024]
          end
          v.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
        end
      end
    end
end
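With this Vagrantfile, the three VMs can be brought up and checked with the standard Vagrant workflow, for example:

# Bring up lab1..lab3, each with an extra 50GB data disk attached
vagrant up

# Confirm the extra disk is visible on a node (it should appear as sdb)
vagrant ssh lab1 -c "lsblk"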

Environment configuration notes

# Before installing GlusterFS, each node must load the dm_thin_pool kernel module
modprobe dm_thin_pool

# Configure the module to load automatically at boot
cat >/etc/modules-load.d/glusterfs.conf<<EOF
dm_thin_pool
EOF

# Install glusterfs-fuse
yum install -y glusterfs-fuse
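To confirm the module is actually loaded on each node, a quick check:

lsmod | grep dm_thin_pool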

Installing GlusterFS and Heketi

# Install the heketi client
# https://github.com/heketi/heketi/releases
# Download the matching release from GitHub
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
cp heketi-client/bin/heketi-cli /usr/local/bin

# Check the version
heketi-cli -v

# All of the following deployment steps are run from this directory
cd heketi-client/share/heketi/kubernetes

# Deploy GlusterFS in Kubernetes
kubectl create -f glusterfs-daemonset.json

# List the nodes
kubectl get nodes

# Label the nodes that will provide storage
kubectl label node lab1 lab2 lab3 storagenode=glusterfs
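
# Optionally verify the label was applied
kubectl get nodes -l storagenode=glusterfs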

# Check GlusterFS pod status
kubectl get pods -o wide

# Deploy the heketi server
# Configure permissions for the heketi server
kubectl create -f heketi-service-account.json
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

# Create the config secret
kubectl create secret generic heketi-config-secret --from-file=./heketi.json

# Bootstrap the deployment
kubectl create -f heketi-bootstrap.json

# Check heketi bootstrap status
kubectl get pods -o wide
kubectl get svc

# Set up port forwarding to the heketi server
HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
kubectl port-forward $HEKETI_BOOTSTRAP_POD 58080:8080

# Test access
# (run in another terminal)
curl http://localhost:58080/hello
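
# A successful call returns a short greeting from heketi
# (something like "Hello from Heketi"; exact wording depends on the heketi version)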

# Configure the GlusterFS topology
# The hostnames/manage fields must match the node names shown by kubectl get node
# hostnames/storage specifies the storage network IP; this experiment uses the same IPs as the k8s cluster
cat >topology.json<<EOF
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab1"
              ],
              "storage": [
                "11.11.11.111"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab2"
              ],
              "storage": [
                "11.11.11.112"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "lab3"
              ],
              "storage": [
                "11.11.11.113"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
EOF
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli topology load --json=topology.json
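
# Verify the topology was loaded: the three nodes and their /dev/sdb devices should be listed
heketi-cli topology info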

# Use Heketi to create a volume for storing the Heketi database
heketi-cli setup-openshift-heketi-storage
kubectl create -f heketi-storage.json

# Check status
# Wait until all jobs have finished (status Completed)
# before carrying out the following steps
kubectl get pods
kubectl get job

# Delete the resources created during bootstrap
kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

# Deploy the heketi server
kubectl create -f heketi-deployment.json

# Check heketi server status
kubectl get pods -o wide
kubectl get svc

# View heketi status information
# Set up port forwarding to the heketi server
HEKETI_POD=$(kubectl get pods | grep heketi | awk '{print $1}')
kubectl port-forward $HEKETI_POD 58080:8080
export HEKETI_CLI_SERVER=http://localhost:58080
heketi-cli cluster list
heketi-cli volume list
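
# Both lists should include the cluster and the heketidbstorage volume
# (the default name heketi uses for its database volume) created above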

Testing

# Create a StorageClass
# Since authentication is not enabled,
# restuser and restuserkey can be set to arbitrary values
HEKETI_SERVER=$(kubectl get svc | grep heketi | head -1 | awk '{print $3}')
echo $HEKETI_SERVER
cat >storageclass-glusterfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://$HEKETI_SERVER:8080"
  restauthenabled: "false"
  restuser: "will"
  restuserkey: "will"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
EOF
kubectl create -f storageclass-glusterfs.yaml

# Check
kubectl get sc

# Create a PVC to test dynamic provisioning
cat >gluster-pvc-test.yaml<<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f gluster-pvc-test.yaml
 
# Check
kubectl get pvc
kubectl get pv
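
# The PVC should report STATUS Bound, with a dynamically provisioned PV
# (named pvc-<uid>) created to back it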
 
# Create an nginx pod that mounts the volume, to test it
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
EOF
kubectl apply -f nginx-pod.yaml
 
# Check
kubectl get pods -o wide
 
# Modify the file content
kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo Hello World from GlusterFS!!! > /usr/share/nginx/html/index.html'
 
# Access test
POD_IP=$(kubectl get pods -o wide | grep nginx-pod1 | awk '{print $(NF-1)}')
curl http://$POD_IP
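
# Expected output: Hello World from GlusterFS!!!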
 
# View the file content from a glusterfs pod on a node
GLUSTERFS_POD=$(kubectl get pod | grep glusterfs | head -1 | awk '{print $1}')
kubectl exec -ti $GLUSTERFS_POD -- /bin/sh
mount | grep heketi
cat /var/lib/heketi/mounts/vg_56033aa8a9131e84faa61a6f4774d8c3/brick_1ac5f3a0730457cf3fcec6d881e132a2/brick/index.html
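To clean up the test resources, delete the pod and the PVC. A sketch, assuming the provisioner's default reclaim policy of Delete, under which the dynamically provisioned PV and its backing gluster volume are removed automatically:

kubectl delete pod nginx-pod1
kubectl delete pvc gluster1

# The dynamically provisioned PV and its gluster volume should be gone shortly afterwards
kubectl get pv
heketi-cli volume list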

Reposted from Juejin: Using GlusterFS for dynamic persistent storage in Kubernetes