Configuring GlusterFS on CentOS 7 for Use with Kubernetes


1. Environment

OS: CentOS 7; /data is a mount point on a non-system partition
docker:1.13.1
kubernetes:1.11.1
glusterfs:4.1.2

2. Deploying GlusterFS

Two nodes: 192.168.105.97 and 192.168.105.98

Install with yum:

yum install centos-release-gluster
yum -y install glusterfs glusterfs-fuse glusterfs-server

(The centos-release-gluster package installs the CentOS-Gluster-4.1.repo repository file.)

Start the service and enable it at boot:

systemctl start glusterd 
systemctl enable glusterd

GlusterFS nodes communicate with each other over port 24007, so the firewall must allow it.
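If firewalld is in use, the ports can be opened roughly as follows. This is a sketch: besides 24007 for the management daemon, each brick listens on its own port allocated from 49152 upward, and the brick port range below is an assumption sized for a handful of bricks per node.

```shell
# Management daemon port
firewall-cmd --permanent --add-port=24007/tcp
# Brick ports (one per brick, allocated from 49152 upward)
firewall-cmd --permanent --add-port=49152-49162/tcp
firewall-cmd --reload
```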

Edit /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# k8s 
192.168.105.92 lab1  # master1
192.168.105.93 lab2  # master2
192.168.105.94 lab3  # master3
192.168.105.95 lab4  # node4
192.168.105.96 lab5  # node5

# glusterfs
192.168.105.98 glu1  # glusterfs1
192.168.105.97 harbor1  # harbor1

Run on host glu1:

# Add the node to the cluster; the host running this command does not need to probe itself
gluster peer probe harbor1

Check the cluster status (each node should see the other's information):

gluster peer status
Number of Peers: 1

Hostname: harbor1
Uuid: ebedc57b-7c71-4ecb-b92e-a7529b2fee31
State: Peer in Cluster (Connected)

GlusterFS volume types:
The diagrams at this link are quite intuitive: https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

  1. Default mode, i.e. DHT, also called a distributed volume: each file is placed on a single server node, distributed by a hash algorithm.
    Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
  2. Replicated mode, i.e. AFR; create the volume with "replica x": each file is replicated to x nodes. A 3-node setup with an arbiter is now recommended, because 2 nodes can lead to split-brain.
    Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
    gluster volume create test-volume replica 3 arbiter 1 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
  3. Distributed replicated mode, at least 4 nodes.
    Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
  4. Dispersed mode, at least 3 nodes.
    Command format: gluster volume create test-volume disperse 3 server{1..3}:/bricks/test-volume
  5. Distributed dispersed mode: when creating a distributed dispersed volume, the disperse keyword and <count> are mandatory, and the number of bricks given on the command line must be a multiple of the disperse count.
    Command format: gluster volume create <volname> disperse 3 server1:/brick{1..6}
Create and start the volume used for Kubernetes:

gluster volume create k8s_volume 192.168.105.98:/data/glusterfs/dev/k8s_volume
gluster volume start k8s_volume
gluster volume status
gluster volume info

Some GlusterFS tuning options (note the volume created above is k8s_volume):

# Enable quota on the specified volume
gluster volume quota k8s_volume enable
# Set a quota limit on the specified volume
gluster volume quota k8s_volume limit-usage / 1TB
# Set the cache size (default 32MB)
gluster volume set k8s_volume performance.cache-size 4GB
# Set the number of io threads; too many can crash the process
gluster volume set k8s_volume performance.io-thread-count 16
# Set the network ping timeout (default 42s)
gluster volume set k8s_volume network.ping-timeout 10
# Set the write-behind window size (default 1MB)
gluster volume set k8s_volume performance.write-behind-window-size 1024MB
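The applied options can be verified afterwards, for example:

```shell
# List all options and their current values for the volume
gluster volume get k8s_volume all
# Show quota limits and usage
gluster volume quota k8s_volume list
```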

3. Using GlusterFS from Clients

3.1 Using a GlusterFS volume on a physical host

yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse fuse fuse-libs openib libibverbs
mkdir -p /tmp/test
mount -t glusterfs 192.168.105.98:k8s_volume /tmp/test  # similar to an NFS mount
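To make the mount persist across reboots, an /etc/fstab entry can be added (a sketch; the `_netdev` option defers mounting until the network is up):

```
192.168.105.98:k8s_volume /tmp/test glusterfs defaults,_netdev 0 0
```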

3.2 Using GlusterFS from Kubernetes

Perform the following operations on a Kubernetes master node.

3.2.1 Create the GlusterFS endpoints definition

vim /etc/kubernetes/glusterfs/glusterfs-endpoints.json

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.105.98"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.105.97"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

Note:
The subsets field should be populated with the addresses of the nodes in the GlusterFS cluster. Any valid value (1 to 65535) may be used in the port field.

kubectl apply -f /etc/kubernetes/glusterfs/glusterfs-endpoints.json
kubectl get endpoints
NAME                ENDPOINTS                                                     AGE
glusterfs-cluster   192.168.105.97:1,192.168.105.98:1  

3.2.2 Configure the service

We also need to create a service for these endpoints so that they persist. We add this service without a selector, which tells Kubernetes that we want to add its endpoints manually.

vim glusterfs-service.json

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
kubectl apply -f glusterfs-service.json 

3.2.3 Configure the PersistentVolume

Create the glusterfs-pv.yaml file, specifying the storage capacity and access modes.

vim glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s_volume"
    readOnly: false
kubectl apply -f glusterfs-pv.yaml 
kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv001     10Gi       RWX            Retain           Available                                      21s

3.2.4 Configure the PersistentVolumeClaim

Create the glusterfs-pvc.yaml file, specifying the requested resource size.

vim glusterfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001    Bound     pv001     10Gi       RWX                           44s

3.2.5 Deploy an application that mounts the PVC

As an example, create an nginx Deployment with the PVC mounted at /usr/share/nginx/html inside the container:

vim glusterfs-nginx-deployment.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-dm
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
        volumeMounts:
          - name: storage001
            mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
kubectl create -f glusterfs-nginx-deployment.yaml
# Check whether the deployment succeeded
kubectl get pod|grep nginx-dm
nginx-dm-c8c895d96-hfdsz            1/1       Running   0          36s
nginx-dm-c8c895d96-jrfbx            1/1       Running   0          36s

Verify the result:

# Check the mounts
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- df -h|grep nginx
192.168.105.97:k8s_volume 1000G   11G  990G   2% /usr/share/nginx/html
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- df -h|grep nginx
192.168.105.97:k8s_volume 1000G   11G  990G   2% /usr/share/nginx/html
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- touch /usr/share/nginx/html/ygqygq2      
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt

The deployment is now complete.

4. Summary

In this article, GlusterFS was installed on the physical system rather than inside Kubernetes, so it has to be maintained by hand; a later article will cover installing and using GlusterFS inside Kubernetes. Choose the GlusterFS volume type flexibly according to your workload. Note that with a distributed volume, the files under a pod's mount directory may live on any node of the volume, which is not necessarily the node shown by df -h.
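For a distributed volume, the brick that actually holds a given file can be queried from the client side through GlusterFS's pathinfo virtual extended attribute. Run it against a fuse mount, e.g. the /tmp/test mount from section 3.1; the file name here is just an example:

```shell
# Prints the server:/brick path(s) backing the file
getfattr -n trusted.glusterfs.pathinfo /tmp/test/ygqygq2
```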

References:
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[2] https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
[3] https://www.kubernetes.org.cn/4069.html
[4] https://www.gluster.org/
[5] https://blog.csdn.net/hxpjava1/article/details/79817078
[6] https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
[7] https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
[8] https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md
