Kubernetes in Practice (8): Using GlusterFS to Provide Persistent Storage PVs in Kubernetes (Manual Mode)

For the complete index of the Kubernetes in Practice series, see: Kubernetes in Practice - Index


I. Preparation

1. GlusterFS environment information

Host              IP address   Role            Notes
gluster-server-1  10.99.7.11   gluster-server
gluster-server-2  10.99.7.12   gluster-server
gluster-server-3  10.99.7.13   gluster-server
# gluster pool list
UUID                                    Hostname        State
6f32f6d4-9cd7-4b40-b7b6-100054b187f7    10.99.7.12      Connected 
5669dbef-3f71-4e3c-8fb1-9c6e11c0c434    10.99.7.13      Connected 
7e77b653-263c-4a25-aa29-a91123b27bf4    localhost       Connected 
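For reference, a trusted pool like the one above is formed by probing the other nodes from any one server; a minimal sketch (run on gluster-server-1):

# add the other two nodes to the trusted storage pool
gluster peer probe 10.99.7.12
gluster peer probe 10.99.7.13
# verify: both peers should report State: Peer in Cluster (Connected)
gluster peer status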

2. Install glusterfs-client on all Kubernetes nodes

yum install centos-release-gluster -y
yum install glusterfs glusterfs-fuse socat -y
# rpm -aq |grep glusterfs
	glusterfs-libs-5.2-1.el7.x86_64
	glusterfs-5.2-1.el7.x86_64
	glusterfs-fuse-5.2-1.el7.x86_64
	glusterfs-client-xlators-5.2-1.el7.x86_64

3. Load the dm_thin_pool module on all Kubernetes nodes

# Configure the module to load automatically at boot
cat <<EOF >  /etc/sysconfig/modules/custom.modules
modprobe dm_thin_pool
EOF
chmod +x /etc/sysconfig/modules/custom.modules
source /etc/sysconfig/modules/custom.modules
# lsmod |grep dm_thin_pool
	dm_thin_pool           66298  0 
	dm_persistent_data     75269  1 dm_thin_pool
	dm_bio_prison          18209  1 dm_thin_pool
	dm_mod                123941  2 dm_bufio,dm_thin_pool
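On systemd-based systems such as CentOS 7, an alternative sketch is a drop-in file under /etc/modules-load.d, which systemd-modules-load reads at boot:

# equivalent boot-time autoload via systemd-modules-load(8)
echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf
modprobe dm_thin_pool    # also load it now, without rebooting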

4. Create a GlusterFS volume for testing

# [on a gluster server node]
gluster volume create container-volume replica 3 10.99.7.11:/data/brick1/volume1 10.99.7.12:/data/brick1/volume1 10.99.7.13:/data/brick1/volume1 force

gluster volume start container-volume

# gluster volume status container-volume
	Status of volume: container-volume
	Gluster process                             TCP Port  RDMA Port  Online  Pid
	------------------------------------------------------------------------------
	Brick 10.99.7.11:/data/brick1/volume1       49152     0          Y       5556 
	Brick 10.99.7.12:/data/brick1/volume1       49152     0          Y       4684 
	Brick 10.99.7.13:/data/brick1/volume1       49152     0          Y       4562 
	Self-heal Daemon on localhost               N/A       N/A        Y       5579 
	Self-heal Daemon on 10.99.7.12              N/A       N/A        Y       4707 
	Self-heal Daemon on 10.99.7.13              N/A       N/A        Y       4585 

# gluster volume info container-volume
	Volume Name: container-volume
	Type: Replicate
	Volume ID: 7f6eb3e0-2a40-41dc-b6fb-4ca493c676dd
	Status: Started
	Snapshot Count: 0
	Number of Bricks: 1 x 3 = 3
	Transport-type: tcp
	Bricks:
	Brick1: 10.99.7.11:/data/brick1/volume1
	Brick2: 10.99.7.12:/data/brick1/volume1
	Brick3: 10.99.7.13:/data/brick1/volume1
	Options Reconfigured:
	transport.address-family: inet
	nfs.disable: on
	performance.client-io-threads: off

# The following tuning and quota options are optional; configure them as needed, or leave the defaults
gluster volume quota container-volume enable
gluster volume quota container-volume limit-usage / 5GB
gluster volume set container-volume performance.cache-size 64MB
gluster volume set container-volume performance.io-thread-count 16
gluster volume set container-volume network.ping-timeout 10
gluster volume set container-volume performance.write-behind-window-size 128MB
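To confirm the quota and the tuned options actually took effect, the volume can be inspected afterwards, for example:

# list configured quotas (path, hard/soft limit, usage)
gluster volume quota container-volume list
# dump all volume options and filter for the ones set above
gluster volume get container-volume all | grep -E 'cache-size|io-thread-count|ping-timeout|write-behind'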

II. Configuring Kubernetes to Use a GlusterFS Volume (Manual Mode)

The system administrator is responsible for creating the Endpoints, Service, and PV (the Endpoints and Service can be reused across PVs); application deployers configure PVCs and deploy the applications that use the persistent volumes.

  • Method 1 (for testing): create a Pod that references the volume (Endpoints) directly; no PV is involved, so only the Endpoints need to exist.
  • Method 2 (for testing or production): use the GlusterFS volume through the PVC/PV mechanism.

1. Configure the Endpoints

Performed by: system administrator

cat glusterfs-endpoints.yaml 

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.99.7.11
  - ip: 10.99.7.12
  - ip: 10.99.7.13
  ports:
  - port: 1
    protocol: TCP

Note: any valid port value (1-65535) can be supplied in the port field; the value itself is not actually used.

kubectl apply -f glusterfs-endpoints.yaml 
kubectl get endpoints
	NAME                ENDPOINTS                                               AGE
	glusterfs-cluster   10.99.7.11:1,10.99.7.12:1,10.99.7.13:1                  71m

2. Configure the Service

Performed by: system administrator

cat glusterfs-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
kubectl apply -f glusterfs-service.yaml
# kubectl get svc glusterfs-cluster
	NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
	glusterfs-cluster   ClusterIP   10.101.122.95   <none>        1/TCP     26m
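Note that this Service intentionally has no selector: Kubernetes therefore does not manage (or overwrite) the manually created Endpoints object of the same name, and the Service exists only so those Endpoints persist and stay addressable. A quick check:

# the Endpoints section should list the three gluster server addresses
kubectl describe svc glusterfs-cluster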

3. Configure the PV (PersistentVolume)

Performed by: system administrator

cat glusterfs-pv.yaml 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume1
  labels:
    name: gluster-volume1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "container-volume"
    readOnly: false
kubectl apply -f glusterfs-pv.yaml 

# kubectl get pv 
	NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
	gluster-volume1   1Gi        RWX            Retain           Available                                   97s
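The PV above does not set spec.persistentVolumeReclaimPolicy, so it gets the default for statically created PVs, Retain (visible in the RECLAIM POLICY column). To make the policy explicit, or to change it later, a sketch:

# set the reclaim policy explicitly on the existing PV
kubectl patch pv gluster-volume1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'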

III. Examples: Using GlusterFS Volumes in Kubernetes (Manual Mode)

In manual mode there are two ways to use a GlusterFS volume:

  • Method 1 (for testing): create a Pod that references the volume (Endpoints) directly, without Service, PV, or PVC; the administrator only needs to configure the Endpoints.
  • Method 2 (for testing or production): use the GlusterFS volume through the PVC/PV mechanism.

Automatic mode relies on a StorageClass for dynamic provisioning (the same work still has to be done; the system just does it for us). See:
Kubernetes in Practice (9): Using glusterfs+Heketi to Provide Persistent Storage PVs (Automatic Mode)

1. Method 1 (for testing): create a Pod that references the volume (Endpoints) directly

The test example comes from GitHub; the file there is in JSON, which I converted to YAML here.
Performed by: application deployer

cat demo_mode1_nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: glusterfs
spec:
  containers:
  - name: glusterfs
    image: nginx
    volumeMounts:
    - name: glusterfsvol
      mountPath: "/mnt/glusterfs"
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: "glusterfs-cluster"
      path: "container-volume"
      readOnly: false
kubectl apply -f demo_mode1_nginx.yaml
kubectl get pods
		NAME        READY   STATUS    RESTARTS   AGE
		glusterfs   1/1     Running   0          16m
kubectl exec glusterfs -- mount | grep gluster
		10.99.7.11:container-volume on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

kubectl exec -it glusterfs -- bash
df -h
...
# The size reported here is the 5GB quota set on the GlusterFS volume, not the 1Gi declared
# on the PV; the PV capacity is only used for claim matching and is not enforced on the mount
10.99.7.11:container-volume  5.0G     0  5.0G   0% /mnt/glusterfs
  
cd /mnt/glusterfs
cat <<EOF > index.html
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>
EOF

exit
kubectl exec glusterfs  -- cat /mnt/glusterfs/index.html
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>

# View the file on a gluster server node
# ls /data/brick1/volume1
index.html

kubectl delete -f demo_mode1_nginx.yaml

2. Method 2 (for testing or production): use the GlusterFS volume through PVC and PV

Performed by: application deployer

2.1. Configure the PVC

cat glusterfs-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata: 
  name: glusterfs-vol-pvc01
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: "gluster-volume1"
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
	NAME                  STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
	glusterfs-vol-pvc01   Bound    gluster-volume1   1Gi        RWX                           4s

kubectl get pv
	NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
	gluster-volume1   1Gi        RWX            Retain           Bound    default/glusterfs-vol-pvc01                           113s
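The claim bound to gluster-volume1 because its selector matched the PV's label (name: gluster-volume1) and the requested 1Gi/RWX fits the PV's capacity and access modes. This can be double-checked on the claim itself:

# the Volume field should show gluster-volume1
kubectl describe pvc glusterfs-vol-pvc01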

2.2. [Test] Use the PVC for an application's persistent storage

cat demo_mode2_nginx.yaml

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - name: nginx
    port: 80
    nodePort: 31180
  selector:
    name: nginx
---
kind: Deployment 
apiVersion: extensions/v1beta1 
metadata: 
  name: dm-nginx
spec: 
  replicas: 2
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx
          ports: 
            - containerPort: 80
          volumeMounts:
            - name: glusterfs-vol-nginx
              mountPath: "/usr/share/nginx/html"
      volumes:
      - name: glusterfs-vol-nginx
        persistentVolumeClaim:
          claimName: glusterfs-vol-pvc01
kubectl apply -f demo_mode2_nginx.yaml
kubectl get svc|grep svc-nginx
	NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
	svc-nginx           NodePort    10.101.55.7     <none>        80:31180/TCP   39m
kubectl get deploy
	NAME       READY   UP-TO-DATE   AVAILABLE   AGE
	dm-nginx   2/2     2            2           40m
kubectl get pod -o wide
	NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
	dm-nginx-c4747f6f7-hp6w4   1/1     Running   0          42m   192.168.3.9    ejucsnode-shqs-1   <none>           <none>
	dm-nginx-c4747f6f7-n72h2   1/1     Running   0          42m   192.168.5.10   ejucsnode-shqs-3   <none>           <none>

2.3. Verification

# Earlier (in the Method 1 demo) an index.html file with the Hello world page was placed on the GlusterFS volume container-volume
Pod 1: dm-nginx-c4747f6f7-hp6w4
kubectl exec dm-nginx-c4747f6f7-hp6w4 -- mount |grep glusterfs
	10.99.7.11:container-volume on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
curl 192.168.3.9
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>

Pod 2: dm-nginx-c4747f6f7-n72h2
kubectl exec dm-nginx-c4747f6f7-n72h2 -- mount |grep glusterfs
	10.99.7.11:container-volume on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
curl 192.168.5.10
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>

Service: 10.101.55.7
curl 10.101.55.7  # run repeatedly; the response is the same each time
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>

nodePort: 31180, node IP: 10.99.12.201
curl 10.99.12.201:31180  
<html>
  <title>demo</title>
  <body>Hello world</body>
</html>

The verification above shows that container-volume is mounted in both nginx containers and serves content correctly, whether accessed directly, through the Service address, or through the NodePort.
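To tear the demo down afterwards, a sketch (with the Retain policy, the data on container-volume and the PV object survive PVC deletion and must be removed or rebound by hand):

kubectl delete -f demo_mode2_nginx.yaml    # remove the Service and the Deployment
kubectl delete pvc glusterfs-vol-pvc01     # the PV then shows status Released
kubectl delete pv gluster-volume1          # finally remove the PV object itself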

3. Automatic provisioning of GlusterFS volumes with a StorageClass

Automatic mode relies on a StorageClass for dynamic provisioning (the same work still has to be done; the system just does it for us). See:
Kubernetes in Practice (9): Using glusterfs+Heketi to Provide Persistent Storage PVs (Automatic Mode)
