Kubernetes实录 series: for the complete table of contents, see Kubernetes实录-目录.
I. Preparation
1. GlusterFS environment
Host | IP address | Role | Notes |
---|---|---|---|
gluster-server-1 | 10.99.7.11 | gluster-server | |
gluster-server-2 | 10.99.7.12 | gluster-server | |
gluster-server-3 | 10.99.7.13 | gluster-server | |
# gluster pool list
UUID Hostname State
6f32f6d4-9cd7-4b40-b7b6-100054b187f7 10.99.7.12 Connected
5669dbef-3f71-4e3c-8fb1-9c6e11c0c434 10.99.7.13 Connected
7e77b653-263c-4a25-aa29-a91123b27bf4 localhost Connected
2. Install the GlusterFS client on all Kubernetes nodes
yum install centos-release-gluster -y
yum install glusterfs glusterfs-fuse socat -y
# rpm -aq |grep glusterfs
glusterfs-libs-5.2-1.el7.x86_64
glusterfs-5.2-1.el7.x86_64
glusterfs-fuse-5.2-1.el7.x86_64
glusterfs-client-xlators-5.2-1.el7.x86_64
3. Load the dm_thin_pool kernel module on all Kubernetes nodes
# Configure the module to load automatically at boot
cat <<EOF > /etc/sysconfig/modules/custom.modules
modprobe dm_thin_pool
EOF
chmod +x /etc/sysconfig/modules/custom.modules
source /etc/sysconfig/modules/custom.modules
# lsmod |grep dm_thin_pool
dm_thin_pool 66298 0
dm_persistent_data 75269 1 dm_thin_pool
dm_bio_prison 18209 1 dm_thin_pool
dm_mod 123941 2 dm_bufio,dm_thin_pool
4. Create a GlusterFS volume for testing
# [on a gluster server node]
gluster volume create container-volume replica 3 10.99.7.11:/data/brick1/volume1 10.99.7.12:/data/brick1/volume1 10.99.7.13:/data/brick1/volume1 force
gluster volume start container-volume
# gluster volume status container-volume
Status of volume: container-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.99.7.11:/data/brick1/volume1 49152 0 Y 5556
Brick 10.99.7.12:/data/brick1/volume1 49152 0 Y 4684
Brick 10.99.7.13:/data/brick1/volume1 49152 0 Y 4562
Self-heal Daemon on localhost N/A N/A Y 5579
Self-heal Daemon on 10.99.7.12 N/A N/A Y 4707
Self-heal Daemon on 10.99.7.13 N/A N/A Y 4585
# gluster volume info container-volume
Volume Name: container-volume
Type: Replicate
Volume ID: 7f6eb3e0-2a40-41dc-b6fb-4ca493c676dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.99.7.11:/data/brick1/volume1
Brick2: 10.99.7.12:/data/brick1/volume1
Brick3: 10.99.7.13:/data/brick1/volume1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
# The following tuning and quota options are optional; configure them as needed or keep the defaults
gluster volume quota container-volume enable
gluster volume quota container-volume limit-usage / 5GB
gluster volume set container-volume performance.cache-size 64MB
gluster volume set container-volume performance.io-thread-count 16
gluster volume set container-volume network.ping-timeout 10
gluster volume set container-volume performance.write-behind-window-size 128MB
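Whether these options took effect can be verified afterwards; `gluster volume get` prints a single option (or `all` for every option), which makes for a quick sanity check:

```shell
# Confirm the reconfigured options on the volume
gluster volume get container-volume performance.cache-size
gluster volume get container-volume network.ping-timeout
```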
II. Configuring Kubernetes to use GlusterFS volumes (manual mode)
The cluster administrator creates the Endpoints, Service, and PV; the Endpoints and Service can be reused across volumes. Application deployers then create PVCs and deploy applications that consume the persistent volumes.
- Option 1 (testing only): create a Pod that references the volume via the Endpoints directly; no PV is needed, only the Endpoints object.
- Option 2 (testing or production): consume the GlusterFS volume through a PVC/PV pair.
1. Configure the Endpoints
Operator: cluster administrator
cat glusterfs-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.99.7.11
      - ip: 10.99.7.12
      - ip: 10.99.7.13
    ports:
      - port: 1
        protocol: TCP
Note: any valid port value (1-65535) works here; the value itself is never actually used.
kubectl apply -f glusterfs-endpoints.yaml
kubectl get endpoints
NAME ENDPOINTS AGE
glusterfs-cluster 10.99.7.11:1,10.99.7.12:1,10.99.7.13:1 71m
2. Configure the Service
Operator: cluster administrator
cat glusterfs-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
kubectl apply -f glusterfs-service.yaml
# kubectl get svc glusterfs-cluster
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.101.122.95 <none> 1/TCP 26m
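Because this Service defines no selector, Kubernetes does not manage its endpoints; it is paired with the manually created Endpoints object of the same name. A quick check that the pairing worked:

```shell
# The selector-less Service should list the three gluster server addresses
kubectl describe svc glusterfs-cluster | grep -i endpoints
```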
3. Configure the PV (PersistentVolume)
Operator: cluster administrator
cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume1
  labels:
    name: gluster-volume1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "container-volume"
    readOnly: false
kubectl apply -f glusterfs-pv.yaml
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gluster-volume1 1Gi RWX Retain Available 97s
III. Examples: using GlusterFS volumes in Kubernetes (manual mode)
Manual mode
There are two ways to use a GlusterFS volume:
- Option 1 (testing only): create a Pod that references the volume via the Endpoints directly (no Service, PV, or PVC; the administrator only needs to create the Endpoints object).
- Option 2 (testing or production): consume the GlusterFS volume through a PVC/PV pair.
Automatic mode
Uses a StorageClass for dynamic provisioning (the same work still gets done, only the system handles it for us); see:
Kubernetes初体验(9) Kubernetes使用glusterfs+Heketi提供持久存储PV(自动模式)
1. Option 1 (testing only): create a Pod that references the volume via the Endpoints directly
The test example comes from GitHub; the original file there is JSON, converted here to YAML.
Operator: application deployer
cat demo_mode1_nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: glusterfs
spec:
  containers:
    - name: glusterfs
      image: nginx
      volumeMounts:
        - name: glusterfsvol
          mountPath: "/mnt/glusterfs"
  volumes:
    - name: glusterfsvol
      glusterfs:
        endpoints: "glusterfs-cluster"
        path: "container-volume"
        readOnly: false
kubectl apply -f demo_mode1_nginx.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 16m
kubectl exec glusterfs -- mount | grep gluster
10.99.7.11:container-volume on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
kubectl exec -it glusterfs -- bash
df -h
...
# The size shown here is the quota set on the GlusterFS volume, not the capacity declared on the PV (worth verifying)
10.99.7.11:container-volume 5.0G 0 5.0G 0% /mnt/glusterfs
cd /mnt/glusterfs
cat <<EOF > index.html
<html>
<title>demo<title>
<body>Hello world</body>
</html>
EOF
exit
kubectl exec glusterfs -- cat /mnt/glusterfs/index.html
<html>
<title>demo</title>
<body>Hello world</body>
</html>
# Check the file on a gluster server
# ls /data/brick1/volume1
index.html
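The df figure seen inside the Pod can be cross-checked on a gluster server: assuming the 5GB quota set in section I, the hard limit reported there should match what df shows in the container, rather than the PV's 1Gi capacity.

```shell
# List the quota limit and current usage for the volume;
# the hard limit is what df reports inside the Pod
gluster volume quota container-volume list
```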
kubectl delete -f demo_mode1_nginx.yaml
2. Option 2 (testing or production): use the GlusterFS volume through a PVC/PV pair
Operator: application deployer
2.1. Configure the PVC
cat glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: "gluster-volume1"
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
glusterfs-vol-pvc01 Bound gluster-volume1 1Gi RWX 4s
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gluster-volume1 1Gi RWX Retain Bound default/glusterfs-vol-pvc01 113s
2.2. [Test] Use the PVC to provide the application's persistent storage
cat demo_mode2_nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - name: nginx
      port: 80
      nodePort: 31180
  selector:
    name: nginx
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dm-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: glusterfs-vol-nginx
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: glusterfs-vol-nginx
          persistentVolumeClaim:
            claimName: glusterfs-vol-pvc01
kubectl apply -f demo_mode2_nginx.yaml
kubectl get svc|grep svc-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-nginx NodePort 10.101.55.7 <none> 80:31180/TCP 39m
kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
dm-nginx 2/2 2 2 40m
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dm-nginx-c4747f6f7-hp6w4 1/1 Running 0 42m 192.168.3.9 ejucsnode-shqs-1 <none> <none>
dm-nginx-c4747f6f7-n72h2 1/1 Running 0 42m 192.168.5.10 ejucsnode-shqs-3 <none> <none>
2.3. Verification
# An index.html file with the content "Hello world" was placed on the GlusterFS volume container-volume earlier
Pod 1: dm-nginx-c4747f6f7-hp6w4
kubectl exec dm-nginx-c4747f6f7-hp6w4 -- mount |grep glusterfs
10.99.7.11:container-volume on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
curl 192.168.3.9
<html>
<title>demo</title>
<body>Hello world</body>
</html>
Pod 2: dm-nginx-c4747f6f7-n72h2
kubectl exec dm-nginx-c4747f6f7-n72h2 -- mount |grep glusterfs
10.99.7.11:container-volume on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
curl 192.168.5.10
<html>
<title>demo</title>
<body>Hello world</body>
</html>
Service: 10.101.55.7
curl 10.101.55.7 # run several times; the response is identical
<html>
<title>demo</title>
<body>Hello world</body>
</html>
NodePort: 31180, node IP: 10.99.12.201
curl 10.99.12.201:31180
<html>
<title>demo</title>
<body>Hello world</body>
</html>
The verification above shows that container-volume is mounted into both nginx containers and is served correctly, whether accessed through the Pod IPs, the Service ClusterIP, or the NodePort.
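When testing is done, the demo resources can be removed by name (a sketch; because the PV's reclaim policy is Retain, it must be deleted explicitly and the data on the gluster volume survives):

```shell
kubectl delete deployment dm-nginx
kubectl delete service svc-nginx
kubectl delete pvc glusterfs-vol-pvc01
kubectl delete pv gluster-volume1   # data on container-volume is retained
```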
3. Dynamic provisioning of GlusterFS volumes with a StorageClass
Automatic mode
Uses a StorageClass for dynamic provisioning (the same work still gets done, only the system handles it for us); see:
Kubernetes初体验(9) Kubernetes使用glusterfs+Heketi提供持久存储PV(自动模式)
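For orientation, dynamic provisioning replaces the manual Endpoints/PV steps with a StorageClass backed by a Heketi REST endpoint. A minimal sketch using the in-tree `kubernetes.io/glusterfs` provisioner; the `resturl` here is a placeholder, not a value from this environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.99.7.10:8080"   # Heketi API address (placeholder)
  restauthenabled: "false"            # enable and add restuser/secretName in production
  volumetype: "replicate:3"           # matches the replica-3 layout used above
```

PVCs that reference this class would then get their volumes, Endpoints, and PVs created automatically; the details are covered in the referenced article.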