k8s Restart Issue in the Development Environment

Restart issue on a k8s 1.17.2 high-availability cluster

  • Development environment: restarts on average every 10 minutes.
  • Test environment: about one restart per day on average over 6 days.
  • cube environment: about one restart per hour.
  • With the ramdisk: no restarts in the past 18 hours.
1. Test environment: 192.168.1.135  Password: xxxx  Spec: 3 masters, 8c32g  etcd version: 3.2.20

1.1 Restart counts:

[root@portal135 ~]# kubectl get pods -n kube-system|grep "kube-.*bocloud.com"
kube-apiserver-portal134.bocloud.com            1/1     Running   2          5d23h
kube-apiserver-portal135.bocloud.com            1/1     Running   8          5d23h
kube-apiserver-portal136.bocloud.com            1/1     Running   3          5d23h
kube-controller-manager-portal134.bocloud.com   1/1     Running   2          5d23h
kube-controller-manager-portal135.bocloud.com   1/1     Running   7          5d23h
kube-controller-manager-portal136.bocloud.com   1/1     Running   5          5d23h
kube-scheduler-portal134.bocloud.com            1/1     Running   2          5d23h
kube-scheduler-portal135.bocloud.com            1/1     Running   7          5d23h
kube-scheduler-portal136.bocloud.com            1/1     Running   5          5d23h

1.2 Response time:

[root@portal134 upload]# export NODE_IPS="192.168.1.134 192.168.1.135 192.168.1.136"
[root@portal134 upload]# for ip in ${NODE_IPS}; do   ETCDCTL_API=3 etcdctl   --endpoints=https://${ip}:2379    --cacert=/etc/etcd/ssl/ca.crt   --cert=/etc/etcd/ssl/client.crt   --key=/etc/etcd/ssl/client.key   endpoint health; done
https://192.168.1.134:2379 is healthy: successfully committed proposal: took = 2.711272ms
https://192.168.1.135:2379 is healthy: successfully committed proposal: took = 2.089683ms
https://192.168.1.136:2379 is healthy: successfully committed proposal: took = 1.935061ms
2. Development environment: 192.168.2.103  Password: xxxx  Spec: 3 masters, 4c8g  etcd version: 3.4.3

2.1 Restart counts:

[root@boc-108 ~]# kubectl get pods -n kube-system|grep "kube-.*.dev"
kube-apiserver-boc-103.dev                 1/1     Running            63         4d4h
kube-apiserver-boc-104.dev                 1/1     Running            67         4d4h
kube-apiserver-boc-108.dev                 1/1     Running            82         4d4h
kube-controller-manager-boc-103.dev        1/1     Running            543        4d4h
kube-controller-manager-boc-104.dev        1/1     Running            561        4d4h
kube-controller-manager-boc-108.dev        1/1     Running            558        4d4h
kube-scheduler-boc-103.dev                 1/1     Running            562        4d4h
kube-scheduler-boc-104.dev                 1/1     Running            556        4d4h
kube-scheduler-boc-108.dev                 1/1     Running            561        4d4h

2.2 Response time:

[root@boc-103 test]# export NODE_IPS="192.168.2.103 192.168.2.104 192.168.2.108"
[root@boc-103 test]# for ip in ${NODE_IPS}; do   ETCDCTL_API=3 etcdctl   --endpoints=https://${ip}:2379    --cacert=/etc/etcd/ssl/ca.crt   --cert=/etc/etcd/ssl/client.crt   --key=/etc/etcd/ssl/client.key   endpoint health; done
https://192.168.2.103:2379 is healthy: successfully committed proposal: took = 1.318040913s
https://192.168.2.104:2379 is healthy: successfully committed proposal: took = 1.557571675s
https://192.168.2.108:2379 is healthy: successfully committed proposal: took = 54.749774ms
3. Error symptoms:

kube-controller-manager and kube-scheduler restart because of etcdserver: request timed out. When leader-election lease renewal against the apiserver/etcd times out, the component logs a fatal "leaderelection lost" and exits, and the kubelet then restarts the static pod; that is what drives the restart counts up.

E0224 03:54:36.433286       1 cronjob_controller.go:125] Failed to extract job list: etcdserver: request timed out
E0224 03:54:37.592575       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: etcdserver: request timed out
I0224 03:54:38.047024       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
I0224 03:54:38.047089       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' portal135.bocloud.com_46d96c59-fa2a-43d3-aa0e-4c969a287338 stopped leading
F0224 03:54:38.047158       1 controllermanager.go:279] leaderelection lost

The apiserver also reports this error many times:

E0224 03:54:37.591267       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}
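For reference, the log lines above can usually be retrieved from the previous instance of each static pod with kubectl. A minimal sketch; the pod names are taken from the test environment above, adjust for the node being inspected:

# Show why the last instance of a control-plane static pod exited.
kubectl -n kube-system logs kube-controller-manager-portal135.bocloud.com --previous | tail -n 20
# Grep the apiserver logs for the etcd timeout errors.
kubectl -n kube-system logs kube-apiserver-portal135.bocloud.com --previous | grep "etcdserver: request timed out"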
4. cube environment: 10.10.5.8  Password: xxxx  Spec: 3 masters, 8c16g

4.1 Restart counts:

[root@master ~]# kubectl get pods -n kube-system|grep .novalocal
kube-apiserver-master.novalocal            1/1     Running            1          8h
kube-apiserver-node-1.novalocal            1/1     Running            1          8h
kube-apiserver-node-2.novalocal            1/1     Running            1          8h
kube-controller-manager-master.novalocal   1/1     Running            7          8h
kube-controller-manager-node-1.novalocal   1/1     Running            8          8h
kube-controller-manager-node-2.novalocal   1/1     Running            6          8h
kube-scheduler-master.novalocal            1/1     Running            8          8h
kube-scheduler-node-1.novalocal            1/1     Running            6          8h
kube-scheduler-node-2.novalocal            1/1     Running            8          8h

4.2 Response time:

https://10.10.5.8:2379 is healthy: successfully committed proposal: took = 15.970648ms
https://10.10.5.48:2379 is healthy: successfully committed proposal: took = 13.325127ms
https://10.10.5.38:2379 is healthy: successfully committed proposal: took = 18.190006ms
5. Remedies

5.1 Scaled the masters up to 6c32g, but I/O is still very slow; the key factor is the disk's read/write speed (an fsync-heavy check with fio is sketched below).
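One way to check whether the disk can keep up with etcd's write pattern is a small fio run that issues an fdatasync after every write, similar to etcd's WAL. This is a sketch only: it assumes fio is installed, and /var/lib/etcd-test is an arbitrary scratch directory, not something from this setup.

mkdir -p /var/lib/etcd-test
# Small sequential writes with an fdatasync after each one; look at the reported
# fsync latency percentiles to judge whether the disk is fast enough for etcd.
fio --name=etcd-disk-check --directory=/var/lib/etcd-test --rw=write --ioengine=sync --fdatasync=1 --size=22m --bs=2300
rm -rf /var/lib/etcd-test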

5.2 Create a ramdisk: mount a tmpfs at /var/lib/etcd so the etcd data is stored in memory instead of on the slow disk.

mkdir -p /var/lib/etcd
mount -t tmpfs -o size=2G tmpfs /var/lib/etcd
echo "tmpfs                    /var/lib/etcd    tmpfs   defaults,size=2G        0 0" >> /etc/fstab
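On a node where etcd already has data on disk, mounting the tmpfs hides the existing directory, so the member data has to be copied back onto it. A rough sequence, assuming etcd runs as a systemd unit named etcd on these nodes (a hypothetical assumption; if it runs as a static pod, stop it by moving its manifest out of /etc/kubernetes/manifests instead):

systemctl stop etcd                        # stop etcd before touching its data directory
cp -a /var/lib/etcd /var/lib/etcd.bak      # keep a copy of the on-disk member data
mount -t tmpfs -o size=2G tmpfs /var/lib/etcd
cp -a /var/lib/etcd.bak/. /var/lib/etcd/   # restore the member data onto the ramdisk
systemctl start etcd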

5.2.1 I/O comparison:

  • ramdisk: 2.3 GB/s
  • data disk: 24.0 MB/s
[root@node-139 ~]# dd bs=1M count=1000 if=/dev/zero of=/media/tmp/a.txt conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.457043 s, 2.3 GB/s
[root@node-139 ~]# dd bs=1M count=1000 if=/dev/zero of=/home/a.txt conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 43.7001 s, 24.0 MB/s
Note: /media/tmp is the path where the ramdisk is mounted.

5.3 Verification: etcd response time improves significantly.

[root@node-218 ~]# for ip in ${NODE_IPS}; do   ETCDCTL_API=3 etcdctl   --endpoints=https://${ip}:2379    --cacert=/etc/etcd/ssl/ca.crt   --cert=/etc/etcd/ssl/client.crt   --key=/etc/etcd/ssl/client.key   endpoint health; done
https://192.168.2.218:2379 is healthy: successfully committed proposal: took = 11.960397ms
https://192.168.2.219:2379 is healthy: successfully committed proposal: took = 10.482817ms
https://192.168.2.220:2379 is healthy: successfully committed proposal: took = 10.786569ms
[root@node-218 ~]# kubectl get pods -n kube-system|grep .dev
kube-apiserver-node-218.dev                1/1     Running            0          10m
kube-apiserver-node-219.dev                1/1     Running            0          10m
kube-apiserver-node-220.dev                1/1     Running            0          11m
kube-controller-manager-node-218.dev       1/1     Running            1          10m
kube-controller-manager-node-219.dev       1/1     Running            1          10m
kube-controller-manager-node-220.dev       1/1     Running            1          11m
kube-scheduler-node-218.dev                1/1     Running            1          10m
kube-scheduler-node-219.dev                1/1     Running            0          11m
kube-scheduler-node-220.dev                1/1     Running            1          11m
[root@node-218 ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/centos-root    28G  3.5G   25G  13% /
devtmpfs                  5.8G     0  5.8G   0% /dev
tmpfs                     5.8G     0  5.8G   0% /dev/shm
tmpfs                     5.8G   34M  5.8G   1% /run
tmpfs                     5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/sda1                1014M  145M  870M  15% /boot
tmpfs                     1.2G     0  1.2G   0% /run/user/0
tmpfs                     2.0G  125M  1.9G   7% /var/lib/etcd
192.168.2.214:/opt/share   28G  2.5G   26G   9% /abcsys/upload

5.4 After 18 hours: 0 restarts

[root@node-218 ~]# kubectl get pods -n kube-system|grep dev
kube-apiserver-node-218.dev                1/1     Running            0          18h
kube-apiserver-node-219.dev                1/1     Running            0          18h
kube-apiserver-node-220.dev                1/1     Running            0          18h
kube-controller-manager-node-218.dev       1/1     Running            1          18h
kube-controller-manager-node-219.dev       1/1     Running            1          18h
kube-controller-manager-node-220.dev       1/1     Running            1          18h
kube-scheduler-node-218.dev                1/1     Running            1          18h
kube-scheduler-node-219.dev                1/1     Running            0          18h
kube-scheduler-node-220.dev                1/1     Running            1          18h

5.5 Problems with using a ramdisk

Because the data is stored in memory, it is lost as soon as the node loses power. Taking periodic etcd snapshots onto persistent storage can limit the damage, as sketched below.
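A minimal sketch of such a backup, assuming /backup/etcd is a hypothetical directory on persistent storage; the endpoint and certificate paths follow the development environment above, and the command could be run from cron every 30 minutes or so:

# Save an etcd snapshot onto persistent storage; it can later be restored
# with "etcdctl snapshot restore" if the ramdisk contents are ever lost.
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.218:2379 --cacert=/etc/etcd/ssl/ca.crt --cert=/etc/etcd/ssl/client.crt --key=/etc/etcd/ssl/client.key snapshot save /backup/etcd/snapshot-$(date +%Y%m%d%H%M).db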

6. Summary

The development environment needs faster disk I/O. The current disk read/write speed is only 24.0 MB/s, which causes the etcd cluster to return frequent request timeouts, which in turn leads to failed health checks and restarts of the control-plane components. etcd's own disk metrics can be used to confirm this, as shown below.
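etcd exposes its disk-sync latency on the /metrics endpoint of the client port; commonly cited guidance is that the 99th percentile of wal_fsync should stay below roughly 10ms. A sketch using this environment's certificate paths and one development-node endpoint:

# Check the WAL fsync and backend commit latency histograms; consistently high
# values point at storage that is too slow for etcd.
curl -s --cacert /etc/etcd/ssl/ca.crt --cert /etc/etcd/ssl/client.crt --key /etc/etcd/ssl/client.key https://192.168.2.103:2379/metrics | grep -E "etcd_disk_wal_fsync_duration_seconds|etcd_disk_backend_commit_duration_seconds"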
