Fixing pods and containers unable to communicate across hosts after deploying the Flannel network on Kubernetes 1.15.1

A record of a case where the network stopped working after deploying the Flannel network; searching online turned up nothing useful.

So here is my own record of how I tracked it down and fixed it.

Symptoms

[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-54j5c                1/1     Running   0          5h44m
coredns-5c98db65d4-jmvbf                1/1     Running   0          5h45m
etcd-k8s-master01                       1/1     Running   2          10d
kube-apiserver-k8s-master01             1/1     Running   2          10d
kube-controller-manager-k8s-master01    1/1     Running   3          10d
kube-flannel-ds-amd64-6h79p             1/1     Running   2          9d
kube-flannel-ds-amd64-bnvtd             1/1     Running   3          10d
kube-flannel-ds-amd64-bsq4j             1/1     Running   2          9d
kube-proxy-5fn9m                        1/1     Running   1          9d
kube-proxy-6hjvp                        1/1     Running   2          9d
kube-proxy-t47n9                        1/1     Running   2          10d
kube-scheduler-k8s-master01             1/1     Running   4          10d
kubernetes-dashboard-7d75c474bb-hg7zt   1/1     Running   0          71m
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   10d   v1.15.1
k8s-node01     Ready    <none>   9d    v1.15.1
k8s-node02     Ready    <none>   9d    v1.15.1

As shown above, after deploying Flannel the master sees both worker nodes as Ready, and all the flannel pods report a Running status, so everything looks healthy at this level.

Troubleshooting
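Even though everything reports Running, the flannel daemon's own log is worth a quick look for subnet or route errors (the pod name is taken from the listing above; kube-flannel is the container name used by the standard kube-flannel.yml manifest):

# Tail the flannel daemon log on one of the nodes and look for errors
kubectl -n kube-system logs kube-flannel-ds-amd64-6h79p -c kube-flannel | tail -n 20

In this case, though, the interface list on the master was more telling: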

[root@k8s-master01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2c:d1:c2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.50/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2c:d1c2/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:1f:d8:95:21 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ee:02:3a:98:e3:e3 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether d2:c2:72:50:95:31 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.110.65.174/32 brd 10.110.65.174 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noqueue state DOWN group default 
    link/ether 7e:35:6d:f9:50:c3 brd ff:ff:ff:ff:ff:ff
7: cni0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 8a:1b:ab:4c:83:c9 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever

Interface 6, flannel.1, has no IP address and is in the DOWN state.
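To confirm that the VXLAN device really never came up, it can be compared with the subnet lease flannel records on the node. A minimal check, assuming the default file locations used by kube-flannel.yml:

# Show VXLAN details (state, VNI, local address) of the flannel interface
ip -d link show flannel.1
# The subnet flannel leased for this node; a healthy flannel.1 carries
# the first address of FLANNEL_SUBNET
cat /run/flannel/subnet.env

Ping tests from each node confirm the symptom: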

[root@k8s-master01 flannel]# ping 10.244.2.6
PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.
^C
--- 10.244.2.6 ping statistics ---
13 packets transmitted, 0 received, 100% packet loss, time 12004ms
[root@k8s-node01 ~]# ping 10.244.2.6
PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.
^C
--- 10.244.2.6 ping statistics ---
36 packets transmitted, 0 received, 100% packet loss, time 35012ms
[root@k8s-node02 ~]# ping 10.244.2.6
PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.
64 bytes from 10.244.2.6: icmp_seq=1 ttl=64 time=0.131 ms
64 bytes from 10.244.2.6: icmp_seq=2 ttl=64 time=0.042 ms
^C
--- 10.244.2.6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms

A pod living on node2 (10.244.2.6) can only be pinged from node2 itself; pings from the master and node1 all time out.
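When only the pod's own node can reach it, the other nodes are usually missing the overlay routes, so it is worth checking whether each node has a route for every other node's pod subnet via flannel.1 (10.244.0.0/16 is the kubeadm default pod CIDR used in this cluster):

# On a healthy node there should be one entry per remote node, e.g.
#   10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
ip route | grep 10.244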

Solution

Method 1

[root@k8s-node01 ~]# sudo iptables -P INPUT ACCEPT
[root@k8s-node01 ~]# sudo iptables -P OUTPUT ACCEPT
[root@k8s-node01 ~]# sudo iptables -P FORWARD ACCEPT
[root@k8s-node01 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16       

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (0 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (0 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
target     prot opt source               destination         

Chain DOCKER-USER (0 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  10.244.0.0/16        0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            10.244.0.0/16        /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
[root@k8s-node01 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]

Reset the default iptables policies to ACCEPT, check the rules, and save them.
This did not solve the problem, so move on to method 2.
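For reference, a heavier-handed variant of the iptables approach is to flush all rules and let kube-proxy and Flannel rebuild them; a sketch only, not part of the steps used above:

# Stop the services first so nothing rewrites rules mid-flush
systemctl stop kubelet docker
iptables -F && iptables -t nat -F && iptables -X
systemctl start docker kubelet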

Method 2

Uninstall the Flannel network

# Step 1: delete flannel on the master node
kubectl delete -f kube-flannel.yml

# Step 2: on every node, clean up the interfaces and files left behind by the flannel network
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*
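After removing the interfaces and CNI files it may also help to restart the container runtime and kubelet on each node so nothing keeps the stale bridge state cached (my addition, not strictly part of the steps above):

# Optional: restart so new pods are wired against the fresh CNI configuration
systemctl restart docker
systemctl restart kubelet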

Redeploy the Flannel network

[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-8bpdd                1/1     Running   0          17s
coredns-5c98db65d4-knfcj                1/1     Running   0          43s
etcd-k8s-master01                       1/1     Running   2          10d
kube-apiserver-k8s-master01             1/1     Running   2          10d
kube-controller-manager-k8s-master01    1/1     Running   3          10d
kube-flannel-ds-amd64-56hsf             1/1     Running   0          25m
kube-flannel-ds-amd64-56t49             1/1     Running   0          25m
kube-flannel-ds-amd64-qz42z             1/1     Running   0          25m
kube-proxy-5fn9m                        1/1     Running   1          10d
kube-proxy-6hjvp                        1/1     Running   2          10d
kube-proxy-t47n9                        1/1     Running   2          10d
kube-scheduler-k8s-master01             1/1     Running   4          10d
kubernetes-dashboard-7d75c474bb-4r7hc   1/1     Running   0          23m
[root@k8s-master01 flannel]# 

After redeploying the Flannel network the existing pods need to be reset: simply delete them and Kubernetes will recreate them automatically with addresses from the new network.
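For example, the CoreDNS and dashboard pods can be deleted by label so they come back with addresses from the new flannel subnets (label selectors assumed from a standard kubeadm/dashboard install; adjust to your own workloads):

kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system delete pod -l k8s-app=kubernetes-dashboard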

[root@k8s-master01 flannel]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=1.04 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.498 ms
64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.575 ms
64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.578 ms
[root@k8s-node01 ~]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 10.244.1.2: icmp_seq=3 ttl=64 time=0.135 ms
64 bytes from 10.244.1.2: icmp_seq=4 ttl=64 time=0.058 ms
^C
[root@k8s-node02 ~]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.760 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.510 ms
64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.442 ms
64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.525 ms
^C
[root@k8s-master01 flannel]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:1f:d8:95:21  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.50  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe2c:d1c2  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2c:d1:c2  txqueuelen 1000  (Ethernet)
        RX packets 737868  bytes 493443231 (470.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1656623  bytes 3510224771 (3.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether aa:50:d6:f9:09:e5  txqueuelen 0  (Ethernet)
        RX packets 14  bytes 1728 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67  bytes 5973 (5.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 6944750  bytes 1242999056 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6944750  bytes 1242999056 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-master01 flannel]# 

The flannel network now looks healthy (flannel.1 is UP and carries 10.244.0.0), and containers can reach each other across hosts!
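As an extra check it can help to test from inside a pod rather than from the hosts, for example with a throw-away busybox pod (a sketch; 10.244.1.2 is the pod address used above):

# Run a one-off pod that pings a pod on another node, then clean itself up
kubectl run pingtest --image=busybox --restart=Never --rm -it -- ping -c 3 10.244.1.2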
