Kubernetes (K8s), Part 7: The flannel and calico network plugins for cross-host pod communication

1. The flannel network plugin

Flannel, from CoreOS, is one solution to cross-host container communication. It supports three backend implementations: UDP, VXLAN, and host-gw.
UDP mode: encapsulation and decapsulation go through the flannel.0 device in user space; this is not natively supported by the kernel, causes heavy context switching, and performs very poorly.
VXLAN mode: encapsulation and decapsulation are done by the flannel.1 device inside the kernel, which is natively supported, so performance is much better.
host-gw mode: no intermediate device like flannel.1 is needed; the host itself is used as the next hop of each subnet, which gives the best performance.
host-gw loses roughly 10% of raw performance, while all schemes based on VXLAN "tunnels" lose about 20%-30%.
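Which backend Flannel uses is selected in the net-conf.json section of the kube-flannel ConfigMap inside kube-flannel.yml. A minimal sketch of that fragment, assuming the 10.244.0.0/16 pod network used later in this article:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Changing "Type" to "host-gw", or adding a DirectRouting option to the vxlan backend, switches Flannel to the other modes discussed below.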

1.1 Flannel: VXLAN mode for cross-host communication

What is VXLAN?
VXLAN (Virtual Extensible LAN) is a network virtualization technology supported natively by Linux. It performs all encapsulation and decapsulation work in kernel space and uses this "tunnel" mechanism to build an overlay network.

The design idea of VXLAN is:
On top of the existing layer-3 network, "overlay" a virtual layer-2 network maintained by the kernel's VXLAN module, so that the "hosts" attached to this VXLAN layer-2 network (virtual machines or containers alike) can communicate as freely as if they were on the same LAN.
To open "tunnels" across this layer-2 network, VXLAN sets up a special network device on each host to serve as the two ends of the tunnel, called the VTEP: VXLAN Tunnel End Point.
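On a node running Flannel in VXLAN mode the VTEP can be inspected directly; a quick check (the device name assumes the default Flannel setup):

ip -d link show flannel.1   # shows "vxlan id 1", the local VTEP address and the UDP port used
ip addr show flannel.1      # the VTEP's own IP address on the pod network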

How Flannel's VXLAN mode works across hosts:
The flannel.1 device is the VXLAN VTEP; it has both an IP address and a MAC address.
As in UDP mode, when container-1 sends a request, the IP packet addressed to 10.1.16.3 first shows up on the docker0 bridge and is then routed to the local flannel.1 device for processing (the tunnel entrance).
To encapsulate the "original IP packet" and deliver it to the right host, VXLAN has to find the tunnel exit: the VTEP device on the destination host. This device information is maintained by the flanneld process on each host.
VTEP devices communicate with each other using layer-2 data frames.
When the source VTEP receives the original IP packet, it prepends the destination VTEP's MAC address, turning it into a layer-2 data frame, and sends it to the destination VTEP (looking up a MAC address from a layer-3 IP address is exactly what the ARP table is for).
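The ARP entries that map a remote flannel.1 IP address to its MAC address are installed by flanneld rather than learned dynamically, and they can be listed on the node:

ip neigh show dev flannel.1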

The encapsulation only adds a layer-2 header; the contents of the "original IP packet" are not changed.
The MAC addresses of these VTEP devices mean nothing to the host network, so this frame is called the "internal data frame" and cannot be transmitted over the hosts' layer-2 network as it is. The Linux kernel has to wrap it into an ordinary data frame of the host so that it can carry the internal data frame out through the host's eth0. To do this, Linux prepends a special VXLAN header to the internal data frame. The VXLAN header carries an important flag called the VNI, which is how a VTEP decides whether an incoming frame is meant for it.
In Flannel the VNI defaults to 1, which is why the VTEP device on every host is called flannel.1.
A flannel.1 device only knows the MAC address of the flannel.1 device on the other end; it does not know which host that device actually lives on.
Inside the Linux kernel, the forwarding decision for such a device comes from the FDB (forwarding database), and the FDB entries for flannel.1 are likewise maintained by the flanneld process. The kernel then prepends an outer layer-2 frame header and fills in Node2's MAC address. That MAC address is learned by Node1's ARP table over the ordinary host network. At this point Linux has finished building the "outer data frame".
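The FDB entries mentioned above, which map each remote VTEP MAC address to the IP of the host behind it, can be inspected with the bridge tool (they are added by flanneld, so the output depends on your nodes):

bridge fdb show dev flannel.1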

Node1's flannel.1 device can then send this data frame out through eth0, and it travels over the host network to arrive at Node2's eth0.
Node2's kernel network stack notices that the frame carries a VXLAN header with VNI 1, so it decapsulates it, extracts the internal data frame, and, based on the VNI, hands it over to Node2's flannel.1 device.


1.2 Flannel: host-gw mode for cross-host communication (pure layer 3)

This is a pure layer-3 networking scheme and has the best performance.
The way host-gw works is to set the next hop of each Flannel subnet to the IP address of the host that owns that subnet. In other words, the host acts as the "gateway" on the container communication path, which is exactly what host-gw means.
All of the subnet and host information is stored in etcd; flanneld only needs to watch this data for changes and update the routing table in real time.
The key point is that when the IP packet is encapsulated into a frame, the MAC address of the "next hop" taken from the routing table is used as the destination MAC, so the frame can reach the destination host directly over the layer-2 network.
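Switching Flannel to host-gw only requires changing the backend type in the same net-conf.json fragment of kube-flannel.yml before reapplying it, which is what the vim step in the transcript below does. A minimal sketch, assuming the 10.244.0.0/16 pod network:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

Keep in mind that host-gw requires all nodes to be reachable from one another at layer 2 (on the same subnet), because each host must be directly usable as a next hop.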

[kubeadm@server1 mainfest]$ cp /home/kubeadm/kube-flannel.yml .
[kubeadm@server1 mainfest]$ ls
cronjob.yml  daemonset.yml  deployment.yml  init.yml  job.yml  kube-flannel.yml  pod2.yml  pod.yml  rs.yml  service.yml
[kubeadm@server1 mainfest]$ kubectl delete -f kube-flannel.yml 
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds-amd64" deleted
daemonset.apps "kube-flannel-ds-arm64" deleted
daemonset.apps "kube-flannel-ds-arm" deleted
daemonset.apps "kube-flannel-ds-ppc64le" deleted
daemonset.apps "kube-flannel-ds-s390x" deleted
[kubeadm@server1 mainfest]$ vim kube-flannel.yml 
[kubeadm@server1 mainfest]$ kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[kubeadm@server1 mainfest]$ kubectl get pod -n kube-system 
NAME                              READY   STATUS    RESTARTS   AGE
coredns-698fcc7d7c-n8j7q          1/1     Running   4          7d13h
coredns-698fcc7d7c-r6tsw          1/1     Running   4          7d13h
etcd-server1                      1/1     Running   4          7d13h
kube-apiserver-server1            1/1     Running   4          7d13h
kube-controller-manager-server1   1/1     Running   4          7d13h
kube-flannel-ds-amd64-n67nh       1/1     Running   0          30s
kube-flannel-ds-amd64-qd4nw       1/1     Running   0          30s
kube-flannel-ds-amd64-wg2tg       1/1     Running   0          30s
kube-proxy-4xlms                  1/1     Running   0          10h
kube-proxy-gx7jc                  1/1     Running   0          10h
kube-proxy-n58d5                  1/1     Running   0          10h
kube-scheduler-server1            1/1     Running   4          7d13h
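Once the host-gw pods are running, every node's routing table should contain a direct route for each remote pod subnet, with the owning node's IP as the next hop and no flannel.1 device involved. A quick check (the subnet and interface names below match this environment and may differ in yours):

ip route | grep 10.244
# expected form: 10.244.2.0/24 via <server3 node IP> dev ens33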


[kubeadm@server1 mainfest]$ vim pod2.yml 
[kubeadm@server1 mainfest]$ cat pod2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[kubeadm@server1 mainfest]$ kubectl apply -f pod2.yml 
deployment.apps/deployment-example configured
[kubeadm@server1 mainfest]$ vim service.yml 
[kubeadm@server1 mainfest]$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: myapp
  type: NodePort
[kubeadm@server1 mainfest]$ kubectl apply -f service.yml 
service/myservice created
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
deployment-example-67764dd8bd-495p4   1/1     Running   0          53s     10.244.1.51   server2   <none>           <none>
deployment-example-67764dd8bd-jl7nl   1/1     Running   1          3h52m   10.244.1.50   server2   <none>           <none>
deployment-example-67764dd8bd-psr8v   1/1     Running   0          53s     10.244.2.76   server3   <none>           <none>
deployment-example-67764dd8bd-zvd28   1/1     Running   1          3h52m   10.244.2.75   server3   <none>           <none>
[kubeadm@server1 mainfest]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        7d15h
myservice    NodePort    10.102.1.239   <none>        80:31334/TCP   21s
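The NodePort service can now be tested from outside the cluster by hitting any node's IP on the allocated port (31334 here); for example:

curl http://<node-ip>:31334
# should return the myapp:v2 response, load-balanced across the four pods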



1.3 Flannel: VXLAN + DirectRouting mode
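VXLAN with DirectRouting behaves like host-gw between nodes that sit on the same layer-2 subnet (direct routes, no encapsulation) and falls back to the VXLAN tunnel for nodes on different subnets. The vim step in the transcript below enables it; a minimal sketch of the edited net-conf.json fragment (the DirectRouting option name is taken from Flannel's backend documentation):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }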

[kubeadm@server1 mainfest]$ kubectl delete -f kube-flannel.yml 
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds-amd64" deleted
daemonset.apps "kube-flannel-ds-arm64" deleted
daemonset.apps "kube-flannel-ds-arm" deleted
daemonset.apps "kube-flannel-ds-ppc64le" deleted
daemonset.apps "kube-flannel-ds-s390x" deleted
[kubeadm@server1 mainfest]$ vim kube-flannel.yml 
[kubeadm@server1 mainfest]$ kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[kubeadm@server1 mainfest]$ kubectl get pod -n kube-system 
NAME                              READY   STATUS    RESTARTS   AGE
coredns-698fcc7d7c-n8j7q          1/1     Running   5          7d15h
coredns-698fcc7d7c-r6tsw          1/1     Running   5          7d15h
etcd-server1                      1/1     Running   5          7d15h
kube-apiserver-server1            1/1     Running   5          7d15h
kube-controller-manager-server1   1/1     Running   5          7d15h
kube-flannel-ds-amd64-6h7l7       1/1     Running   0          8s
kube-flannel-ds-amd64-7gtj9       1/1     Running   0          8s
kube-flannel-ds-amd64-l4fwl       1/1     Running   0          8s
kube-proxy-4xlms                  1/1     Running   2          13h
kube-proxy-gx7jc                  1/1     Running   2          13h
kube-proxy-n58d5                  1/1     Running   2          13h
kube-scheduler-server1            1/1     Running   5          7d15h
[kubeadm@server1 mainfest]$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
deployment-example-67764dd8bd-495p4   1/1     Running   0          19m     10.244.1.51   server2   <none>           <none>
deployment-example-67764dd8bd-jl7nl   1/1     Running   1          4h10m   10.244.1.50   server2   <none>           <none>
deployment-example-67764dd8bd-psr8v   1/1     Running   0          19m     10.244.2.76   server3   <none>           <none>
deployment-example-67764dd8bd-zvd28   1/1     Running   1          4h10m   10.244.2.75   server3   <none>           <none>
[kubeadm@server1 mainfest]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        7d15h
myservice    NodePort    10.102.1.239   <none>        80:31334/TCP   19m


2. The calico network plugin

Reference (official docs): https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Besides pod-to-pod networking, calico can also enforce network policies.
Create a calico project in the private registry to hold the calico images.
Pull the images calico needs:

[root@server1 harbor]# docker pull calico/cni:v3.14.1
[root@server1 harbor]# docker pull calico/pod2daemon-flexvol:v3.14.1
[root@server1 harbor]# docker pull calico/node:v3.14.1 
[root@server1 harbor]# docker pull calico/kube-controllers:v3.14.1
[root@server1 harbor]# docker images| grep calico|awk '{print $1":"$2}'
calico/node:v3.14.1
calico/pod2daemon-flexvol:v3.14.1
calico/cni:v3.14.1
calico/kube-controllers:v3.14.1
[root@server1 harbor]# for i in `docker images| grep calico|awk '{print $1":"$2}'`;do docker tag $i reg.red.org/$i;done
[root@server1 harbor]# docker images| grep reg.red.org\/calico|awk '{print $1":"$2}'
reg.red.org/calico/node:v3.14.1
reg.red.org/calico/pod2daemon-flexvol:v3.14.1
reg.red.org/calico/cni:v3.14.1
reg.red.org/calico/kube-controllers:v3.14.1
[root@server1 harbor]# for i in `docker images| grep reg.red.org\/calico|awk '{print $1":"$2}'`;do docker push $i;done
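If the nodes are meant to pull these images from the private registry instead of Docker Hub, the image references in calico.yaml (downloaded below) also have to point at reg.red.org. One way to do that, assuming the manifest references the images as calico/<name>:v3.14.1 (verify the result before applying):

sed -i 's#image: calico/#image: reg.red.org/calico/#' calico.yaml
grep 'image:' calico.yaml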


########## Before installing the calico network plugin, delete the flannel plugin first to prevent the two plugins from conflicting at runtime
[root@server2 mainfest]# kubectl delete -f kube-flannel.yml 
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds-amd64" deleted
daemonset.apps "kube-flannel-ds-arm64" deleted
daemonset.apps "kube-flannel-ds-arm" deleted
daemonset.apps "kube-flannel-ds-ppc64le" deleted
daemonset.apps "kube-flannel-ds-s390x" deleted
[root@server2 mainfest]# cd /etc/cni/net.d/ 
[root@server2 net.d]# ls
10-flannel.conflist
[root@server2 net.d]# mv 10-flannel.conflist /mnt  ############ run this on every node


[kubeadm@server1 ~]$ wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml ## download the calico manifest
[kubeadm@server1 mainfest]$ cp /home/kubeadm/calico.yaml .
[kubeadm@server1 mainfest]$ ls
calico.yaml  cronjob.yml  daemonset.yml  deployment.yml  init.yml  job.yml  kube-flannel.yml  pod2.yml  pod.yml  rs.yml  service.yml
[kubeadm@server1 mainfest]$ vim calico.yaml 
# Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "off"

[kubeadm@server1 mainfest]$ kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[kubeadm@server1 mainfest]$ kubectl get pod -n kube-system 
NAME                                       READY   STATUS            RESTARTS   AGE
calico-kube-controllers-76d4774d89-r2jg9   1/1     Running           0          15s
calico-node-8dvkh                          0/1     PodInitializing   0          15s
calico-node-l6kw6                          0/1     Init:0/3          0          15s
calico-node-lbqhr                          0/1     PodInitializing   0          15s
coredns-698fcc7d7c-n8j7q                   1/1     Running           5          7d16h
coredns-698fcc7d7c-r6tsw                   1/1     Running           5          7d16h
etcd-server1                               1/1     Running           5          7d16h
kube-apiserver-server1                     1/1     Running           5          7d16h
kube-controller-manager-server1            1/1     Running           5          7d16h
kube-proxy-4xlms                           1/1     Running           2          13h
kube-proxy-gx7jc                           1/1     Running           2          13h
kube-proxy-n58d5                           1/1     Running           2          13h
kube-scheduler-server1                     1/1     Running           5          7d16h
[kubeadm@server1 mainfest]$ kubectl get pod -n kube-system 
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-76d4774d89-r2jg9   1/1     Running    0          103s
calico-node-8dvkh                          0/1     Running    0          103s
calico-node-l6kw6                          0/1     Init:2/3   0          103s
calico-node-lbqhr                          0/1     Running    0          103s
coredns-698fcc7d7c-n8j7q                   1/1     Running    5          7d16h
coredns-698fcc7d7c-r6tsw                   1/1     Running    5          7d16h
etcd-server1                               1/1     Running    5          7d16h
kube-apiserver-server1                     1/1     Running    5          7d16h
kube-controller-manager-server1            1/1     Running    5          7d16h
kube-proxy-4xlms                           1/1     Running    2          13h
kube-proxy-gx7jc                           1/1     Running    2          13h
kube-proxy-n58d5                           1/1     Running    2          13h
kube-scheduler-server1                     1/1     Running    5          7d16h
[kubeadm@server1 mainfest]$ kubectl get pod -n kube-system 
NAME                                       READY   STATUS            RESTARTS   AGE
calico-kube-controllers-76d4774d89-r2jg9   1/1     Running           0          2m18s
calico-node-8dvkh                          0/1     Running           0          2m18s
calico-node-l6kw6                          0/1     PodInitializing   0          2m18s
calico-node-lbqhr                          0/1     Running           0          2m18s
coredns-698fcc7d7c-n8j7q                   1/1     Running           5          7d16h
coredns-698fcc7d7c-r6tsw                   1/1     Running           5          7d16h
etcd-server1                               1/1     Running           5          7d16h
kube-apiserver-server1                     1/1     Running           5          7d16h
kube-controller-manager-server1            1/1     Running           5          7d16h
kube-proxy-4xlms                           1/1     Running           2          13h
kube-proxy-gx7jc                           1/1     Running           2          13h
kube-proxy-n58d5                           1/1     Running           2          13h
kube-scheduler-server1                     1/1     Running           5          7d16h
[kubeadm@server1 mainfest]$ kubectl get daemonsets.apps -n kube-system 
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   3         3         0       3            0           kubernetes.io/os=linux   2m45s
kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   7d16h
[kubeadm@server1 mainfest]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:bb:3e:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.11/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 172.25.1.1/24 brd 172.25.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.3.201/24 brd 192.168.3.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 2408:84fb:1:1209:20c:29ff:febb:3e1d/64 scope global mngtmpaddr dynamic 
       valid_lft 3505sec preferred_lft 3505sec
    inet6 fe80::20c:29ff:febb:3e1d/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e9:5e:cd:d0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5a:94:ba:ba:c0:07 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 0a:6f:17:7d:e9:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.102.1.239/32 brd 10.102.1.239 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: calic3d023fea71@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
7: cali8e3712e48ff@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
[kubeadm@server1 mainfest]$ 
[kubeadm@server1 mainfest]$ kubectl describe ippools
Name:         default-ipv4-ippool
Namespace:    
Labels:       <none>
Annotations:  projectcalico.org/metadata: {"uid":"e086a226-81ff-4cf9-923d-d5f75956a6f4","creationTimestamp":"2020-06-26T11:52:17Z"}
API Version:  crd.projectcalico.org/v1
Kind:         IPPool
Metadata:
  Creation Timestamp:  2020-06-26T11:52:18Z
  Generation:          1
  Managed Fields:
    API Version:  crd.projectcalico.org/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:projectcalico.org/metadata:
      f:spec:
        .:
        f:blockSize:
        f:cidr:
        f:ipipMode:
        f:natOutgoing:
        f:nodeSelector:
        f:vxlanMode:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2020-06-26T11:52:18Z
  Resource Version:  276648
  Self Link:         /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
  UID:               d07545a4-008b-4000-96c0-b49db8ea8543
Spec:
  Block Size:     26
  Cidr:           10.244.0.0/16
  Ipip Mode:      Never
  Nat Outgoing:   true
  Node Selector:  all()
  Vxlan Mode:     Never
Events:           <none>

Looking at the routes: with IPIP disabled, traffic between nodes is forwarded directly through ens33, with no tunnel device involved.
Now edit calico.yaml to enable IPIP tunnel mode and assign a fixed pod IP CIDR:

[kubeadm@server1 mainfest]$ vim calico.yaml
# Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
[kubeadm@server1 mainfest]$ kubectl apply -f calico.yaml 

You can see that each node has now been assigned a tunnel IP.
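A quick way to confirm this on any node is to look at the tunl0 device that calico creates for IPIP mode; its address is allocated from calico's IP pool:

ip addr show tunl0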


Reapply the flannel network plugin:

[kubeadm@server1 mainfest]$ kubectl delete -f calico.yaml 
[kubeadm@server1 ~]$ cd /etc/cni/net.d/
[kubeadm@server1 net.d]$ sudo mv * /mnt
[kubeadm@server1 mainfest]$ kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

