k8s Installation: Networking

Network

Flannel
### Installing flannel
yum install flannel -y
#### Startup command

/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
WantedBy=docker.service
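
Once the network configuration has been written to etcd (see below), reload systemd and start flanneld, then restart docker so it picks up the options that mk-docker-opts.sh writes to /run/flannel/docker:

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker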
#### Configuration file

cat /etc/sysconfig/flanneld
# Flanneld configuration options  
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
##### Notes
FLANNEL_ETCD_ENDPOINTS: the addresses of the etcd cluster.
FLANNEL_ETCD_PREFIX: the etcd key prefix that flannel queries; it holds the Docker IP address range configuration.
If the host has multiple network interfaces (for example, in a Vagrant environment), add the outbound interface to FLANNEL_OPTIONS, e.g. -iface=eth2, as shown below.
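
For example, extending the FLANNEL_OPTIONS line from the configuration above (eth2 stands in for the actual egress interface on your hosts):

FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth2"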
#### Create the network configuration in etcd
##### Run this only once, on the master

etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379  \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config  '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

###### Verify the flannel configuration

/usr/local/bin/etcdctl \
--endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/config
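
Once flanneld is running on the nodes, each node's subnet lease should also appear under the subnets key; a quick check, assuming the same prefix as above:

/usr/local/bin/etcdctl \
--endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets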
###### Delete the Docker network configuration from etcd

/usr/local/bin/etcdctl \
--endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
rm /kube-centos/network/config
Calico

Add the following settings to the kubelet startup service file.
The DNS address given to the kubelet is a cluster IP, i.e. a Service IP rather than a pod IP, and it must lie within the range set by --service-cluster-ip-range.
--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
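
A minimal sketch of wiring these flags in, assuming the kubelet reads its arguments from an EnvironmentFile such as /etc/kubernetes/kubelet (the path and variable name are illustrative; adapt them to your own unit file):

# /etc/kubernetes/kubelet (fragment; append these to your existing kubelet arguments)
KUBELET_ARGS="--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

systemctl daemon-reload
systemctl restart kubelet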
On Kubernetes, apply the rbac.yaml authorization file first, then apply calico.yaml to deploy.
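For example (file names as given above):

kubectl apply -f rbac.yaml
kubectl apply -f calico.yaml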
Calico treats each host's network protocol stack as a router and every container as a network endpoint attached to that router. The routers run a standard routing protocol between themselves, BGP, and learn how the network topology should forward traffic. Calico is therefore a pure layer-3 solution: it relies on each machine's layer-3 stack to provide connectivity between containers, including containers on different hosts.
Calico does not use an overlay network such as flannel or the libnetwork overlay driver. It is a pure layer-3 approach that uses virtual routing instead of virtual switching: each virtual router propagates reachability information (routes) to the rest of the data center via BGP.
On every compute node, Calico uses the Linux kernel to implement an efficient vRouter that handles data forwarding, and each vRouter announces the routes of the workloads running on it to the whole Calico network via BGP. Small deployments can peer directly with one another; large deployments can use designated BGP route reflectors.
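
After deployment, the BGP sessions between nodes can be inspected with calicoctl, assuming calicoctl is installed and configured against the same etcd:

calicoctl node status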

calico# vim calico.yaml
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.3.1.15:2379,https://10.3.1.16:2379,https://10.3.1.17:2379"

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.  
  etcd_ca: "/calico-secrets/etcd-ca"   # simply uncomment these three lines
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:  
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')   # paste the command's output here
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')      # paste the command's output here
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')          # paste the command's output here
  # If etcd is not TLS-enabled, set these to null.
  # The parameters above are required changes. The file also contains a parameter that sets the pod network range; adjust it to your environment:
   - name: CALICO_IPV4POOL_CIDR
     value: "192.168.0.0/16"
The main calico-node parameters:
         CALICO_IPV4POOL_CIDR: the Calico IPAM IP address pool; pod IPs are allocated from this pool.
         CALICO_IPV4POOL_IPIP: whether to enable IPIP mode; when enabled, Calico creates a virtual tunnel interface named tunl0 on each node.
         FELIX_LOGSEVERITYSCREEN: the log level.
         FELIX_IPV6SUPPORT: whether to enable IPv6.
      An IP pool can use one of two modes: BGP or IPIP. To use IPIP mode, set CALICO_IPV4POOL_IPIP="always"; to run in BGP mode instead, set it to "off".
      IPIP builds a tunnel between the nodes' routes and connects the two networks through it; when IPIP is enabled, Calico creates a virtual network interface named "tunl0" on each node.
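
For example, to keep IPIP enabled (value as described above):

- name: CALICO_IPV4POOL_IPIP      # enable IPIP mode
  value: "always"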

Using BGP mode:
- name: CALICO_IPV4POOL_IPIP      # turn IPIP mode off
  value: "never"
- name: FELIX_IPINIPENABLED       # disable IPIP in Felix
  value: "false"

Official recommendations:

For production with up to 50 nodes:
typha_service_name: "none"
  replicas: 0
For 100-200 nodes:
In the ConfigMap named calico-config, locate the typha_service_name, delete the none value, and replace it with calico-typha.
Modify the replica count in the Deployment named calico-typha to the desired number of replicas.
typha_service_name: "calico-typha"
  replicas: 3
For every additional 200 nodes:
We recommend at least one replica for every 200 nodes and no more than 20 replicas. In production, we recommend a minimum of three replicas to reduce the impact of rolling upgrades and failures.
Warning: If you set typha_service_name without increasing the replica count from its default of 0, Felix will try to connect to Typha, find no Typha instances to connect to, and fail to start.
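
After scaling Typha up, a quick sanity check (a sketch, assuming the standard labels from the calico manifests):

kubectl -n kube-system get service calico-typha
kubectl -n kube-system get pods -l k8s-app=calico-typha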