Kubernetes + Docker + Calico Cluster Installation and Configuration

一、Environment

OS                 Hostname      Role                   IP              Notes
CentOS7.5 x86_64   k8s-master    master/etcd/registry   192.168.168.2   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, docker, calico-image
CentOS7.5 x86_64   work-node01   node01/etcd            192.168.168.3   kube-proxy, kubelet, etcd, docker, calico
CentOS7.5 x86_64   work-node02   node02/etcd            192.168.168.4   kube-proxy, kubelet, etcd, docker, calico


Description of the main functional modules of the cluster:

Master node:
The master node is composed of four main modules: APIServer, scheduler, controller-manager and etcd.

APIServer: The APIServer exposes Kubernetes' RESTful API and is the unified entry point for management commands; every create, delete, update or query of a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the APIServer.

scheduler: The scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with built-in scheduling algorithms and also exposes an interface so users can implement their own scheduling logic.

controller manager: If the APIServer does the front-office work, the controller manager handles the back office. Every resource has a corresponding controller, and the controller manager runs them all. For example, when a Pod is created through the APIServer, the APIServer's job ends once the object has been stored; the controllers then take over and drive the Pod toward its desired state.

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.

Node nodes:
Each Node runs two main modules: kubelet and kube-proxy.

kube-proxy: This module implements service discovery and reverse proxying for Kubernetes. kube-proxy forwards TCP and UDP connections and, by default, distributes client traffic across a Service's backend Pods with a round-robin algorithm. For service discovery it uses etcd's watch mechanism to track changes to Service and Endpoint objects and maintains the Service-to-Endpoint mapping, so changes to backend Pod IPs are transparent to callers. kube-proxy also supports session affinity.

kubelet: The kubelet is the master's agent on each Node and the most important module on it. It maintains and manages all containers on that Node (containers not created through Kubernetes are left alone). In essence, it reconciles the actual running state of Pods with their desired state.


二、Pre-installation preparation on the three hosts

1) Update packages and the kernel
yum -y update
2) Disable the firewall
systemctl disable firewalld.service
3) Disable SELinux
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
4) Install common tools
yum -y install net-tools ntpdate conntrack-tools
5) Tune kernel parameters (how to apply them is sketched after the parameter list)

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_local_port_range = 30000 60999
net.netfilter.nf_conntrack_max = 26214400
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
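
A minimal sketch of applying these parameters, assuming they are saved to /etc/sysctl.d/k8s.conf (br_netfilter must be loaded first so that the bridge-nf-call-* keys exist):

# modprobe br_netfilter      //load the bridge netfilter module
# sysctl --system            //re-read all sysctl configuration files, including /etc/sysctl.d/k8s.conf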

三、Set the hostnames of the three hosts

1) k8s-master

hostnamectl --static set-hostname  k8s-master

2) work-node01

hostnamectl --static set-hostname  work-node01

3) work-node02

hostnamectl --static set-hostname  work-node02


四、Create the CA certificates

1. Create the directories used to generate and store certificates (do this on all three hosts)

mkdir /root/ssl
mkdir -p /opt/kubernetes/{conf,bin,ssl,yaml}

2. Set environment variables (do this on all three hosts)

vi /etc/profile.d/kubernetes.sh
K8S_HOME=/opt/kubernetes
export PATH=$K8S_HOME/bin/:$PATH
source /etc/profile.d/kubernetes.sh

3. Install CFSSL and copy it to node01 and node02

cd /root/ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson
mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl
scp /opt/kubernetes/bin/cfssl* 192.168.168.3:/opt/kubernetes/bin
scp /opt/kubernetes/bin/cfssl* 192.168.168.4:/opt/kubernetes/bin

4. Create the JSON config used to generate the CA

cd /root/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

server auth means a client can use this CA to verify certificates presented by servers.

client auth means a server can use this CA to verify certificates presented by clients.

5. Create the JSON config for the CA certificate signing request (CSR)

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

6. Generate the CA certificate and private key

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
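
cfssl-certinfo can be used to inspect the generated CA and confirm the expiry and usages configured above, for example:

# cfssl-certinfo -cert ca.pem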

Distribute the certificates to each node with scp

# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
# scp ca.csr ca.pem ca-key.pem ca-config.json [email protected]:/opt/kubernetes/ssl
# scp ca.csr ca.pem ca-key.pem ca-config.json [email protected]:/opt/kubernetes/ssl

7. Create the etcd certificate request

# cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
  "127.0.0.1",
  "192.168.168.2",
  "192.168.168.3",
  "192.168.168.4",
  "k8s-master",
  "work-node01",
  "work-node02"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

8. Generate the etcd certificate and private key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# ls etc*
etcd.csr  etcd-key.pem  etcd.pem

Distribute the certificate files

# cp /root/ssl/etcd*.pem /opt/kubernetes/ssl
# scp /root/ssl/etcd*.pem [email protected]:/opt/kubernetes/ssl
# scp /root/ssl/etcd*.pem [email protected]:/opt/kubernetes/ssl

五、Etcd cluster installation and configuration (the three hosts' clocks must be synchronized before configuring)
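
The clocks can be synchronized with ntpdate (installed in section 二); the NTP server below is only an example, substitute your own:

# ntpdate -u pool.ntp.org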

1. Edit /etc/hosts (do this on all three hosts)

vi /etc/hosts

# echo '192.168.168.2 k8s-master
192.168.168.3 work-node01
192.168.168.4 work-node02' >> /etc/hosts

2. Download the etcd package

wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

3. Unpack and install etcd (same on all three hosts)

mkdir /var/lib/etcd
tar -zxvf etcd-v3.3.7-linux-amd64.tar.gz
cp etcd-v3.3.7-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin

4. Create the etcd systemd unit

cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=/opt/kubernetes/conf/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=1 /opt/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target
EOF

Distribute the etcd.service file to the node machines

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/etcd.service

5. Edit the etcd.conf file on k8s-master (192.168.168.2)

vi /opt/kubernetes/conf/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/k8s-master.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,http://127.0.0.1:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="k8s-master"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.2:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.2:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

6. Edit the etcd.conf file on work-node01 (192.168.168.3)

vi /opt/kubernetes/conf/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/work-node01.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,https://127.0.0.1:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="work-node01"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.3:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

7. Edit the etcd.conf file on work-node02 (192.168.168.4)

vi /opt/kubernetes/conf/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/work-node02.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379,https://127.0.0.1:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="work-node02"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.168.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.168.4:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="k8s-master=https://192.168.168.2:2380,work-node01=https://192.168.168.3:2380,work-node02=https://192.168.168.4:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

8. Start etcd on every etcd node and enable it at boot

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd.service
# systemctl status etcd.service

9. Verify the etcd cluster configuration on each node

# etcd --version        //check the installed etcd version
etcd Version: 3.3.7
Git SHA: 56536de55
Go Version: go1.9.6
Go OS/Arch: linux/amd64

Check the etcd cluster health

# etcdctl --endpoints=https://192.168.168.2:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health       
member b1840b0a404e1103 is healthy: got healthy result from https://192.168.168.2:2379
member d15b66900329a12d is healthy: got healthy result from https://192.168.168.4:2379
member f9794412c46a9cb0 is healthy: got healthy result from https://192.168.168.3:2379
cluster is healthy

Check the etcd cluster membership

# etcdctl --endpoints=https://192.168.168.2:2379 \
--ca-file=/opt/kubernetes/ssl/ca.pem \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem member list
b1840b0a404e1103: name=k8s-master peerURLs=https://192.168.168.2:2380 clientURLs=https://192.168.168.2:2379 isLeader=false
d15b66900329a12d: name=work-node02 peerURLs=https://192.168.168.4:2380 clientURLs=https://192.168.168.4:2379 isLeader=true
f9794412c46a9cb0: name=work-node01 peerURLs=https://192.168.168.3:2380 clientURLs=https://192.168.168.3:2379 isLeader=false

六、Install docker-engine on the three hosts

1. For the detailed installation steps, see "Installing Docker on Oracle Linux 7"

2. Configure Docker to use the etcd cluster

Set up Docker's daemon.json

# vi /etc/docker/daemon.json 
{
  "bip": "172.17.0.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"],
  "cluster-store": "etcd://192.168.168.2:2379,192.168.168.3:2379,192.168.168.4:2379"
}

Note: the Docker bip address must not overlap between nodes, e.g. 172.17.1.1/24 on node01 and 172.17.2.1/24 on node02 (see the sketch below).
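
For example, daemon.json on work-node01 could differ from the master's only in the bip value (a sketch; keep whatever registry mirror you actually use):

{
  "bip": "172.17.1.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://wghlmi3i.mirror.aliyuncs.com"],
  "cluster-store": "etcd://192.168.168.2:2379,192.168.168.3:2379,192.168.168.4:2379"
}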

Edit the Docker systemd unit

# vi /usr/lib/systemd/system/docker.service
Change
ExecStart=/usr/bin/dockerd
to
ExecStart=/usr/bin/dockerd --tlsverify \
   --tlscacert=/opt/kubernetes/ssl/ca.pem \
   --tlscert=/opt/kubernetes/ssl/etcd.pem \
   --tlskey=/opt/kubernetes/ssl/etcd-key.pem -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
# systemctl daemon-reload
# systemctl enable docker.service
# systemctl start docker.service
# systemctl status docker.service

3. Test the Docker TLS configuration
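
One way to verify the TLS listener is to call the remote API with the same certificates (a sketch that reuses the etcd client certificate configured above):

# docker --tlsverify \
   --tlscacert=/opt/kubernetes/ssl/ca.pem \
   --tlscert=/opt/kubernetes/ssl/etcd.pem \
   --tlskey=/opt/kubernetes/ssl/etcd-key.pem \
   -H tcp://192.168.168.2:2375 version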


七、Kubernetes cluster installation and configuration

1. Download the Kubernetes release packages (this guide uses v1.10.4)

kubernetes-server-linux-amd64.tar.gz

kubernetes-node-linux-amd64.tar.gz

2. Unpack the Kubernetes archives, which produces a kubernetes directory

tar -zxvf kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-node-linux-amd64.tar.gz

3. Configure k8s-master (192.168.168.2)

1) Copy the k8s binaries into /opt/kubernetes/bin

# cp -r /opt/software/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubeadm} /opt/kubernetes/bin/

# scp /opt/software/kubernetes/node/bin/{kubectl,kube-proxy,kubelet} [email protected]:/opt/kubernetes/bin/
# scp /opt/software/kubernetes/node/bin/{kubectl,kube-proxy,kubelet} [email protected]:/opt/kubernetes/bin/

2) Create the JSON config for the Kubernetes CSR:

# cd /root/ssl
# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.168.2",
    "192.168.168.3",
    "192.168.168.4",
    "10.1.0.1"
    "10.2.0.1",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Note: 10.1.0.1 is the first IP of the service-cluster-ip-range and 10.2.0.1 is the first IP of the cluster-cidr range.

3) Generate the Kubernetes certificate and private key in /root/ssl and distribute them to each node

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# cp kubernetes*.pem /opt/kubernetes/ssl/
# scp kubernetes*.pem [email protected]:/opt/kubernetes/ssl/
# scp kubernetes*.pem [email protected]:/opt/kubernetes/ssl/

4) Create the JSON config for the admin certificate CSR

# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

5) Generate the admin certificate and private key

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
# cp  admin*.pem /opt/kubernetes/ssl/

6) Create the client token file used by kube-apiserver

# mkdir /opt/kubernetes/token     //do the same on every k8s node

# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')    //generate a random bootstrap token
# cat > /opt/kubernetes/token/bootstrap-token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

7) Create the basic username/password authentication file

# vi /opt/kubernetes/token/basic-auth.csv        //add the following content
admin,admin,1
readonly,readonly,2

8) Create the kube-apiserver systemd unit

# vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube.conf
EnvironmentFile=/opt/kubernetes/conf/apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

mkdir /var/log/kubernetes      //do this on every k8s node
mkdir /var/log/kubernetes/apiserver

9) Create the kube.conf file

# vi /opt/kubernetes/conf/kube.conf
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
#
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"                     //log level
#
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
#
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://127.0.0.1:8080"

Note: this file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy; on the node machines the KUBE_MASTER value must be commented out (see the sketch below).

Distribute kube.conf to each node

# scp kube.conf [email protected]:/opt/kubernetes/conf/
# scp kube.conf [email protected]:/opt/kubernetes/conf/
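
On each node, KUBE_MASTER can then be commented out, for example:

# sed -i 's/^KUBE_MASTER=/#KUBE_MASTER=/' /opt/kubernetes/conf/kube.conf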

10) Create the advanced audit policy

cat > /opt/kubernetes/yaml/audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

11) Create the kube-apiserver config file and start the service

# vi /opt/kubernetes/conf/apiserver.conf
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --bind-address=0.0.0.0 --insecure-bind-address=127.0.0.1"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.1.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --enable-bootstrap-token-auth \
  --basic-auth-file=/opt/kubernetes/token/basic-auth.csv \
  --token-auth-file=/opt/kubernetes/token/bootstrap-token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --allow-privileged=true \
  --enable-swagger-ui=true \
  --apiserver-count=3 \
  --audit-policy-file=/opt/kubernetes/yaml/audit-policy.yaml \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kubernetes/apiserver/api-audit.log \
  --log-dir=/var/log/kubernetes/apiserver \
  --event-ttl=1h \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names=aggregator \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true"
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver

12) Create the kube-controller-manager systemd unit

# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube.conf
EnvironmentFile=/opt/kubernetes/conf/controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

mkdir /var/log/kubernetes/controller-manager

13) Create the kube-controller-manager config file and start the service

# vi /opt/kubernetes/conf/controller-manager.conf
###
# The following values are used to configure the kubernetes controller-manager
#
# defaults from config and apiserver should be adequate
#
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --log-dir=/var/log/kubernetes/controller-manager"

Note: --service-cluster-ip-range specifies the CIDR used for Service cluster IPs; this range must not be routable between the nodes and must match the value given to kube-apiserver. --cluster-cidr specifies the Pod network range.

# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager

14) Create the kube-scheduler systemd unit

# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube.conf
EnvironmentFile=/opt/kubernetes/conf/scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

mkdir /var/log/kubernetes/scheduler

15) Create the kube-scheduler config file and start the service

# vi /opt/kubernetes/conf/scheduler.conf
###
# kubernetes scheduler config
#
# default config should be adequate
#
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true \
  --address=127.0.0.1 \
  --log-dir=/var/log/kubernetes/scheduler"
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler

16) Create the kubectl kubeconfig file

Set the cluster parameters

# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443
Cluster "kubernetes" set.Cluster "kubernetes" set.

Set the client authentication parameters

# kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

Set the context parameters

# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
Context "kubernetes" created.

Set the default context

# kubectl config use-context kubernetes
Switched to context "kubernetes".

17) Verify the health of each component

# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

18) Create role bindings

# kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin
clusterrolebindings.rbac.authorization.k8s.io "kube-system-cluster-admin"

Note: on Kubernetes releases earlier than 1.11.0, use the following command

# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created

19) Create the kubelet bootstrapping kubeconfig file

Set the cluster parameters

# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443 --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

Set the client authentication parameters

# kubectl config set-credentials kubelet-bootstrap --token=1cd425206a373f7cc75c958fd363e3fe --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.

The token value is the 128-bit hex string generated on the master when the bootstrap token file was created.
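
Rather than pasting it by hand, the token can be read straight from the file created in step 6 (a small sketch):

# BOOTSTRAP_TOKEN=$(cut -d, -f1 /opt/kubernetes/token/bootstrap-token.csv)
# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig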

Set the context parameters

# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Context "default" created.

Set the default context

# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".

Distribute the generated bootstrap.kubeconfig file to each node

cp bootstrap.kubeconfig /opt/kubernetes/conf/

scp bootstrap.kubeconfig [email protected]:/opt/kubernetes/conf/
scp bootstrap.kubeconfig [email protected]:/opt/kubernetes/conf/

20) Create the JSON config for the kube-proxy certificate CSR

# cd /root/ssl
# cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

21) Generate the kube-proxy certificate and private key

# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# cp kube-proxy*.pem /opt/kubernetes/ssl/
# scp kube-proxy*.pem [email protected]:/opt/kubernetes/ssl/
# scp kube-proxy*.pem [email protected]:/opt/kubernetes/ssl/

22) Create the kube-proxy kubeconfig file

Set the cluster parameters

# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.168.2:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

Set the client authentication parameters

# kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

Set the context parameters

# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

Set the default context

# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

Distribute the kube-proxy.kubeconfig file to each node

cp /root/ssl/kube-proxy.kubeconfig /opt/kubernetes/conf/
scp /root/ssl/kube-proxy.kubeconfig [email protected]:/opt/kubernetes/conf/
scp /root/ssl/kube-proxy.kubeconfig [email protected]:/opt/kubernetes/conf/

4. Configure work-node01/02

1) Install ipvsadm and related tools (same on every node)

yum install -y ipvsadm ipset bridge-utils

2) Create the kubelet working directory (same on every node)

mkdir /var/lib/kubelet

3) Create the kubelet config file

# vi /opt/kubernetes/conf/kubelet.conf
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.168.3"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=work-node01"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
#
## 
#KUBELET_API_SERVER="--api-servers=https://192.168.168.2:6443"
#
## Add your own!
KUBELET_ARGS="--cluster-dns=10.1.0.2 \
  --cgroup-driver=systemd \
  --resolv-conf=/etc/resolv.conf \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --allow-privileged=true \
  --cluster-domain=cluster.local. \
  --hairpin-mode=promiscuous-bridge \
  --fail-swap-on=false \
  --serialize-image-pulls=false \
  --log-dir=/var/log/kubernetes/kubelet"

Note: set KUBELET_ADDRESS to each node's own IP and KUBELET_HOSTNAME to each node's hostname (a sketch for work-node02 follows below). KUBELET_POD_INFRA_CONTAINER can point to a private registry instead, e.g. KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image={private-registry-ip}:80/k8s/pause-amd64:v3.0". The cni-bin-dir path is populated automatically when the Calico network is created.

mkdir /var/log/kubernetes/kubelet    //same on every node

Distribute kubelet.conf to each node

scp /opt/kubernetes/conf/kubelet.conf [email protected]:/opt/kubernetes/conf/
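
On work-node02 the per-node values can then be adjusted, for example:

# sed -i 's/192.168.168.3/192.168.168.4/; s/work-node01/work-node02/' /opt/kubernetes/conf/kubelet.conf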

4) Create the CNI network config file

mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "cniVersion": "0.3.0",
    "type": "calico",
    "etcd_endpoints": "https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379",
    "etcd_key_file": "/opt/kubernetes/ssl/etcd-key.pem",
    "etcd_cert_file": "/opt/kubernetes/ssl/etcd.pem",
    "etcd_ca_cert_file": "/opt/kubernetes/ssl/ca.pem",
    "log_level": "info",
    "mtu": 1500,
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    },
    "kubernetes": {
        "kubeconfig": "/opt/kubernetes/conf/kubelet.conf"
    }
}
EOF

Distribute it to each node

scp /etc/cni/net.d/10-calico.conf [email protected]:/etc/cni/net.d

5) Create the kubelet systemd unit and start the service

# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/opt/kubernetes/conf/kube.conf
EnvironmentFile=/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute the kubelet.service file to each node

scp /usr/lib/systemd/system/kubelet.service [email protected]:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Note: if /opt/kubernetes/conf/kubelet.kubeconfig is not generated automatically, you can copy $HOME/.kube/config from the master, rename it to kubelet.kubeconfig and place it in /opt/kubernetes/conf/ on each node, as sketched below.
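
A sketch of that manual fallback, run from the master:

# cp $HOME/.kube/config /root/kubelet.kubeconfig
# scp /root/kubelet.kubeconfig [email protected]:/opt/kubernetes/conf/
# scp /root/kubelet.kubeconfig [email protected]:/opt/kubernetes/conf/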

6) View the CSR requests (run on k8s-master)

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M   4h        kubelet-bootstrap   Pending

7) Approve the kubelet TLS certificate requests (run on k8s-master)

# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M" approved

8) Check the node status; Ready means everything is working (run on k8s-master)

# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
work-node01   Ready     <none>    11h       v1.10.4
work-node02   Ready     <none>    11h       v1.10.4

9) Create the kube-proxy working directory (same on every node)

mkdir /var/lib/kube-proxy

10) Create the kube-proxy systemd unit

# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
EnvironmentFile=/opt/kubernetes/conf/kube.conf
EnvironmentFile=/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute kube-proxy.service to each node

scp /usr/lib/systemd/system/kube-proxy.service [email protected]:/usr/lib/systemd/system/

11) Create the kube-proxy config file and start the service

# vi /opt/kubernetes/conf/kube-proxy.conf
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.168.3 \
  --hostname-override=work-node01 \
  --kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --cluster-cidr=10.2.0.0/16 \
  --log-dir=/var/log/kubernetes/kube-proxy"

Note: set bind-address to each node's own IP and hostname-override to each node's hostname (see the sketch after the distribution step below).

Distribute kube-proxy.conf to each node

scp /opt/kubernetes/conf/kube-proxy.conf [email protected]:/opt/kubernetes/conf/
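
On work-node02, for example, the per-node values can be adjusted before starting the service:

# sed -i 's/192.168.168.3/192.168.168.4/; s/work-node01/work-node02/' /opt/kubernetes/conf/kube-proxy.conf
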
# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy   

12) Check the LVS status

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.168.2:6443           Masq    1      0          0      

5. Configure the Calico network

1) Download the Calico components

Run on every host:

# wget -N -P /usr/bin/ https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
# chmod +x /usr/bin/calicoctl
# mkdir -p /etc/calico/{conf,yaml}
# mkdir -p /opt/cni/bin
# wget -N -P /opt/cni/bin https://github.com/projectcalico/cni-plugin/releases/download/v3.1.3/calico
# wget -N -P /opt/cni/bin https://github.com/projectcalico/cni-plugin/releases/download/v3.1.3/calico-ipam
# chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
# docker pull quay.io/calico/node:v3.1.3
# docker pull quay.io/calico/kube-controllers:v3.1.3
# docker pull quay.io/calico/cni:v3.1.3

Check the Calico Docker images

# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node               v3.1.3              7eca10056c8e        7 weeks ago         248MB
quay.io/calico/kube-controllers   v3.1.3              240a82836573        7 weeks ago         55MB
quay.io/calico/cni                v3.1.3              9f355e076ea7        7 weeks ago         68.8MB

2) Create the calicoctl configuration for talking to etcd

# vi /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379"
  etcdKeyFile: "/opt/kubernetes/ssl/etcd-key.pem"
  etcdCertFile: "/opt/kubernetes/ssl/etcd.pem"
  etcdCACertFile: "/opt/kubernetes/ssl/ca.pem"

3) Fetch calico.yaml (run on the master)

wget -N -P /etc/calico/yaml https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
wget -N -P /etc/calico/yaml https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml

Note: the version used here matches the Calico Docker image version above.

Modify the contents of calico.yaml

# cd /etc/calico/yaml
### replace the etcd endpoints
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379\"@gi' calico.yaml

### replace the etcd certificates
export ETCD_CERT=`cat /opt/kubernetes/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /opt/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /opt/kubernetes/ssl/ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

### replace the IPPOOL CIDR
sed -i 's/192.168.0.0/10.2.0.0/g' calico.yaml

Apply the Calico resources

kubectl apply -f /etc/calico/yaml/rbac.yaml -n kube-system
kubectl create -f /etc/calico/yaml/calico.yaml -n kube-system

4) Create the Calico environment file

# vi /etc/calico/conf/calico.conf
CALICO_NODENAME=""
ETCD_ENDPOINTS=https://192.168.168.2:2379,https://192.168.168.3:2379,https://192.168.168.4:2379
ETCD_CA_CERT_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
CALICO_IP=""
CALICO_IP6=""
CALICO_AS="65142"
CALICO_LIBNETWORK_ENABLED=true
CALICO_NETWORKING_BACKEND=bird
FELIX_IPV6SUPPORT=false
FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT
FELIX_LOGSEVERITYSCREEN=info

Note: hosts within the same IDC should be given the same CALICO_AS (AS number).

Distribute the calico.conf file to each node

scp /etc/calico/conf/calico.conf [email protected]:/etc/calico/conf/
scp /etc/calico/conf/calico.conf [email protected]:/etc/calico/conf/

5) Create the calico-node systemd unit

# vi /usr/lib/systemd/system/calico-node.service
[Unit]
Description=calico-node
After=docker.service
Requires=docker.service

[Service]
User=root
PermissionsStartOnly=true
EnvironmentFile=/etc/calico/conf/calico.conf
ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \
 -e NODENAME=${CALICO_NODENAME} \
 -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \
 -e ETCD_CA_CERT_FILE=${ETCD_CA_CERT_FILE} \
 -e ETCD_CERT_FILE=${ETCD_CERT_FILE} \
 -e ETCD_KEY_FILE=${ETCD_KEY_FILE} \
 -e IP=${CALICO_IP} \
 -e IP6=${CALICO_IP6} \
 -e AS=${CALICO_AS} \
 -e CALICO_LIBNETWORK_ENABLED=${CALICO_LIBNETWORK_ENABLED} \
 -e CALICO_NETWORKING_BACKEND=${CALICO_NETWORKING_BACKEND} \
 -e FELIX_IPV6SUPPORT=${FELIX_IPV6SUPPORT} \
 -e FELIX_DEFAULTENDPOINTTOHOSTACTION=${FELIX_DEFAULTENDPOINTTOHOSTACTION} \
 -e FELIX_LOGSEVERITYSCREEN=${FELIX_LOGSEVERITYSCREEN} \
 -v /opt/kubernetes/ssl/ca.pem:/opt/kubernetes/ssl/ca.pem \
 -v /opt/kubernetes/ssl/etcd.pem:/opt/kubernetes/ssl/etcd.pem \
 -v /opt/kubernetes/ssl/etcd-key.pem:/opt/kubernetes/ssl/etcd-key.pem \
 -v /run/docker/plugins:/run/docker/plugins \
 -v /lib/modules:/lib/modules \
 -v /var/run/calico:/var/run/calico \
 -v /var/log/calico:/var/log/calico \
 -v /var/lib/calico:/var/lib/calico \
 quay.io/calico/node:v3.1.3

ExecStop=/usr/bin/docker rm -f calico-node
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

mkdir /var/log/calico     //same on every host
mkdir /var/lib/calico

Note: NODENAME is each host's hostname and IP is the IP of each host's outward-facing interface.

Distribute the calico-node.service file to each node

scp /usr/lib/systemd/system/calico-node.service [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/calico-node.service [email protected]:/usr/lib/systemd/system/
# systemctl daemon-reload
# systemctl enable calico-node
# systemctl start calico-node
# systemctl status calico-node   

6) Start Calico on each host

On the master:

calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.2

On the nodes:

calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.3
calicoctl node run --node-image=quay.io/calico/node:v3.1.3 --ip=192.168.168.4

7) View the Calico node information

# calicoctl get node -o wide
NAME         ASN         IPV4               IPV6   
k8s-master    65412      192.168.168.2/32          
work-node01   65412      192.168.168.3/24          
work-node02   65412      192.168.168.4/24          

8) View the BGP peer status

# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.168.2 | node-to-node mesh | up    | 07:41:40 | Established |
| 192.168.168.3 | node-to-node mesh | up    | 07:41:43 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

9) View the created IP pools

# calicoctl get ippool -o wide
NAME                  CIDR                  NAT     IPIPMODE   DISABLED   
default-ipv4-ippool   10.2.0.0/16           true    Always     false 
default-ipv6-ippool   fd93:317a:e57d::/48   false   Never      false      

10) Create the Docker network IP pool (with IP-in-IP disabled)

calicoctl apply -f - << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool-docker-01
spec:
  cidr: 172.17.0.0/16
  ipipMode: Never
  natOutgoing: true
EOF
# calicoctl get ippool -o wide
NAME                  CIDR                  NAT     IPIPMODE   DISABLED   
default-ipv6-ippool   fd93:317a:e57d::/48   false   Never      false      
ippool-docker         10.2.0.0/16           true    Always     false      
ippool-docker-172     172.17.0.0/16         true    Never      false 

Check the routing table

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.168.1   0.0.0.0         UG    100    0        0 ens34
10.2.178.64     192.168.168.3   255.255.255.192 UG    0      0        0 tunl0
10.2.233.128    192.168.168.4   255.255.255.192 UG    0      0        0 tunl0
10.2.235.192    0.0.0.0         255.255.255.192 U     0      0        0 *
172.17.0.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.17.1.0      192.168.168.3   255.255.255.0   UG    0      0        0 ens34
172.17.2.0      192.168.168.4   255.255.255.0   UG    0      0        0 ens34
192.168.168.0   0.0.0.0         255.255.255.0   U     101    0        0 ens34

6. Deploy Kubernetes DNS (run on the master)

1) Download the kubernetes.tar.gz package

# cd /opt/software
# tar -zxvf kubernetes.tar.gz
# mv /opt/software/kubernetes/cluster/addons/dns/coredns.yaml.base /opt/kubernetes/yaml/coredns.yaml

2) Modify the coredns.yaml config file

# vi /opt/kubernetes/yaml/coredns.yaml
In coredns.yaml, change the following two places to your own domain and cluster IP:
1. kubernetes __PILLAR__DNS__DOMAIN__
   becomes: kubernetes cluster.local.
2. clusterIP: __PILLAR__DNS__SERVER__
   becomes: clusterIP: 10.1.0.2
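
The same edits can also be made with sed (a small sketch):

# sed -i 's/__PILLAR__DNS__DOMAIN__/cluster.local./g' /opt/kubernetes/yaml/coredns.yaml
# sed -i 's/__PILLAR__DNS__SERVER__/10.1.0.2/g' /opt/kubernetes/yaml/coredns.yaml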

3) Create CoreDNS

# kubectl create -f coredns.yaml
serviceaccount "coredns" created
clusterrole.rbac.authorization.k8s.io "system:coredns" created
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created
configmap "coredns" created
deployment.extensions "coredns" created
service "coredns" created

4) Check the CoreDNS service status

# kubectl get pod -n kube-system -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP             NODE
calico-kube-controllers-98989846-5b2kv   1/1       Running   2          4d        10.10.10.213   node10-213
calico-node-bcv9m                        2/2       Running   27         7d        10.10.10.214   node10-214
calico-node-jmr72                        2/2       Running   18         7d        10.10.10.215   node10-215
calico-node-plxsb                        2/2       Running   14         7d        10.10.10.213   node10-213
calico-node-wzthq                        2/2       Running   14         7d        10.10.10.212   node10-212
coredns-77c989547b-46vp6                 1/1       Running   4          2d        10.2.12.66     node10-215
coredns-77c989547b-qz4zp                 1/1       Running   0          2d        10.2.191.130   node10-212
# kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.1.0.1     <none>        443/TCP         11d
kube-system   coredns                ClusterIP   10.1.0.2     <none>        53/UDP,53/TCP   2d

5) Test CoreDNS resolution

# kubectl run -i --tty busybox --image=docker.io/busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.baidu.com
Server:         10.1.0.2
Address:        10.1.0.2:53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com
Name:   www.a.shifen.com
Address: 220.181.112.244
Name:   www.a.shifen.com
Address: 220.181.111.188

*** Can't find www.baidu.com: No answer

7. Deploy the Kubernetes Dashboard (run on the master)

1) Install git on the master and download the Dashboard manifests

# yum -y install git
# cd /opt/kubernetes/yaml
# git clone https://github.com/unixhot/salt-kubernetes.git

2) Create the Dashboard service

# cd /opt/kubernetes/yaml/salt-kubernetes/addons
# kubectl create -f dashboard/
serviceaccount "admin-user" created
clusterrolebinding.rbac.authorization.k8s.io "admin-user" created
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
clusterrole.rbac.authorization.k8s.io "ui-admin" created
rolebinding.rbac.authorization.k8s.io "ui-admin-binding" created
clusterrole.rbac.authorization.k8s.io "ui-read" created
rolebinding.rbac.authorization.k8s.io "ui-read-binding" created

# kubectl cluster-info
Kubernetes master is running at https://192.168.168.2:6443
kubernetes-dashboard is running at https://192.168.168.2:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3) Check the status

Check the Service status:

# kubectl get svc -o wide --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes             ClusterIP   10.1.0.1     <none>        443/TCP         3d        <none>
kube-system   kubernetes-dashboard   NodePort    10.1.56.6    <none>        443:31944/TCP   51s       k8s-app=kubernetes-dashboard

Note: in the PORT(S) column, 31944 is the NodePort used to access the Dashboard.

4) Access the Dashboard through a node IP (https://192.168.168.3:31944), choose the "Token" sign-in method, but do not click "Sign in" yet

5) Generate the token on the master

# kubectl cluster-info
Kubernetes master is running at https://192.168.168.2:6443
kubernetes-dashboard is running at https://192.168.168.2:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master addons]# kubectl get svc -o wide --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       kubernetes             ClusterIP   10.1.0.1     <none>        443/TCP         3d        <none>
kube-system   kubernetes-dashboard   NodePort    10.1.56.6    <none>        443:31944/TCP   51s       k8s-app=kubernetes-dashboard
[root@k8s-master addons]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-vphg6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=c010fcf0-8e8b-11e8-b69f-000c29199c1e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZwaGc2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMDEwZmNmMC04ZThiLTExZTgtYjY5Zi0wMDBjMjkxOTljMWUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.xQWSsLRyaEAB4EFFSPe2OsskcRJSEFBfSa7vKllWF_vYKG1fvpVlx1OT__V_oZ-J8hCoOyPOR8PZUGSJJnnJxuBC5tx9Hpti-TKPZRw6EfBqAP3Z3LP8zcMuCZSIaoVHJTbCokkJ0xGh1e0haS5VqRRiNbPWhvU9b4rX71050Sqdwguz9GrqX75Y03A06hCe5Sh1IWW1Qz_8rsyaONNVwdE5P0C-EkloBOxFekcbTaYGpST3bVVu1m7Wxg_SgF9ZDqFm7mdBCyiO-fc_Cy4ETYaGd8JO-OBbTs71rt8FK6nFMIete95-HpIfFvR9YdLhlvzuARe0t8kpnkRX2VSetQ

Note: copy the generated token string into the "Enter token" field on the login page, then click "Sign in".

6) Dashboard login page

 
