Preface:
Deploying a Kubernetes cluster by hand, repeatedly, is the best way to understand its architecture and to practice troubleshooting. I suggest taking a snapshot after deploying each service, so you can keep re-doing any component you don't yet understand or that is misbehaving.
K8s architecture
Kubernetes consists of the following core components:
- etcd stores the state of the entire cluster;
- apiserver is the single entry point for resource operations, providing authentication, authorization, access control, and API registration and discovery;
- controller manager maintains cluster state: failure detection, auto-scaling, rolling updates, and so on;
- scheduler handles resource scheduling, placing Pods onto nodes according to the configured scheduling policies;
- kubelet manages the container lifecycle on each node, as well as volumes (CSI) and networking (CNI);
- Container runtime manages images and actually runs Pods and containers (CRI);
- kube-proxy provides in-cluster service discovery and load balancing for Services;
Beyond the core components there are some recommended add-ons, several of which are now CNCF-hosted projects:
- CoreDNS provides DNS for the whole cluster
- Ingress Controller provides external entry points for services
- Prometheus provides resource monitoring
- Dashboard provides a GUI
- Federation provides clusters spanning availability zones
I Environment
1 Server information
1 Configure the network
cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=db51cbf7-767a-4e16-9d1b-3453caeddf49
DEVICE=ens33
ONBOOT=yes
IPADDR=172.24.124.222
NETMASK=255.255.255.0
GATEWAY=172.24.124.1
Adjust these values to match your own network.
2 Set the hostname
hostnamectl set-hostname k8scluster
3 Disable the firewall. Turn it off for now; once the cluster is up, re-enable it per your workloads and allow only the required IPs.
systemctl stop firewalld
systemctl disable firewalld
4 Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
5 Disable the swap partition
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
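As a sketch of a less destructive variant: instead of stripping the swap lines out of /etc/fstab entirely, they can be commented out so the change is easy to revert (same backup step assumed):

```shell
# Back up fstab, then comment out (rather than delete) any active swap entries
cp /etc/fstab /etc/fstab_bak
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
```

Either way, verify afterwards with `free -h` that swap shows 0.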
6 Configure time synchronization
yum install ntpdate
ntpdate time.windows.com
7 Raise the file descriptor limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
8 Configure the Docker repository and install Docker
cd /etc/yum.repos.d
wget https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
systemctl start docker
systemctl status docker
systemctl enable docker
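To confirm Docker actually picked up the systemd cgroup driver (the kubelet's cgroupDriver must match it later), a quick check such as:

```shell
# Should print "systemd" once daemon.json has been applied and docker restarted
docker info --format '{{.CgroupDriver}}'
```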
9 Configure hosts resolution
/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.24.124.222 k8scluster
10 Create the cluster directories
mkdir -p /opt/etcd/{bin,cfg,ssl}              # required on etcd nodes
mkdir -p /opt/kubernetes/{logs,bin,cfg,ssl}   # required on all cluster nodes
mkdir -p /data/ssl                            # certificate workspace
mkdir -p /data/soft                           # download directory for packages
11 Install common tools
yum install -y vim curl wget lrzsz
2 Kernel tuning and kernel upgrade
1 Upgrade the kernel to the latest stable version
1) Install the kernel build dependency
[ ! -f /usr/bin/perl ] && yum install perl -y
2) Configure the elrepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
3) Install the long-term-support kernel
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available --showduplicates | grep -Po '^kernel-lt.x86_64\s+\K\S+(?=.el7)'
yum --disablerepo="*" --enablerepo=elrepo-kernel install -y kernel-lt{,-devel}
4) Make the new kernel the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
5) Check the default kernel
grubby --default-kernel
6) Docker's official kernel check script recommends (RHEL7/CentOS7: User namespaces disabled; add 'user_namespace.enable=1' to boot command line); enable it with:
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
7) Reboot the server
reboot
8) Verify the running kernel after reboot
# uname -a
Linux master-1 4.4.229-1.el7.elrepo.x86_64 #1 SMP Wed Jul 1 10:43:08 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
2 Kernel parameter tuning
vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.ip_forward = 1
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
kernel.sysrq = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
After saving, load the bridge module and apply the parameters
modprobe br_netfilter
sysctl -p
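To verify the parameters are live (these checks assume the commands above succeeded), something along these lines:

```shell
# Spot-check the settings that Kubernetes networking depends on
lsmod | grep br_netfilter                      # the bridge module must be loaded
sysctl -n net.ipv4.ip_forward                  # expect 1
sysctl -n net.bridge.bridge-nf-call-iptables   # expect 1
```

The bridge-nf parameters fail to apply if br_netfilter was not loaded first, which is the usual cause when `sysctl -p` reports "No such file or directory".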
II Install cfssl
1 Download and install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Purpose: the Kubernetes cluster uses TLS for secure communication, so we need to issue our own private certificates.
III Create the root certificate
1 Create the cluster root CA; both the etcd and Kubernetes certificates are signed by it
cd /data/ssl
1) Create the cluster ca-config.json
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
2) Create ca-csr.json
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
3) Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
4) Copy them into the designated ssl directory (and to the same directory on other nodes)
cp /data/ssl/ca* /opt/kubernetes/ssl/
Mind the expiry setting: in production, make `expiry` as long as practical.
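To double-check what was actually issued, the CA can be decoded with cfssl-certinfo; the validity window should reflect the `expiry` chosen above:

```shell
# Inspect the generated CA: print its validity window from the decoded JSON
cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem | grep -E '"not_(before|after)"'
```

The same command works later on any leaf certificate (etcd, kubernetes, admin, kube-proxy) when debugging TLS errors.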
IV Deploy single-node etcd
1 Download the binary package
cd /data/soft
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
tar xf etcd-v3.4.9-linux-amd64.tar.gz
2 Copy the binaries into the runtime directory
cp /data/soft/etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
3 Create the etcd certificate
1) Write the CSR file
cd /data/ssl/
cat > /data/ssl/etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.24.124.222"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Fill `hosts` with the planned etcd cluster IPs; reserving a few spare IPs here makes scaling etcd out much easier later. Run an odd number of etcd members, typically 3 or 5.
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
3) Copy the certificates into the node directory
cp /data/ssl/etcd*.pem /opt/etcd/ssl/
4 Create the etcd systemd unit
cat /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
ExecStart=/opt/etcd/bin/etcd \
--name etcd-1 \
--cert-file=/opt/etcd/ssl/etcd.pem \
--key-file=/opt/etcd/ssl/etcd-key.pem \
--peer-cert-file=/opt/etcd/ssl/etcd.pem \
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://172.24.124.222:2380 \
--listen-peer-urls https://172.24.124.222:2380 \
--listen-client-urls https://172.24.124.222:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://172.24.124.222:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd \
--listen-metrics-urls=http://0.0.0.0:2381 \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
5 Start etcd and check its status
1) Create the etcd data directory
mkdir -p /var/lib/etcd
2) Start the service
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
3) List the members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" member list
4) Check endpoint status and health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" endpoint status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" endpoint health
5) Check the exposed metrics
curl http://172.24.124.222:2381/metrics
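Beyond the member and health queries, a write/read round trip proves etcd is actually serving requests; the key name /sanity below is arbitrary:

```shell
# Write, read back, then remove a throwaway key over the TLS client port
export ETCDCTL_API=3
ETCD="/opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem \
  --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
  --endpoints=https://172.24.124.222:2379"
$ETCD put /sanity ok   # write
$ETCD get /sanity      # read back: should print /sanity and ok
$ETCD del /sanity      # clean up
```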
V Deploy kube-apiserver
1 Download the server package
cd /data/soft/
wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
Remove the files we don't need:
cd kubernetes/server/bin
rm -rf *_tag
rm -rf *.tar
rm -rf apiextensions-apiserver kubeadm mounter
Copy the binaries into the node directory
cp /data/soft/kubernetes/server/bin/* /opt/kubernetes/bin/
2 Generate the certificates kube-apiserver needs
1) Add the apiserver CSR file
cd /data/ssl/
cat > /data/ssl/kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.30.0.1",
    "127.0.0.1",
    "172.24.124.222",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Note: the IPs are all node IPs plus the apiserver HA (VIP) address. When planning the cluster, reserve a block of IPs and put them all into the CSR; that makes scaling out painless. Without reserved IPs, adding nodes means regenerating and replacing this certificate.
10.30.0.1 is the first address of the Service CIDR (the cluster-internal VIP of the kubernetes Service)
172.24.124.222 is the apiserver address
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
3) Copy into the node directory:
cp /data/ssl/kubernetes*.pem /opt/kubernetes/ssl/
Note: only master nodes need these files.
3 Generate the cluster bootstrap token
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
bf20cc75c3d0fb71c58e0039750a3b7b
cat > /opt/kubernetes/ssl/bootstrap-token.csv << EOF
bf20cc75c3d0fb71c58e0039750a3b7b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Note: the token matters; nodes cannot join the cluster without it.
4 Create the apiserver configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--advertise-address=172.24.124.222 \\
--allow-privileged=true \\
--authorization-mode=Node,RBAC \\
--anonymous-auth=false \\
--bind-address=172.24.124.222 \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--enable-admission-plugins=NodeRestriction \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://172.24.124.222:2379 \\
--kubelet-https=true \\
--kubelet-client-certificate=/opt/kubernetes/ssl/kubernetes.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/kubernetes-key.pem \\
--secure-port=6443 \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-cluster-ip-range=10.30.0.0/16 \\
--service-node-port-range=20000-40000 \\
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \\
--token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \\
--runtime-config=api/all=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--v=2 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Notes:
--logtostderr: log to files instead of stderr when false
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: the bootstrap token file
--service-node-port-range: port range allocated to NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: the apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to etcd
--audit-log-xxx: audit log settings
5 Add systemd management
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
6 Start the kube-apiserver service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
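Two quick checks that the apiserver is actually serving; with anonymous auth disabled, an unauthenticated request is rejected, which still proves TLS is up:

```shell
# The secure port should be listening
ss -tlnp | grep 6443
# Expect an Unauthorized/Forbidden response rather than a connection error
curl -k https://172.24.124.222:6443/healthz
```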
VI Deploy kube-controller-manager
1 Create the kube-controller-manager configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-name=kubernetes \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.30.0.0/16 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
Notes:
--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: elect a leader automatically when several instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA that auto-issues kubelet certificates; must match the apiserver's CA
2 Add systemd management
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
3 Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
VII Deploy kube-scheduler
1 Create the kube-scheduler configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
Notes:
--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: elect a leader automatically when several instances run (HA)
2 Add systemd management
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
3 Start the service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
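Once kubectl is configured (next section), the health of the whole control plane can be summarized with componentstatuses (deprecated in newer Kubernetes releases, but functional on v1.18):

```shell
# scheduler, controller-manager and etcd-0 should all report Healthy
/opt/kubernetes/bin/kubectl get cs
```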
VIII Deploy kubectl
1 Create the CSR and generate the certificate
1) Create the admin CSR file
cat > /data/ssl/admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
3) Copy the certificate files to the nodes that will run kubectl (on the master this is optional, depending on whether the master runs Pods)
cp /data/ssl/admin*.pem /opt/kubernetes/ssl/
4) The set-cluster / set-credentials / set-context steps below exist to produce a trusted admin user for managing the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.24.124.222:6443
/opt/kubernetes/bin/kubectl config set-credentials admin \
--client-certificate=/opt/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/opt/kubernetes/ssl/admin-key.pem
/opt/kubernetes/bin/kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
/opt/kubernetes/bin/kubectl config use-context kubernetes
Note: this writes /root/.kube/config, which kubectl uses from now on to talk to the API server; to run kubectl on another node, copy this file over.
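A sketch of distributing the file; `worker-1` is a placeholder hostname for whatever node should also run kubectl:

```shell
# kubectl looks in ~/.kube/config by default
ssh worker-1 mkdir -p /root/.kube
scp /root/.kube/config worker-1:/root/.kube/config
# Sanity check from wherever the file landed
/opt/kubernetes/bin/kubectl cluster-info
```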
IX Deploy kubelet
1 Generate the bootstrap.kubeconfig file
1) Create a clusterrolebinding so the kubelet-bootstrap user has the permissions it needs against the cluster
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
2) As with kubectl, the goal is a credentialed account and a config file (bootstrap.kubeconfig) that other nodes use to join the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.24.124.222:6443 \
--kubeconfig=bootstrap.kubeconfig
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=bf20cc75c3d0fb71c58e0039750a3b7b \
--kubeconfig=bootstrap.kubeconfig
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
3) Copy bootstrap.kubeconfig into the node config directory
cp /data/ssl/bootstrap.kubeconfig /opt/kubernetes/cfg
2 Create the kubelet configuration files
1) Create kubelet.conf
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8scluster \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/kubernetes/bin/cni \\
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1"
EOF
2) Create kubelet-config.yml
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.30.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Note:
The key settings: 1) clusterDNS must be the planned DNS Service address; 2) anonymous.enabled: false disables unauthenticated access; 3) clientCAFile points at the cluster CA, so any client presenting a CA-signed certificate can authenticate.
3) Create the working directory
mkdir -p /var/lib/kubelet
3 Create the systemd unit
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
4 Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
5 View CSRs and admit nodes
1) List pending join requests (join authentication is enabled, so each request must be approved manually)
/opt/kubernetes/bin/kubectl get csr
2) Approve them all
/opt/kubernetes/bin/kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs /opt/kubernetes/bin/kubectl certificate approve
X Deploy kube-proxy
1 Install ipvs
yum install -y ipvsadm ipset conntrack
Note: a Service can be forwarded in two ways. The first is iptables mode, which constantly adds and removes firewall rules and forwards relatively slowly; the second is ipvs mode, which forwards far more efficiently and does not churn the firewall rules.
2 Generate the kube-proxy certificate
1) Create kube-proxy-csr.json
cat > /data/ssl/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
3) Copy the certificates into the node directory
cp /data/ssl/kube-proxy*.pem /opt/kubernetes/ssl/
3 Generate the kube-proxy.kubeconfig file
1) Generate the node auth config and cluster user
For convenience, create a symlink so kubectl is on the PATH:
ln -s /opt/kubernetes/bin/kubectl /usr/bin/kubectl
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.24.124.222:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
2) Copy the config file into the node directory
cp /data/ssl/kube-proxy.kubeconfig /opt/kubernetes/cfg/
3) Create the working directory
mkdir -p /var/lib/kube-proxy
4 Create the systemd unit
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
5 Create the configuration files
1) Create kube-proxy.conf
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
Note: required on every node that runs kube-proxy.
2) Create kube-proxy-config.yml
cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8scluster
clusterCIDR: 10.30.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
  syncPeriod: "5s"
iptables:
  masqueradeAll: true
EOF
Notes:
-- required on every node that runs kube-proxy
-- hostnameOverride must match the kubelet's setting
-- clusterCIDR: the cluster address range (upstream documentation expects the Pod CIDR here)
-- mode: ipvs enables IPVS
-- scheduler: the IPVS load-balancing algorithm
6 Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
7 Inspect the IPVS forwarding table
ipvsadm -L -n
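Besides the ipvsadm listing, kube-proxy reports its active mode on the metrics port configured above (10249):

```shell
# Should print "ipvs" if the config was picked up
curl -s http://127.0.0.1:10249/proxyMode
```

If it prints "iptables" instead, the usual cause is a missing ip_vs kernel module.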
XI Integrate the Flannel CNI
1 Deploy the CNI plugins
1) Create the CNI binary directory (on all nodes)
mkdir -p /opt/kubernetes/bin/cni
2) Download the CNI plugins
cd /data/soft/
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
3) Extract into the target directory
tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/kubernetes/bin/cni
2 Deploy flannel
1) Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
2) Change the backend type to host-gw:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
Notes:
-- Type is the backend (forwarding) mode
-- Network defines the Pod network CIDR
3) Apply the manifest:
kubectl apply -f kube-flannel.yml
4) Check that the flannel pod was created:
kubectl get pods -n kube-system
5) Check node status:
kubectl get node
3 apiserver-to-kubelet authorization
1) Create the ClusterRole and binding
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
2) Apply the RBAC resources
kubectl apply -f apiserver-to-kubelet-rbac.yaml
Note:
-- kubectl operations such as logs and exec go through the apiserver to the kubelet over its secure port. With the kubelet's anonymous auth set to false, the calling user must be authorized, hence this binding; with anonymous set to true, the read-only port 10255 can be used instead and no authorization is needed.
XII Deploy CoreDNS
1 Download the official manifest
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base
2 Edit the manifest
1) Set the clusterIP:
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.30.0.2
The clusterIP must be changed to the DNS address you planned (the kubelet's clusterDNS).
2) Set the cluster domain suffix:
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
3) Set the resource limits:
resources:
  limits:
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 70Mi
3 Deploy
1) Apply the manifest
kubectl apply -f coredns.yaml.base
2) Check the pod
kubectl get po -n kube-system
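Finally, an end-to-end DNS check from inside a pod; busybox:1.28 is chosen deliberately, since nslookup in newer busybox images is unreliable:

```shell
# Resolve the kubernetes Service via CoreDNS from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default
```

A successful answer pointing at 10.30.0.1 confirms kubelet's clusterDNS, the CoreDNS clusterIP and the Service network are all wired together correctly.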