Creating a VM cluster
VMware tutorial
VirtualBox tutorial
Alternatively, you can rent a few servers from any of the major cloud providers, which is less hassle than configuring everything yourself.
It's recommended to create one VM first, install docker, k8s, the tools you need (curl, vim, etc.) and the images, and then clone it into multiple machines; this saves repeating the tool installation on every machine.
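For example, a minimal tool install on the base VM might look like this (the exact package list is my assumption; adjust to taste):
sudo apt-get update
sudo apt-get install -y curl vim git openssh-server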
Install Docker
Installing a K8s-v1.23 high-availability cluster
If you need to deploy a k8s cluster on standalone Alibaba Cloud ECS instances, see here
If you need an arm64 architecture solution, see here
The only catch is that Ubuntu 20.04 and later use keyrings in place of the old apt-key, so the `sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -` line found in many online tutorials no longer works. Docker's official docs already show the correct way:
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Kubernetes defaults the cgroup driver (cgroupdriver) to "systemd", while the Docker service defaults to "cgroupfs"; it's recommended to change Docker's to "systemd" to match Kubernetes.
While you're at it, you can also configure the registry mirror and logging:
$ docker info # check the current cgroup driver
$ sudo vim /etc/docker/daemon.json
# add the following:
# {
#   "registry-mirrors": ["https://ch72w18w.mirror.aliyuncs.com"],
#   "exec-opts": ["native.cgroupdriver=systemd"],
#   "log-driver": "json-file",
#   "log-opts": {"max-size": "100m", "max-file": "3"}
# }
Remember to restart Docker after changing the config:
sudo systemctl daemon-reload
sudo systemctl restart docker
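After the restart you can confirm the driver took effect (this should print systemd):
docker info --format '{{.CgroupDriver}}'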
Install k8s
$ curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet=1.21.14-00 kubeadm=1.21.14-00 kubectl=1.21.14-00
k8s v1.24 dropped dockershim, so v1.24 and earlier versions behave quite differently; it's best to pin an explicit version when installing.
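Optionally, you can also keep apt from upgrading these pinned packages behind your back:
sudo apt-mark hold kubelet kubeadm kubectl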
Clone the VM into several machines
Right-click -> Clone
Change each new machine's hostname and IP according to what you recorded in the network configuration.
Generate a new key pair on each new machine, then distribute the key to the other machines with ssh-copy-id -i /home/<username1>/.ssh/id_rsa.pub <username2>@192.168.X.XXX; if every machine uses the root user, ssh-copy-id 192.168.X.XXX is enough (see the sketch below).
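A sketch of the per-clone steps (the hostname and netplan file name are illustrative assumptions; your paths may differ):
# give the clone a unique hostname
sudo hostnamectl set-hostname spinq-worker1
# edit the static IP, then apply
sudo vim /etc/netplan/00-installer-config.yaml
sudo netplan apply
# generate a fresh key pair for this machine
ssh-keygen -t rsa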
At this point the new worker node should have:
- the network configured
- the ssh key added
- docker and the k8s trio (kubelet, kubeadm, kubectl) installed
Bootstrapping the cluster [master node]
k8s v1.24 dropped dockershim, so v1.24 and earlier versions behave quite differently.
Bootstrapping k8s v1.21
The images k8s needs at startup are hard to download from inside China, so list the required images in advance:
kubeadm config images list
then pull the images from Alibaba Cloud and retag them.
Note that for coredns, the official k8s image has one extra repo level compared with the Alibaba Cloud mirror:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14 k8s.gcr.io/kube-apiserver:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14 k8s.gcr.io/kube-controller-manager:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14 k8s.gcr.io/kube-scheduler:v1.21.14
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14 k8s.gcr.io/kube-proxy:v1.21.14
docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
Do a dry run of the cluster bootstrap; it reports errors:
$ kubeadm init phase preflight
[WARNING SystemVerification] missing optional cgroups: blkio
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[ERROR CRI]: container runtime is not running
For the second error, bump the VM's CPU count yourself; for the third, do the following and retry:
rm /etc/containerd/config.toml
systemctl restart containerd
Once all the preflight problems are resolved, run the real init:
kubeadm init --kubernetes-version=v1.21.14 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.1.111 --pod-network-cidr=10.244.0.0/12 --ignore-preflight-errors=Swap
Besides command-line flags, k8s now officially recommends passing parameters through a config file.
Templates for the parameters can be generated with:
kubeadm config print init-defaults > init-config.yml # default cluster-init parameters
kubeadm config print join-defaults > join-config.yml # default cluster-join parameters
# the file name can have no extension, or end in .conf or .yaml
A detailed explanation of the parameters is here.
Pass the config file when initializing:
kubeadm init --config init-config.yml
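For reference, a minimal init-config.yml equivalent to the flags used above might look like the sketch below (assuming the v1beta2 kubeadm API that v1.21 uses; verify the fields against `kubeadm config print init-defaults`):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.111
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.14
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/12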
If you use Alibaba Cloud ECS and the instance has no public IP, initialization will time out; see [here](https://blog.csdn.net/weixin_47678667/article/details/121680938) for a solution.
Bug
Initializing with a config file gives the following errors, while flags do not:
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.884106 147373 eviction_manager.go:339] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.884248 147373 container_gc.go:85] "Attempting to delete unused containers"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.913385 147373 image_gc_manager.go:321] "Attempting to delete unused images"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.944040 147373 eviction_manager.go:350] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.944706 147373 eviction_manager.go:368] "Eviction manager: pods ranked for eviction" pods=[kube-system/kube-apiserver-spinq-master kube-system/etcd-spinq-master kube-system/kube-controller-manager-spinq-master kube-system/kube-scheduler-spinq-master kube-system/kube-proxy-kk5qc kube-system/kube-flannel-ds-84l5f]
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.945299 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.945719 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/etcd-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946068 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946436 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-spinq-master"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.946787 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kk5qc"
Feb 27 08:25:51 spinq-master kubelet[147373]: E0227 08:25:51.947147 147373 eviction_manager.go:560] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-flannel-ds-84l5f"
Feb 27 08:25:51 spinq-master kubelet[147373]: I0227 08:25:51.947488 147373 eviction_manager.go:391] "Eviction manager: unable to evict any pods from the node"
Posts online say it's caused by pods using too much disk space; I haven't looked into it thoroughly. References:
Kubernetes eviction manager evicting control plane pods to reclaim ephemeral storage
[Node-pressure Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/)
On success it returns something like:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.15:6443 --token fb4x67.zk0lses0315xvzla \
--discovery-token-ca-cert-hash sha256:17167b1f9f4294d12766b1977681f0aa3575b9d978d371aa774fc5b9d978d371aa774fcadc707ff51d
Follow the instructions to set up kubeconfig, so that kubectl can see the cluster.
Regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
Note for root users: if you choose to set KUBECONFIG, the environment variable is only temporary; after a reboot kubectl can't find KUBECONFIG and fails with
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This can be fixed by writing the export into /etc/profile.
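For example:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile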
Once configured, you can inspect the cluster.
At this point the master node's status is NotReady; install a CNI addon as instructed.
Resetting the token
The token kubeadm generates for workers to join the cluster expires after 24 hours, so when you resume work the next day you need to look it up or recreate it.
List the tokens:
kubeadm token list
If the token hasn't expired, you can go on to look up the cert hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
and splice them into a command for the workers:
kubeadm join 192.168.1.111:6443 --token <token> --discovery-token-ca-cert-hash sha256:<cert-hash>
If the token has expired, generate a new one:
kubeadm token create --print-join-command
flannel
Because these resources are easily blocked, download the manifest first:
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Then decide, based on the pod CIDR you planned, whether to modify the net-conf.json parameter:
  net-conf.json: |
    {
      "Network": "10.96.0.0/12",
      "Backend": {
        "Type": "vxlan"
      }
    }
Then apply the manifest to the cluster:
$ kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
The flannel pods may fail to come up; in that case check the failing pod's log:
kubectl logs kube-flannel-ds-spw9v -n kube-system
If you hit `Error registering network: failed to acquire lease: node "spinq-master" pod cidr not assigned`, it means either:
- the pod CIDR was never configured, or
- the pod CIDR doesn't match the `Network` parameter inside `net-conf.json` in kube-flannel.yml
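A quick diagnostic (my addition; the node name is this cluster's) to see what pod CIDR, if any, the node was assigned:
kubectl get node spinq-master -o jsonpath='{.spec.podCIDR}'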
Fix:
- If the cluster was initialized with a config file, make sure the `podSubnet` parameter (under `networking` in the `ClusterConfiguration` part of kube-init-config.yml) matches the `Network` parameter in `net-conf.json` in kube-flannel.yml.
- If the cluster was initialized with command-line flags, make sure `--pod-network-cidr` matches the `Network` parameter in `net-conf.json` in kube-flannel.yml.
After the first successful start, flannel creates two virtual NICs, `cni0` and `flannel.1`, as well as `/etc/cni/net.d/10-flannel.conflist` and `/run/flannel/subnet.env`. Neither the NICs nor these files disappear after `kubeadm reset`; if they aren't cleaned out before the cluster is brought up again, the stale network configuration can break cluster communication.
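A cleanup sketch for after `kubeadm reset` and before the next init (destructive; run only when you intend to rebuild, file names as above):
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo rm -rf /etc/cni/net.d
sudo rm -f /run/flannel/subnet.env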
A bug that stale config can cause:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "722424eeb5901d17fb90d720180f92a2683873110daccf82dc9c7e82f4ac665b" network for pod "coredns-59d64cd4d4-c7sm7": networkPlugin cni failed to set up pod "coredns-59d64cd4d4-c7sm7_kube-system" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 172.244.0.1/24
Workers that have joined the master end up with the same NICs and config files; deleting them rashly can likewise leave the worker unable to reach the master's port 6443 (curl https://<master-ip>:6443 failed). The exact cause is unclear; a reboot restored it.
Calico
Calico's quickstart doesn't go through a manifest, which is a bit puzzling; if you'd rather configure things with the familiar manifest approach, see here.
Likewise, download the manifest first:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
One thing to note about Calico's configuration: calico-node accesses the kubernetes Service on port 443 by default, which makes the apiserver unreachable; add the apiserver's IP and port in the yaml.
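One way to do that (a sketch; calico-node honors these standard env vars, and the address is this cluster's master IP) is to add the following to the calico-node container's env in calico.yaml:
- name: KUBERNETES_SERVICE_HOST
  value: "192.168.1.111"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"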
Apply the manifest:
kubectl apply -f calico.yaml
Once the CNI addon is installed, `kubectl get nodes` reports Ready:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
spinq-master Ready control-plane,master 108m v1.21.14
PS: calico starts remarkably slowly, whether on local VMs or on Alibaba Cloud ECS; tutorials usually quote about 20s, but I always wait around 20 minutes. I'm not sure whether it's kernel resource scheduling or the network while downloading resources.
PS: if you reset the cluster with `kubeadm reset`, it prompts you to delete the CNI config in `/etc/cni/net.d`. If you delete that directory, the next `kubeadm init` reports `container network is not ready: cni config uninitialized`, and even re-running `kubectl apply -f calico.yaml` won't necessarily fix it; a `systemctl restart kubelet` is needed. It feels like the automatic refresh mechanism is at fault.
You can watch its status, and that of every other pod, with `kubectl get pods -A`, and use `docker ps` to check whether the various containers started properly.
PS: in `kubectl get cs`, the scheduler and controller-manager can show as unhealthy under all sorts of circumstances, and the command has been deprecated since v1.19, so don't worry about it too much. Details
Deploying a dockershim-based k8s cluster - 1
Deploying a dockershim-based k8s cluster - 2
Bootstrapping k8s v1.24
Pull the images ahead of time:
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
k8s v1.24 has dropped dockershim and uses containerd by default, so the images actually in use can only be seen with `crictl img`.
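For example (crictl is installed alongside kubeadm as part of cri-tools):
sudo crictl images | grep kube
To make the runtime find the images under the expected k8s.gcr.io names, retag them with ctr: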
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.2 k8s.gcr.io/kube-apiserver:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/kube-proxy:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/etcd:3.5.3-0 k8s.gcr.io/etcd:3.5.3-0
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
Installing containerd and k8s v1.24 - 1
Installing containerd and k8s v1.24 - 2
kubeadm init --kubernetes-version=v1.24.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=Swap
kubelet starts running, but the cluster still reports errors.
Check the logs with `systemctl status kubelet` and `journalctl -xeu kubelet`; pinpointing the problem here takes some patience.
It turns out to be an image version problem, which shows that even the images listed by `kubeadm config images list` are not all reliable:
Jun 28 09:35:57 spinq-master kubelet[10593]: E0628 09:35:57.819736 10593 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"k8s.gcr.io/pause:3.6\": failed to pull image \"k8s.gcr.io/pause:3.6\": failed to pull and unpack image \"k8s.gcr.io/pause:3.6\": failed to resolve reference \"k8s.gcr.io/pause:3.6\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.6\": dial tcp 64.233.189.82:443: i/o timeout"
Pull the missing image (for containerd-related operations, see here):
ctr -n k8s.io image pull registry.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
Adding nodes [worker nodes]
Join each worker node to the cluster with the command produced at the end of init:
$ kubeadm join 10.0.2.15:6443 --token fb4x67.zk0lses0315xvzla --discovery-token-ca-cert-hash sha256:17167b1f9f4294d12766b1977681f0aa3575b9d978d371aa774fc5b9d978d371aa774fcadc707ff51d
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Here too you can monitor whether the node came up with `systemctl status kubelet` and `journalctl -xeu kubelet`; when it doesn't, most failures still need a `kubectl describe` of the node or pod on the master to pin down.
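For example, from the master (names taken from this cluster):
kubectl describe node spinq-worker1
kubectl describe pod kube-flannel-ds-84l5f -n kube-system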
Deploying a service
Take deploying an nginx service as an example.
Basic architecture of a k8s service - 1
Basic architecture of a k8s service - 2
Basic architecture of a k8s service - 3
Write the following config into the file nginx-conf.yml on the master node:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo-deploy
  namespace: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-tag
  template:
    metadata:
      labels:
        app: nginx-tag
    spec:
      containers:
      - name: nginx-ct
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-service
  namespace: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-tag # must match the template metadata label in the Deployment
  ports:
  - protocol: TCP
    port: 3088 # port the Service exposes inside the cluster
    targetPort: 80 # port the pod's container listens on
    nodePort: 30088 # port exposed on each node for external access
Apply the manifest:
$ kubectl apply -f nginx-conf.yml
namespace/nginx-demo unchanged
deployment.apps/nginx-demo-deploy created
service/nginx-demo-service configured
Inspect the resulting pods, deployment, and service.
Note that `kubectl get svc` without a namespace only shows services in the default namespace, so be sure to pass the -n flag:
$ kubectl get pods -n nginx-demo -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-demo-deploy-784fb48fbc-kcjpl 1/1 Running 0 18s 10.240.1.11 spinq-worker1 <none> <none>
nginx-demo-deploy-784fb48fbc-njw9n 1/1 Running 0 18s 10.240.1.10 spinq-worker1 <none> <none>
$ kubectl get deploy -n nginx-demo
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-demo-deploy 2/2 2 2 19s
$ kubectl get svc nginx-demo-service -n nginx-demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-demo-service NodePort 10.109.6.66 <none> 3088:30088/TCP 23s
Visit http://192.168.1.111:30088 and you'll see the nginx default page.
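A quick check from any machine that can reach the node:
curl http://192.168.1.111:30088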