1. Install the Virtual Machines
We need three virtual machine nodes. We can install one node first, then use VirtualBox's clone feature to create the other two.
The minimum requirements are:
- Operating system: CentOS 7.7
- Memory: 2 GB
- CPU: 2 cores
- Disk: 20 GB
1.1 Install VirtualBox
The steps are straightforward, so they are omitted here.
1.2 Download the CentOS installation image
CentOS download mirrors (as of 2019-09-21):
- East China Normal University mirror: http://mirrors.ecnu.edu.cn/centos/ (includes versions older than 7.7)
- Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/centos/ (only the latest 7.7 release)
We can use the Tsinghua mirror to download the latest CentOS 7.7 release: https://mirrors.tuna.tsinghua.edu.cn/centos/7.7.1908/isos/x86_64/CentOS-7-x86_64-DVD-1908.iso
1.3 Create the virtual machine
- (1) Open the VirtualBox main window and click "New"
- (2) Enter a name, set the type to "Linux" and the version to "Red Hat (64-bit)"
- (3) Create a virtual hard disk with a size of 20 GB, then click "Create"
- (4) In the main window, right-click the VM and choose "Settings". First open Network and set the attachment mode to "Bridged Adapter"; this gives the VM an IP on the same subnet as the host, so the host and the VM can reach each other
- (5) Under System, set the number of processors to 2
- (6) Under Storage, attach the CentOS 7.7 installation image we downloaded
- (7) Once the settings are done, start the VM and choose "Install CentOS 7"
- (8) For software selection, choose "Minimal Install" to avoid installing unnecessary packages
- (9) Click "Begin Installation"; while the installation runs, set the root password on the same screen
After a short wait the installation finishes and prompts for a reboot. After rebooting we can log in to the VM. A minimal install has no graphical interface, so we land on a command-line console and log in as root with the password we just set.
2. Configure the VM Environment
2.1 Configure remote access
Working directly in the VirtualBox console is awkward, so we first set up remote access. The ssh server is already enabled by default, so once we know the VM's IP address we can ssh into it from the host.
2.1.1 Install the net-tools utility
Initially the VM has no IP address, cannot ping anything, and cannot install any packages with yum. Running the dhclient command makes the VM obtain an IP address automatically:
dhclient
In addition, change ONBOOT=no to ONBOOT=yes at the end of /etc/sysconfig/network-scripts/ifcfg-{device name}. (The device name can be found with the ifconfig command; if ifconfig is not available, first run the next step to install net-tools.)
sed -i '/ONBOOT=n\|ONBOOT=y/c\ONBOOT=yes' /etc/sysconfig/network-scripts/ifcfg-enp0s3
Then run:
yum -y install net-tools
2.1.2 Get the VM's IP address
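With net-tools installed, the IP address is shown in the inet field of the bridged NIC. A quick check (assuming the NIC is named enp0s3, as in the sed command above):
ifconfig enp0s3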
2.1.3 Log in to the VM from the host via ssh
All subsequent operations can now be done from the host!
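For example, log in as root with the address found in the previous step (192.168.0.12 is just an example; it happens to be the address this VM ends up with later in this guide):
ssh root@192.168.0.12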
2.2 Configure the yum repository
Run the following commands to switch the yum repository to the Aliyun mirror:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
2.3 Disable the firewall
The firewall must be disabled up front, otherwise it will cause trouble later when installing the K8S cluster. Run the following to stop it and disable it on boot:
systemctl stop firewalld && systemctl disable firewalld
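To verify that the firewall is really off (an optional check, not in the original steps), query the service state; it should print "inactive":
systemctl is-active firewalld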
2.4 Disable swap
Linux swap must be disabled before installing the K8S cluster, otherwise swapping will hurt performance and stability. We can take care of this in advance:
- Running swapoff -a disables swap temporarily, but it comes back after a reboot
- Editing /etc/fstab and commenting out the line that contains swap disables it permanently; the following command comments out that line:
sed -i '/ swap / s/^/#/' /etc/fstab
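To confirm that swap is off, the Swap row of free -m should read all zeros (equivalently, swapon --show prints nothing):
free -m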
2.5 安裝docker
安裝docker可以參考官方文檔:https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites, 依次執行如下命令即可:
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Configure the Aliyun repository:
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker:
yum install -y docker-ce docker-ce-cli containerd.io
Start the Docker service and enable it on boot:
systemctl start docker && systemctl enable docker
After the installation succeeds, you can check the Docker version with the docker version command; the latest version at this time is 19.03.
You can run the hello-world image to verify that Docker works correctly:
docker run --rm hello-world
2.6 Install Kubernetes
We use the kubeadm tool provided by Kubernetes to install the cluster; see the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
2.6.1 Configure the Kubernetes yum repository
The official repository is not reachable from China, so we use the Aliyun mirror instead. Run the following command to add the kubernetes.repo repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.6.2 Disable SELinux
- Running setenforce 0 disables it temporarily
- Disabling it permanently requires changing the setting in /etc/sysconfig/selinux:
sed -i '/SELINUX=e\|SELINUX=p\|SELINUX=d/c\SELINUX=disabled' /etc/sysconfig/selinux
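Note that setenforce 0 takes effect immediately, while the change in /etc/sysconfig/selinux only applies after a reboot. You can check the current mode with:
getenforce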
2.6.3 Install the Kubernetes components
yum install -y kubelet kubeadm kubectl
2.6.4 Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
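Until kubeadm init (or kubeadm join) has been run, kubelet has no cluster configuration and will typically keep restarting; this is expected at this stage. You can watch its state with:
systemctl status kubelet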
3. Clone the Virtual Machines
Once the first VM is fully set up, we can use VirtualBox's clone feature to create the other two.
3.1 Shut down the first VM
The first VM must be shut down before cloning:
shutdown -h now
3.2 Clone
Right-click the VM and choose "Clone".
- The first VM is named centos-master and will be the master node of the k8s cluster. Name this clone centos-worker1; it will be worker node 1
- For the clone type, choose "Full clone"
- For the MAC address policy, choose "Generate new MAC addresses for all network adapters"
Clicking "Clone" creates an identical VM. Clone centos-worker2 from the master in the same way.
3.3 Start the three VMs
Start all three VMs and run the ifconfig command on each; the IP addresses of the three nodes are:
- centos-master: 192.168.0.12
- centos-worker1: 192.168.0.13
- centos-worker2: 192.168.0.14
All three addresses are on the same subnet as the host, and the nodes can ping each other. From now on we can ssh into all three VMs from the host at the same time and work on them from there.
3.4 Configure the VMs
Taking the centos-master VM as an example, set the hostname and configure hosts with the following command:
cat > /etc/hostname <<< 'k8s-master' && cat >> '/etc/hosts' <<< '192.168.0.12 k8s-master'
Set the hostnames of the other two VMs to k8s-worker1 and k8s-worker2, and configure their hosts accordingly:
cat > /etc/hostname <<< 'k8s-worker1' && cat >> '/etc/hosts' <<< '192.168.0.13 k8s-worker1'
cat > /etc/hostname <<< 'k8s-worker2' && cat >> '/etc/hosts' <<< '192.168.0.14 k8s-worker2'
The changes only take effect after the VMs are rebooted.
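After rebooting, a quick way to confirm the change on the master (these checks are only a suggestion, not part of the original steps) is to print the hostname and resolve the entry added to /etc/hosts:
hostname
ping -c 1 k8s-master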
4. Create the Cluster
With the preparation done, we can now actually create the cluster. We use the official kubeadm tool; see the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
4.1 Initialize the k8s cluster
Run the following command on the k8s-master node:
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository="gcr.azk8s.cn/google_containers" --kubernetes-version=v1.16.0 --apiserver-advertise-address=192.168.0.12
- The --pod-network-cidr=10.244.0.0/16 option specifies the pod subnet. Because we will use the flannel network plugin later, this must be the CIDR that flannel expects
- The --image-repository="gcr.azk8s.cn/google_containers" option specifies the container image repository, since Google's official registry is not reachable from China
- The --kubernetes-version=v1.16.0 option specifies the k8s version to install. Here I use v1.16.0, the latest version at the time of writing (2019-09-22); newer versions will keep appearing, so set this to the latest version when you install
- The --apiserver-advertise-address=192.168.0.12 option is the address the api-server binds to, i.e. the IP of this k8s-master node
Note: if kubeadm init fails or is forcibly interrupted, run kubeadm reset to reset the node before running it again.
Running the command reports an error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
We set the contents of /proc/sys/net/bridge/bridge-nf-call-iptables to 1, run kubeadm reset, and then initialize again:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
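Writing to /proc only lasts until the next reboot. Optionally (an extra step not in the original instructions), the setting can be made persistent through sysctl:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system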
After the command completes, we get the message that the k8s cluster was initialized successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.12:6443 --token 2i80rk.ca8tjurnpp0yf8h8 \
--discovery-token-ca-cert-hash sha256:60bac2e3c44d074669801486c9f3a10ef60633dbfebbffb5db8ccf7ebe2bed88
First, run the three commands below:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The output also tells us to deploy a pod network, and shows the kubeadm join command that other nodes can use to join the cluster.
4.2 Create the network
If we do not create a network, checking the pod status shows that the coredns pods stay in Pending state and the cluster remains unusable:
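You can check this with kubectl, for example:
kubectl get pods --all-namespaces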
You can consult the official documentation and choose a suitable network plugin. Here I use flannel; its official repository is https://github.com/coreos/flannel .
We can install flannel by applying the kube-flannel.yml file from that repository (https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) with kubectl apply. Note, however, that you will most likely be unable to pull the flannel image, so we change the image in the file before applying it: replace quay.io/coreos/flannel:v0.11.0-amd64 with quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64, a mirror provided by Qiniu Cloud that is reachable from China.
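If you have downloaded kube-flannel.yml locally, one way to make the replacement (just a convenience; editing the file by hand works equally well) is:
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64#g' kube-flannel.yml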
With the image in the yml file changed, we create the flannel network with the following command (the full, already-modified manifest is applied inline here):
kubectl apply -f- <<\EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
EOF
4.3 Fixing the "No networks found in /etc/cni/net.d" error
After creating flannel I waited a long time, but the coredns pods were still Pending and the master node stayed NotReady:
Checking the service with systemctl status kubelet shows errors like Unable to update cni config: No networks found in /etc/cni/net.d in the kubelet logs:
The error messages look like this:
kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:dock... uninitialized
cni.go:202] Error validating CNI config &{cbr0 false [0xc001659e40 0xc001659ec0] [123 10 32 32 34 110 97 109 101 ...12 101 34 58 3
cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
After searching for a long time, I finally found the solution to this problem on Stack Overflow:
the /etc/cni/net.d/10-flannel.conflist configuration file is missing the "cniVersion": "0.2.0" field.
Add "cniVersion": "0.2.0" and rewrite the file:
cat > /etc/cni/net.d/10-flannel.conflist <<\EOF
{
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
EOF
After a short while, all the pods reach Running state, and the master node becomes Ready as well.
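The checks used here (for reference) are the same kubectl commands as before:
kubectl get pods --all-namespaces
kubectl get nodes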
4.4 Join the k8s-worker1 and k8s-worker2 nodes
On each of the two worker nodes, run the kubeadm join command that was printed when the cluster was initialized. Before running it, set the contents of /proc/sys/net/bridge/bridge-nf-call-iptables to 1 here as well:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Then run the kubeadm join command:
kubeadm join 192.168.0.12:6443 --token 2i80rk.ca8tjurnpp0yf8h8 \
--discovery-token-ca-cert-hash sha256:60bac2e3c44d074669801486c9f3a10ef60633dbfebbffb5db8ccf7ebe2bed88
After running it, the worker nodes also stay in NotReady state. Running kubectl describe node k8s-worker1 on the master shows that it is again the /etc/cni/net.d/10-flannel.conflist problem with the flannel network, so we run the same command on both worker nodes:
cat > /etc/cni/net.d/10-flannel.conflist <<\EOF
{
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
EOF
A little while later, running kubectl get nodes on the master shows that all nodes are in Ready state.
This means our three-node k8s cluster has been set up successfully!
5. Installing an Older Version of k8s
In the steps above we did not pin the versions of docker, kubelet, kubeadm, and the other components, so the latest versions were installed each time. To install a specific version, it has to be pinned during installation.
For example, to install k8s v1.11.9:
Pin the Docker version to 18.03.0.ce-1.el7.centos:
yum install -y docker-ce-18.03.0.ce-1.el7.centos docker-ce-cli-18.03.0.ce-1.el7.centos containerd.io
Pin kubelet, kubeadm, and kubectl to version 1.11.9:
yum install -y kubelet-1.11.9 kubeadm-1.11.9 kubectl-1.11.9
Because the 1.11.9 version of kubeadm cannot specify an image repository through the --image-repository option, the k8s images have to be pulled to the machine in advance and re-tagged:
docker pull gcr.azk8s.cn/google_containers/kube-proxy:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-proxy:v1.11.9 k8s.gcr.io/kube-proxy-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-apiserver:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-apiserver:v1.11.9 k8s.gcr.io/kube-apiserver-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-controller-manager:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-controller-manager:v1.11.9 k8s.gcr.io/kube-controller-manager-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/kube-scheduler:v1.11.9
docker tag gcr.azk8s.cn/google_containers/kube-scheduler:v1.11.9 k8s.gcr.io/kube-scheduler-amd64:v1.11.9
docker pull gcr.azk8s.cn/google_containers/pause:3.1
docker tag gcr.azk8s.cn/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull gcr.azk8s.cn/google_containers/etcd:3.2.18
docker tag gcr.azk8s.cn/google_containers/etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull gcr.azk8s.cn/google_containers/coredns:1.1.3
docker tag gcr.azk8s.cn/google_containers/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
Finally, run:
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.9 --apiserver-advertise-address=10.177.106.145
making sure to specify --kubernetes-version=v1.11.9 as shown.