1. Prepare the base environment
We will use kubeadm to deploy a 3-node Kubernetes Cluster.
Node details:
| Hostname | IP address | Role | OS | Spec |
|---|---|---|---|---|
| k8s-master | 192.168.217.131 | master | CentOS 7.6 | 2C4G |
| k8s-node1 | 192.168.217.132 | node | CentOS 7.6 | 2C4G |
| k8s-node2 | 192.168.217.133 | node | CentOS 7.6 | 2C4G |
Component distribution across nodes:
Master and Node have different responsibilities, so different services are installed on each. Once installation completes, the core services started on the Master and the Nodes are as follows:
| Master node | Node |
|---|---|
| kube-apiserver | kube-flannel |
| kube-scheduler | other apps |
| kube-proxy | — |
| etcd | — |
| coredns | — |
| kube-flannel | — |
Unless otherwise noted, run the following steps on all nodes.
Set the hostnames (run on master and nodes):
#On the master node:
hostnamectl set-hostname k8s-master
#On node1:
hostnamectl set-hostname k8s-node1
#On node2:
hostnamectl set-hostname k8s-node2
Basic configuration (master and nodes):
#Add entries to /etc/hosts
cat >> /etc/hosts << EOF
192.168.217.131 k8s-master
192.168.217.132 k8s-node1
192.168.217.133 k8s-node2
EOF
#Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
#Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
Configure time synchronization (master and nodes):
Time is synchronized with chrony: the master syncs against public NTP servers, and all nodes sync against the master.
Configure the master node:
#Install chrony:
yum install -y chrony
#Comment out the default ntp servers
sed -i 's/^server/#&/' /etc/chrony.conf
#Specify upstream public NTP servers and allow other nodes to sync from this host
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF
#Restart chronyd and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd
#Enable network time synchronization
timedatectl set-ntp true
Configure all nodes (note: change the master IP address if yours differs):
#Install chrony:
yum install -y chrony
#Comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf
#Use the internal master node as the upstream NTP server
echo server 192.168.217.131 iburst >> /etc/chrony.conf
#Restart the service and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd
Run chronyc sources on every node; a line beginning with ^* indicates the node is synchronized with its time source.
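A representative check looks like the following (the illustrative output lines are marked with #; the exact names, reach counts, and offsets will differ in your environment):
chronyc sources
#MS Name/IP address        Stratum Poll Reach LastRx Last sample
#^* k8s-master                   3    6   377    25   +163us[+211us] +/-  28ms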
Make bridged packets pass through iptables (master and nodes)
Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables was bypassed. Create the file /etc/sysctl.d/k8s.conf with the following content:
cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the configuration
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
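Note that modprobe on its own does not persist across reboots. As an optional extra step (not part of the original procedure), you can have systemd load br_netfilter at boot and then verify the values took effect:
# Load br_netfilter automatically at boot (systemd-modules-load, standard on CentOS 7)
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Confirm the settings are active
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward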
Prerequisites for enabling IPVS in kube-proxy (master and nodes)
IPVS is now part of the mainline kernel, so before kube-proxy can use IPVS mode the following kernel modules must be loaded:
Run the following script on all Kubernetes nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
#Run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are reloaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the modules loaded correctly.
Each node also needs the ipset package installed. To make the IPVS proxy rules easier to inspect, it is worth installing the management tool ipvsadm as well.
# yum install ipset ipvsadm -y
Install Docker (you can install this ahead of time yourself, or use my earlier offline-installation approach with Alibaba Cloud registry acceleration)
Kubernetes still defaults to Docker as its container runtime, via the dockershim CRI implementation built into the kubelet. Note that Kubernetes 1.13 supports Docker 1.11.1 at the low end and 18.06 at the high end, while the latest Docker release is already 18.09, so we pin the installation to 18.06.1-ce.
#Configure the docker yum repository
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Install a pinned version; here we install 18.06
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
Script-based installation of docker-ce with DaoCloud registry mirror acceleration (optional):
bash Install_docker-ce.sh
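The script itself is not reproduced here. If you prefer to configure the registry mirror by hand, a minimal sketch looks like this (the mirror URL below is a placeholder; substitute the accelerator address you actually use):
# Write /etc/docker/daemon.json with a registry mirror (placeholder URL, replace with your own)
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
systemctl restart docker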
Install kubeadm, kubelet, and kubectl (master and nodes)
The official installation documentation:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
- kubelet: the core component that runs on every node in the cluster, responsible for operations such as starting pods and containers.
- kubeadm: the command-line tool that bootstraps a k8s cluster, used to initialize the Cluster.
- kubectl: the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.
#Configure the kubernetes.repo repository; the official repo is unreachable from mainland China, so we use the Alibaba Cloud yum mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#Install the pinned versions of kubelet, kubeadm, and kubectl on all nodes
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
#Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
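At this stage it is normal for kubelet to keep restarting: it has no cluster configuration yet, and it will settle down once kubeadm init (or kubeadm join) has run. Its state can be checked with:
systemctl status kubelet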
2. Deploy the master node
The complete official documentation:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
Run the initialization on the Master node:
Note that the initialization uses the --image-repository option to pull the required images from the Alibaba Cloud registry.
kubeadm init \
--apiserver-advertise-address=192.168.217.131 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.1 \
--pod-network-cidr=10.244.0.0/16
Explanation of the initialization options:
--apiserver-advertise-address
Specifies which of the Master's interfaces is used to communicate with the other nodes in the Cluster. If the Master has multiple interfaces, it is best to name one explicitly; otherwise kubeadm picks the interface that has the default gateway.
--pod-network-cidr
Specifies the Pod network range. Kubernetes supports many network solutions, and each has its own requirements for --pod-network-cidr. We set 10.244.0.0/16 here because we will deploy flannel, which requires this CIDR.
--image-repository
The default Kubernetes registry is k8s.gcr.io, and gcr.io is not reachable from mainland China. Since 1.13 we can pass the --image-repository option (default k8s.gcr.io) to point it at the Alibaba Cloud mirror: registry.aliyuncs.com/google_containers.
--kubernetes-version=v1.13.1
Turns off version detection. The default value, stable-1, would make kubeadm download the latest release; here we pin the version to 1.13.1.
The initialization proceeds as follows:
[root@k8s-master ~]# kubeadm init \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.13.1 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.217.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.217.131 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.217.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.009858 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 60syk6.vnplamkn3zhwu3s3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.217.131:6443 --token 60syk6.vnplamkn3zhwu3s3 --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69
[root@k8s-master ~]#
(Make a note of the kubeadm join command in the initialization output; it is needed when deploying the worker nodes.)
Notes on the initialization phases:
- [preflight] kubeadm runs checks before initializing.
- [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml".
- [certificates] generates the various tokens and certificates.
- [kubeconfig] generates the KubeConfig files, which the kubelet needs to communicate with the Master.
- [control-plane] installs the Master components, pulling their Docker images from the specified registry.
- [bootstraptoken] generates the token; record it, since kubeadm join uses it later to add nodes to the cluster.
- [addons] installs the add-ons kube-proxy and kube-dns.
- The Kubernetes Master initialized successfully, with a hint on how to configure a regular user to access the cluster with kubectl.
- A hint on how to install the Pod network.
- A hint on how to register other nodes with the Cluster.
Configure kubectl (on the master)
kubectl is the command-line tool for managing a Kubernetes Cluster, and we already installed it on all nodes. Once the Master initialization completes, a little configuration is needed before kubectl is usable.
Following the hint at the end of the kubeadm init output:
#Grant the centos user sudo, with passwordless sudo
sed -i '/^root/a\centos ALL=(ALL) NOPASSWD:ALL' /etc/sudoers
#Save the cluster security configuration into the current user's .kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Enable kubectl command auto-completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc
The credential-copying commands above are needed because access to a Kubernetes cluster is encrypted by default. They copy the security configuration file generated during deployment into the current user's .kube directory, where kubectl looks for credentials by default when accessing the cluster.
Without them, we would have to tell kubectl where the security configuration file lives every time via the KUBECONFIG environment variable.
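For instance, to run kubectl as root without copying the file, point the variable at the admin config (this matches the hint kubeadm itself prints for root):
export KUBECONFIG=/etc/kubernetes/admin.conf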
Once this is configured, the centos user can manage the cluster with kubectl.
Check the cluster status:
[centos@k8s-master ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[centos@k8s-master ~]$
Confirm that every component is in the Healthy state.
Check the node status:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 36m v1.13.1
[centos@k8s-master ~]$
As you can see, there is currently a single master node, and its status is NotReady.
Use kubectl describe to inspect this Node object's details, status, and events (Event):
[centos@k8s-master ~]$ kubectl describe node k8s-master
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 33m kubelet, k8s-master Starting kubelet.
Normal NodeHasSufficientMemory 33m (x8 over 33m) kubelet, k8s-master Node k8s-master status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 33m (x8 over 33m) kubelet, k8s-master Node k8s-master status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 33m (x7 over 33m) kubelet, k8s-master Node k8s-master status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 33m kubelet, k8s-master Updated Node Allocatable limit across pods
Normal Starting 33m kube-proxy, k8s-master Starting kube-proxy.
The kubectl describe output shows that the node is NotReady because no network plugin has been deployed yet, and components such as kube-proxy are still in the starting state.
We can also use kubectl to check the status of the system Pods on this node. kube-system is the workspace (Namespace) that Kubernetes reserves for system Pods; note that this is not a Linux namespace, just the unit Kubernetes uses to divide workspaces:
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-78d4cf999f-7jdx7 0/1 Pending 0 29m <none> <none> <none> <none>
coredns-78d4cf999f-s6mhk 0/1 Pending 0 29m <none> <none> <none> <none>
etcd-k8s-master 1/1 Running 0 34m 192.168.217.131 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 34m 192.168.217.131 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 34m 192.168.217.131 k8s-master <none> <none>
kube-proxy-przwf 1/1 Running 0 34m 192.168.217.131 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 34m 192.168.217.131 k8s-master <none> <none>
[centos@k8s-master ~]$
As you can see, the CoreDNS Pods, which depend on the network, are stuck in Pending, meaning scheduling failed. That is exactly as expected: the network on this Master node is not ready yet.
If cluster initialization runs into problems, run kubeadm reset to clean up and then repeat the initialization.
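A typical cleanup before retrying looks roughly like this; everything beyond kubeadm reset is an optional extra for clearing stale CNI and iptables state, not part of the original procedure:
kubeadm reset
# Optional: remove leftover CNI configuration and flush iptables rules
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F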
Deploy the network add-on
For a Kubernetes Cluster to function, a Pod network must be installed; without one, Pods cannot communicate with each other.
Kubernetes supports several network solutions; here we use flannel.
Run the following command to deploy flannel:
kubectl apply -f kube-flannel.yml
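This assumes kube-flannel.yml is present in the current directory; if it is not, it can be fetched first (the URL below is where the flannel v0.10.0 repository published the manifest; verify it is still available before relying on it):
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml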
The kube-flannel.yml file:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.10.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.10.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: arm
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.10.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.10.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: s390x
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.10.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
[centos@k8s-master ~]$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[centos@k8s-master ~]$
Once the deployment finishes, recheck the Pod status with kubectl get (if some Pods are not Running yet, apply again or wait a while):
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-78d4cf999f-7jdx7 1/1 Running 0 11h 10.244.0.3 k8s-master <none> <none>
coredns-78d4cf999f-s6mhk 1/1 Running 0 11h 10.244.0.2 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 1 11h 192.168.217.131 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 1 11h 192.168.217.131 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 1 11h 192.168.217.131 k8s-master <none> <none>
kube-flannel-ds-amd64-lkf2f 1/1 Running 0 10h 192.168.217.131 k8s-master <none> <none>
kube-proxy-przwf 1/1 Running 1 11h 192.168.217.131 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 1 11h 192.168.217.131 k8s-master <none> <none>
[centos@k8s-master ~]$
As you can see, all system Pods started successfully, and the flannel add-on we just deployed created a new Pod in kube-system named kube-flannel-ds-amd64-lkf2f. Generally speaking, Pods like this are the network plugin's control component on each node.
Kubernetes supports container network plugins through a common interface called CNI, which is also the de facto standard for container networking today. Every open-source container networking project on the market (Flannel, Calico, Canal, Romana, and so on) can plug into Kubernetes through CNI, and they all deploy in a similarly one-command fashion.
Checking the master node again, it is now in the Ready state:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 11h v1.13.1
[centos@k8s-master ~]$
With that, the Kubernetes Master node is fully deployed. If a single-node Kubernetes is all you need, you can start using it now. Keep in mind, though, that by default the Master node will not run user Pods.
3. Deploy the worker (node) nodes
A Kubernetes Worker node is nearly identical to a Master node: both run a kubelet component. The only difference is that during kubeadm init, once the kubelet is up, the Master additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
Run the following on k8s-node1 and k8s-node2 to register them with the Cluster:
#Run this command to join the node to the cluster
kubeadm join 192.168.217.131:6443 --token 67kq55.8hxoga556caxty7s --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69
#If the join command from kubeadm init was not recorded, regenerate it with
kubeadm token create --print-join-command
Running kubeadm join on k8s-node1:
[root@k8s-node1 ~]# kubeadm join 192.168.217.131:6443 --token 67kq55.8hxoga556caxty7s --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.217.131:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.217.131:6443"
[discovery] Requesting info from "https://192.168.217.131:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.217.131:6443"
[discovery] Successfully established connection with API Server "192.168.217.131:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@k8s-node1 ~]#
Repeat the steps above to add k8s-node2 as well (remember to rerun kubeadm token create --print-join-command).
Then, as the output suggests, check the node status with kubectl get nodes:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 11h v1.13.1
k8s-node1 Ready <none> 24m v1.13.1
k8s-node2 Ready <none> 4m9s v1.13.1
[centos@k8s-master ~]$
All nodes now report Ready. Each node has to start a number of components, so if a node shows NotReady, check the Pod status across all nodes and make sure every Pod has pulled its image and reached the Running state:
[centos@k8s-master ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78d4cf999f-7jdx7 1/1 Running 0 11h 10.244.0.3 k8s-master <none> <none>
kube-system coredns-78d4cf999f-s6mhk 1/1 Running 0 11h 10.244.0.2 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 1 12h 192.168.217.131 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 1 12h 192.168.217.131 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 1 12h 192.168.217.131 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-d2r8p 1/1 Running 0 6m43s 192.168.58.102 k8s-node2 <none> <none>
kube-system kube-flannel-ds-amd64-d85c6 1/1 Running 0 27m 192.168.58.101 k8s-node1 <none> <none>
kube-system kube-flannel-ds-amd64-lkf2f 1/1 Running 0 11h 192.168.217.131 k8s-master <none> <none>
kube-system kube-proxy-k8jx8 1/1 Running 0 6m43s 192.168.58.102 k8s-node2 <none> <none>
kube-system kube-proxy-n95ck 1/1 Running 0 27m 192.168.58.101 k8s-node1 <none> <none>
kube-system kube-proxy-przwf 1/1 Running 1 12h 192.168.217.131 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 1 12h 192.168.217.131 k8s-master <none> <none>
[centos@k8s-master ~]$
At this point every node is Ready and the Kubernetes Cluster has been created successfully; everything is in place.
Pod states such as Pending, ContainerCreating, and ImagePullBackOff all mean the Pod is not ready; only Running counts as ready.
If a Pod reports Init:ImagePullBackOff, its image failed to pull on the corresponding node. Use kubectl describe pod to inspect the Pod and identify which image failed:
[centos@k8s-master ~]$ kubectl describe pod kube-flannel-ds-amd64-d2r8p --namespace=kube-system
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m14s default-scheduler Successfully assigned kube-system/kube-flannel-ds-amd64-lzx5v to k8s-node2
Warning Failed 109s kubelet, k8s-node2 Failed to pull image "quay.io/coreos/flannel:v0.10.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: TLS handshake timeout
Warning Failed 109s kubelet, k8s-node2 Error: ErrImagePull
Normal BackOff 108s kubelet, k8s-node2 Back-off pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
Warning Failed 108s kubelet, k8s-node2 Error: ImagePullBackOff
Normal Pulling 94s (x2 over 2m6s) kubelet, k8s-node2 pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
The Events at the end of the output show that the image download failed; with a poor network connection this is quite common. We can simply wait, since Kubernetes retries on its own, or run docker pull ourselves to fetch the image:
[root@k8s-node2 ~]# docker pull quay.io/coreos/flannel:v0.10.0-amd64
v0.10.0-amd64: Pulling from coreos/flannel
ff3a5c916c92: Already exists
8a8433d1d437: Already exists
306dc0ee491a: Already exists
856cbd0b7b9c: Already exists
af6d1e4decc6: Already exists
Digest: sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
Status: Image is up to date for quay.io/coreos/flannel:v0.10.0-amd64
[root@k8s-node2 ~]#
If the image cannot be pulled from quay.io/coreos/flannel:v0.10.0-amd64, pull it from an Alibaba Cloud or Docker Hub mirror instead and retag it to the original name:
docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
Check which images the master node downloaded:
[centos@k8s-master ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 2 weeks ago 80.2MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.13.1 40a63db91ef8 2 weeks ago 181MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.13.1 ab81d7360408 2 weeks ago 79.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.13.1 26e6f1db2a52 2 weeks ago 146MB
registry.aliyuncs.com/google_containers/coredns 1.2.6 f59dcacceff4 8 weeks ago 40MB
registry.aliyuncs.com/google_containers/etcd 3.2.24 3cab8e1b9802 3 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 11 months ago 44.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[centos@k8s-master ~]$
Check which images a node downloaded:
[root@k8s-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 2 weeks ago 80.2MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 11 months ago 44.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[root@k8s-node1 ~]#
Test the cluster components
First verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:
Deploy an Nginx Deployment with 2 Pods.
Reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created
[centos@k8s-master ~]$ kubectl scale deployment nginx --replicas=2
deployment.extensions/nginx scaled
[centos@k8s-master ~]$
Verify that the Nginx Pods run correctly and are assigned cluster IPs beginning with 10.244.:
[centos@k8s-master ~]$ kubectl get pods -l app=nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-54458cd494-p2qgx 1/1 Running 0 111s 10.244.1.2 k8s-node1 <none> <none>
nginx-54458cd494-sdlm7 1/1 Running 0 103s 10.244.2.2 k8s-node2 <none> <none>
[centos@k8s-master ~]$
Next verify that kube-proxy is working:
Expose the service externally as a NodePort.
Reference: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
[centos@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[centos@k8s-master ~]$ kubectl get services nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.108.17.2 <none> 80:30670/TCP 12s
[centos@k8s-master ~]$
The service can be reached from outside the cluster via any NodeIP:Port:
[centos@k8s-master ~]$ curl 192.168.217.131:30670
[centos@k8s-master ~]$ curl 192.168.58.102:30670
[centos@k8s-master ~]$ curl 192.168.58.101:30670
(The original shows screenshots of the Nginx welcome page when accessing the k8s-master, k8s-node1, and k8s-node2 IPs; they are omitted here.)
Finally, verify that DNS and the pod network are working:
Run Busybox in interactive mode:
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66959f6557-s5qqs:/ ]$
Run nslookup nginx
and check that the service's cluster-internal IP resolves correctly, confirming that DNS works:
[ root@curl-66959f6557-s5qqs:/ ]$ nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.108.17.2 nginx.default.svc.cluster.local
Access the service by name to verify that kube-proxy works:
[ root@curl-66959f6557-q472z:/ ]$ curl http://nginx/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-q472z:/ ]$
Access each of the two Pods' internal IPs to verify that cross-Node networking works:
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-s5qqs:/ ]$
Scheduling Pods onto the Master node (optional)
For security reasons, the default Kubernetes configuration does not schedule Pods onto the Master node. Check the default Taints field:
[centos@k8s-master ~]$ kubectl describe node k8s-master
......
Taints: node-role.kubernetes.io/master:NoSchedule
If you want k8s-master to double as a regular Node, run the following command, where k8s-master is the node's hostname:
kubectl taint node k8s-master node-role.kubernetes.io/master-
The Taints field after the change:
[centos@k8s-master ~]$ kubectl describe node k8s-master
......
Taints: <none>
To restore the Master-only behavior, run the following (note that the NoSchedule effect must be included when adding the taint back):
kubectl taint node k8s-master node-role.kubernetes.io/master="":NoSchedule
Enable IPVS in kube-proxy
Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
[centos@k8s-master ~]$ kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
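After the edit, the relevant fragment of config.conf should read as follows (a sketch showing only the changed field; the ConfigMap's many other fields stay untouched):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ...other fields unchanged...
mode: "ipvs"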
Then restart the kube-proxy Pods on each node:
[centos@k8s-master ~]$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2w9sh" deleted
pod "kube-proxy-gw4lx" deleted
pod "kube-proxy-thv4c" deleted
[centos@k8s-master ~]$ kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-6qlgv 1/1 Running 0 65s
kube-proxy-fdtjd 1/1 Running 0 47s
kube-proxy-m8zkx 1/1 Running 0 52s
[centos@k8s-master ~]$
Check the logs:
[centos@k8s-master ~]$ kubectl logs kube-proxy-6qlgv -n kube-system
I1213 09:50:15.414493 1 server_others.go:189] Using ipvs Proxier.
W1213 09:50:15.414908 1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1213 09:50:15.415021 1 server_others.go:216] Tearing down inactive rules.
I1213 09:50:15.461658 1 server.go:464] Version: v1.13.0
I1213 09:50:15.467827 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1213 09:50:15.467997 1 config.go:202] Starting service config controller
I1213 09:50:15.468010 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1213 09:50:15.468092 1 config.go:102] Starting endpoints config controller
I1213 09:50:15.468100 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1213 09:50:15.568766 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1213 09:50:15.568950 1 controller_utils.go:1034] Caches are synced for service config controller
[centos@k8s-master ~]$
The log line Using ipvs Proxier confirms that IPVS mode is enabled.
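The IPVS rules themselves can be inspected with the ipvsadm tool installed earlier; the virtual server table should contain entries for the cluster's Service IPs:
ipvsadm -Ln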
As a Kubernetes beginner, I followed teacher Dabai's documentation for the installation above and tested it successfully. The process had some twists, but it definitely works end to end; feel free to ask me if you run into problems.