Kubeadm Deployment -- Single-Master Cluster

The three officially supported deployment methods

1. Minikube

Minikube is a tool that quickly runs a single-node Kubernetes cluster locally; it is intended only for trying out Kubernetes or for day-to-day development.

Deployment docs: https://kubernetes.io/docs/setup/minikube/

2. kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

3. Binary packages

Recommended: download the official release binary packages and deploy every component by hand to assemble a Kubernetes cluster.

Releases: https://github.com/kubernetes/kubernetes/releases

Introduction to deploying with kubeadm

Kubeadm is a tool that provides kubeadm init and kubeadm join as a best-practice "fast path" for creating Kubernetes clusters.

kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons (such as the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons) is out of scope.

Instead, we expect higher-level, more tailored tooling to be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments makes it easier to create conformant clusters.

Environment

| Hostname | Role   | Specs        | OS         | Public IP | Private IP  | Kernel version              |
|----------|--------|--------------|------------|-----------|-------------|-----------------------------|
| k8s-m01  | master | 2 CPU / 4 GB | CentOS 7.7 | 10.0.0.61 | 172.16.1.61 | 4.4.224-1.el7.elrepo.x86_64 |
| k8s-m02  | node   | 2 CPU / 4 GB | CentOS 7.7 | 10.0.0.62 | 172.16.1.62 | 4.4.224-1.el7.elrepo.x86_64 |

1. Initial Preparation

1. Set hostnames

hostnamectl set-hostname k8s-m01   # on k8s-m01
hostnamectl set-hostname k8s-m02   # on k8s-m02

2. Add hosts entries

cat >>/etc/hosts<<EOF
10.0.0.61 k8s-m01
10.0.0.62 k8s-m02
EOF

3. Time synchronization

echo "*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1" >/var/spool/cron/root

4. Load kernel modules and tune kernel parameters

cat >/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

modprobe ip_vs_rr
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf


#Note: tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled, otherwise services become unreachable; the unused IPv6 stack is disabled to avoid triggering a Docker bug.

#Error: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables; fix: modprobe br_netfilter
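Note that modprobe does not persist across reboots, and we reboot after the kernel upgrade below. A minimal sketch to have the modules loaded at boot (the file name kubernetes.conf is arbitrary):

```bash
# Load the required modules automatically at boot via systemd-modules-load
cat >/etc/modules-load.d/kubernetes.conf<<EOF
br_netfilter
ip_vs_rr
EOF
```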

5. Disable swap

If swap is enabled, kubelet fails to start (this check can be skipped by setting --fail-swap-on to false), so swap must be disabled on every node.

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
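To confirm swap is really off:

```bash
free -h      # the Swap line should read 0
swapon -s    # should print nothing
```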

6. Stop and disable firewalld and SELinux

On every machine, stop the firewall, flush the firewall rules, and set the default forward policy:

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
setenforce 0
sed -i  '/^SELINUX/s#enforcing#disabled#g' /etc/selinux/config

7. Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache
yum install docker-ce-18.06.3.ce -y

mkdir -p /etc/docker/
cat >/etc/docker/daemon.json<<EOF
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com","https://dockerhub.azk8s.cn"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "storage-driver": "overlay2",
    "max-concurrent-uploads": 10,
    "debug": true,
    "log-opts": {
    "max-size": "100m",
    "max-file": "10"
    }
}
EOF

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
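Before moving on, it is worth confirming the daemon picked up the daemon.json settings, since a cgroup-driver mismatch with kubelet is a common source of init failures:

```bash
# Expect "Cgroup Driver: systemd" and "Storage Driver: overlay2"
docker info | grep -E 'Cgroup Driver|Storage Driver'
```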

8. Upgrade the kernel

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt -y
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
reboot
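After the reboot, confirm the node came back up on the new kernel:

```bash
uname -r    # expect the elrepo kernel-lt version, e.g. 4.4.224-1.el7.elrepo.x86_64
```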

2. Deploy the Cluster with kubeadm

2.1 Install kubelet, kubeadm, and kubectl

Install kubeadm, kubelet, and kubectl on all nodes (k8s-m01, k8s-m02); note that kubectl is not strictly required on worker nodes.
kubeadm: the command to bootstrap and manage the cluster.
kubelet: the component that runs on every machine in the cluster and starts pods and containers.
kubectl: the command-line tool for talking to the cluster.
kubeadm does not install or manage kubelet or kubectl for you, so you must ensure their versions match the Kubernetes control-plane version that kubeadm installs. Otherwise you risk version skew, which can lead to unexpected, buggy behavior. One minor version of skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 API server, but not vice versa.


# Configure the Kubernetes package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
 
# Enable kubelet on boot and start it now (it will crash-loop until kubeadm init/join runs; that is expected)
systemctl enable kubelet && systemctl start kubelet

# Note: Kubernetes releases move quickly, so it is recommended to pin the exact version you need
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable --now kubelet

2.2 Download the Images

Run the following command on the k8s-m01 and k8s-m02 nodes to get the list of images that need to be downloaded:

# List the required images
[root@k8s-master ~]# kubeadm config images list
W0523 23:05:07.985769   44308 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Because k8s.gcr.io is unreachable from mainland China, pull the required images from the daocloud.io mirror first, then retag them.

Download the images required by the Kubernetes cluster on all nodes (k8s-m01, k8s-m02).

# Pull the images
docker pull daocloud.io/daocloud/kube-apiserver:v1.18.3
docker pull daocloud.io/daocloud/kube-controller-manager:v1.18.3
docker pull daocloud.io/daocloud/kube-scheduler:v1.18.3
docker pull daocloud.io/daocloud/kube-proxy:v1.18.3
docker pull daocloud.io/daocloud/pause:3.2
docker pull daocloud.io/daocloud/etcd:3.4.3-0
docker pull daocloud.io/daocloud/coredns:1.6.7


# Retag the images
docker tag daocloud.io/daocloud/kube-proxy:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3 
docker tag daocloud.io/daocloud/kube-apiserver:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3
docker tag daocloud.io/daocloud/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3
docker tag daocloud.io/daocloud/kube-scheduler:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3
docker tag daocloud.io/daocloud/pause:3.2 k8s.gcr.io/pause:3.2
docker tag daocloud.io/daocloud/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag daocloud.io/daocloud/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0


# Remove the original mirror tags
docker rmi daocloud.io/daocloud/kube-apiserver:v1.18.3
docker rmi daocloud.io/daocloud/kube-controller-manager:v1.18.3
docker rmi daocloud.io/daocloud/kube-scheduler:v1.18.3
docker rmi daocloud.io/daocloud/kube-proxy:v1.18.3
docker rmi daocloud.io/daocloud/pause:3.2
docker rmi daocloud.io/daocloud/etcd:3.4.3-0
docker rmi daocloud.io/daocloud/coredns:1.6.7
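The pull/retag/cleanup sequence above is repetitive; it can be collapsed into a small loop. A sketch, assuming the same daocloud.io mirror and the v1.18.3 image list:

```bash
#!/usr/bin/env bash
set -e
# Image list matches `kubeadm config images list` above
images="kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 \
kube-scheduler:v1.18.3 kube-proxy:v1.18.3 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"
for img in $images; do
    docker pull "daocloud.io/daocloud/${img}"                       # pull from the mirror
    docker tag  "daocloud.io/daocloud/${img}" "k8s.gcr.io/${img}"   # retag as k8s.gcr.io
    docker rmi  "daocloud.io/daocloud/${img}"                       # drop the mirror tag
done
```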



Check the images:

[root@k8s-m01 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.3             3439b7546f29        3 days ago          117MB
k8s.gcr.io/kube-scheduler            v1.18.3             76216c34ed0c        3 days ago          95.3MB
k8s.gcr.io/kube-apiserver            v1.18.3             7e28efa976bd        3 days ago          173MB
k8s.gcr.io/kube-controller-manager   v1.18.3             da26705ccb4b        3 days ago          162MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        3 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        3 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        7 months ago        288MB

2.3 Initialize the k8s-m01 Node

Run the initialization on the k8s-m01 node:

# Initialize the master
[root@k8s-m01 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 10.0.0.61
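The same flags can also be expressed as a kubeadm configuration file, which is easier to review and keep in version control. A sketch (the file name kubeadm-config.yaml is arbitrary):

```bash
# Equivalent init driven by a config file instead of command-line flags
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.61
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml
```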

Log output:

[root@k8s-m01 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 10.0.0.61
W0524 00:29:06.536045    3987 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-m01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.61]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-m01 localhost] and IPs [10.0.0.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-m01 localhost] and IPs [10.0.0.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0524 00:29:09.542684    3987 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0524 00:29:09.543513    3987 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502093 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-m01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-m01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yxajl5.miophf2rv2ysodst
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
##################### use this to join worker nodes
kubeadm join 10.0.0.61:6443 --token yxajl5.miophf2rv2ysodst \
    --discovery-token-ca-cert-hash sha256:8d59d2757c6af60bd70ae998056fde9eee6a283f03c581bba1f09ff497db42ca 

# A possible error:
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 223.5.5.5:53: no such host

# Cause: no hosts entries were configured
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

On the k8s-m01 node, run the following commands as prompted:

[root@k8s-m01 ~]# mkdir -p $HOME/.kube
[root@k8s-m01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-m01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Install the Network Plugin

We choose the flannel network plugin here; other network plugins can be used instead. Install flannel on the master node.

3.1 Overview

Flannel is a cross-host container networking solution based on an overlay network: TCP packets are encapsulated inside another network packet for routing, forwarding, and communication. Flannel was developed by CoreOS specifically for multi-host Docker networking, so that containers created on different nodes in the cluster all receive cluster-wide unique virtual IP addresses.

Flannel is written in Go.

3.2 How Flannel Works

Flannel assigns each host a subnet, and containers get their IPs from that subnet. These IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.

Each subnet is carved out of a larger IP pool. Flannel runs an agent called flanneld on every host whose job is to allocate a subnet from that pool.

Flannel uses etcd to store the network configuration, the allocated subnets, host IPs, and other state.

Forwarding packets between hosts is handled by a backend; supported backends include UDP, VXLAN, host-gw, AWS VPC, GCE routes, and more.


# Download kube-flannel.yml
[root@k8s-m01 ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#Edit the config file
[root@k8s-m01 ~]# vim kube-flannel.yml 
#add the --iface line shown below
    containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33  # add this line to the container args; needed when the host has more than one NIC

 
# Install the flannel plugin
kubectl apply -f kube-flannel.yml

 
# Check pod status until every pod is Running
kubectl get pod --all-namespaces
# rerun a few times until everything is Running

Pod status:

[root@k8s-m01 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-hplhd          0/1     Running   0          4m57s   10.244.0.2   k8s-m01   <none>           <none>
kube-system   coredns-66bff467f8-mv4qs          0/1     Running   0          4m57s   10.244.0.3   k8s-m01   <none>           <none>
kube-system   etcd-k8s-m01                      1/1     Running   0          5m9s    10.0.0.61    k8s-m01   <none>           <none>
kube-system   kube-apiserver-k8s-m01            1/1     Running   0          5m9s    10.0.0.61    k8s-m01   <none>           <none>
kube-system   kube-controller-manager-k8s-m01   1/1     Running   0          5m9s    10.0.0.61    k8s-m01   <none>           <none>
kube-system   kube-flannel-ds-amd64-blrl5       1/1     Running   0          84s     10.0.0.61    k8s-m01   <none>           <none>
kube-system   kube-proxy-h5jtp                  1/1     Running   0          4m57s   10.0.0.61    k8s-m01   <none>           <none>
kube-system   kube-scheduler-k8s-m01            1/1     Running   0          5m9s    10.0.0.61    k8s-m01   <none>           <none>


[root@k8s-m01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
k8s-m01   Ready    master   5m31s   v1.18.3

[root@k8s-m01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5m36s

[root@k8s-m01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

3.3 Join Worker Nodes

Nodes are where workloads (containers, pods, and so on) run. To add new nodes to the cluster, do the following on each machine:

  • On the node, run the initialization steps from section 1 (Initial Preparation)
  • Download the required images (see 3.3.1)
  • Run kubeadm join

When the master initializes successfully, the join command is printed at the end of the log.

###########################################################
# use this to join worker nodes; it is printed at the end of the kubeadm init log above

kubeadm join 10.0.0.46:6443 --token 8x9g55.hptbtbj66oue1tcd \
    --discovery-token-ca-cert-hash sha256:1586594765a9f79dae51732b3ca4c902ec37482dd16cd4bb3a37d60527aceded 
############################################################

# Note: tokens are valid for 24 hours; once expired, regenerate the join command with:
[root@k8s-master ~]# kubeadm token create --print-join-command
W0523 23:23:23.037975   48662 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.0.46:6443 --token r8sxu7.ndl3j8h6pz5tnoc5     --discovery-token-ca-cert-hash sha256:1586594765a9f79dae51732b3ca4c902ec37482dd16cd4bb3a37d60527aceded 

**Note: if you do not know the token, list it on the master with:**
```bash
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
8x9g55.hptbtbj66oue1tcd   23h         2020-05-24T23:16:57+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```

3.3.1 Download the Required Images

Worker nodes also need a few images, namely: kube-proxy, pause, and flannel.

[root@k8s-m02 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.18.3             3439b7546f29        3 days ago          117MB
quay.io/coreos/flannel   v0.12.0-s390x       57eade024bfb        2 months ago        56.9MB
quay.io/coreos/flannel   v0.12.0-ppc64le     9225b871924d        2 months ago        70.3MB
quay.io/coreos/flannel   v0.12.0-arm64       7cf4a417daaa        2 months ago        53.6MB
quay.io/coreos/flannel   v0.12.0-arm         767c3d1f8cba        2 months ago        47.8MB
quay.io/coreos/flannel   v0.12.0-amd64       4e9f801d2217        2 months ago        52.8MB
k8s.gcr.io/pause         3.2                 80d28bedfe5d        3 months ago        683kB

3.3.2 Run the Join Command on the k8s-m02 Node

[root@k8s-m02 ~]# kubeadm join 10.0.0.46:6443 --token 6o3a44.6w2jvn8sf50d2y24     --discovery-token-ca-cert-hash sha256:<hash>   # replace <hash> with the value from the init log

# If you do not know the token, list it on the master:
kubeadm token list
# Note: tokens are valid for 24 hours; once expired, regenerate the join command with:
kubeadm token create --print-join-command
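If you have a token but have lost the CA hash, the hash can be recomputed on the master from the cluster CA certificate (this is the procedure documented for kubeadm):

```bash
# Compute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```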

3.4 Confirm Cluster Status

With the Kubernetes cluster installed, confirm its status:

# Check node status
[root@k8s-m01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
k8s-m01   Ready    master   20m   v1.18.3
k8s-m02   Ready    <none>   19s   v1.18.3

# If a node shows NotReady, inspect the pods and investigate:
[root@k8s-m01 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-gpz5f          1/1     Running   0          25m
coredns-66bff467f8-pfzc6          1/1     Running   0          25m
etcd-k8s-m01                      1/1     Running   0          26m
kube-apiserver-k8s-m01            1/1     Running   0          26m
kube-controller-manager-k8s-m01   1/1     Running   0          26m
kube-flannel-ds-amd64-r9tr8       1/1     Running   0          6m15s
kube-flannel-ds-amd64-xgln9       1/1     Running   0          19m
kube-proxy-6ljzp                  1/1     Running   0          6m15s
kube-proxy-nx2jf                  1/1     Running   0          25m
kube-scheduler-k8s-m01            1/1     Running   0          26m

3.5 Verify Kubernetes Functionality

Note: run all of the following on the master.

(1) Create a test manifest

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: daocloud.io/library/nginx:latest
        ports:
        - containerPort: 80
EOF

Run the test:

kubectl create -f nginx-ds.yml

(2) Check Pod IP connectivity from each node

[root@k8s-m01 ~]# kubectl get pods -o wide|grep nginx-ds
nginx-ds-47jt9   1/1     Running   0          11m   10.244.1.2   k8s-m02   <none>           <none>
[root@k8s-m01 ~]# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.504 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.381 ms
64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.659 ms

(3) Check Service IP and port reachability

[root@k8s-m01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        43m
nginx-ds     NodePort    10.96.10.148   <none>        80:30635/TCP   12m

# nginx-ds details:
#   Service Cluster IP: 10.96.10.148
#   Service port: 80
#   NodePort: 30635


(4) Test nginx

[root@k8s-m01 ~]# curl -s 10.96.10.148
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

(5) Check the Service's NodePort reachability

[root@k8s-m01 ~]# curl -Is 10.0.0.62:30635
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Sat, 23 May 2020 18:28:31 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
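As one more optional check, you can verify that in-cluster DNS resolves the service name. A sketch; busybox:1.28 is used here because nslookup is broken in newer busybox images:

```bash
# Run a throwaway pod and look up the nginx-ds service via CoreDNS
kubectl run busybox --rm -it --image=busybox:1.28 --restart=Never \
  -- nslookup nginx-ds
```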

3.6 Remove a Node

# run on k8s-m01
[root@k8s-m01 ~]# kubectl drain k8s-m02 --delete-local-data --force --ignore-daemonsets
node/k8s-m02 cordoned
WARNING: ignoring DaemonSet-managed Pods: default/nginx-ds-47jt9, kube-system/kube-flannel-ds-amd64-r9tr8, kube-system/kube-proxy-6ljzp
node/k8s-m02 drained
[root@k8s-m01 ~]# kubectl delete node k8s-m02
node "k8s-m02" deleted

#node節點上執行
[root@k8s-m02 ~]# kubeadm reset
[root@k8s-m02 ~]# systemctl stop kubelet
[root@k8s-m02 ~]# systemctl stop docker
[root@k8s-m02 ~]# rm -rf /var/lib/cni/ /var/lib/kubelet/* /etc/cni/
[root@k8s-m02 ~]# ifconfig cni0 down
[root@k8s-m02 ~]# ifconfig flannel.1 down
[root@k8s-m02 ~]# ifconfig docker0 down
[root@k8s-m02 ~]# ip link delete cni0
[root@k8s-m02 ~]# ip link delete flannel.1
[root@k8s-m02 ~]# systemctl start docker
[root@k8s-m02 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS 
# the containers are gone

3.7 Rejoin a Node

# Rejoin
# On the removed k8s-m02 server, first run kubeadm reset:
[root@k8s-m02 ~]# kubeadm reset
[root@k8s-m02 ~]# kubeadm join 10.0.0.61:6443 --token 7jhad9.s0is6dhlihqtdpv6     --discovery-token-ca-cert-hash sha256:d96bb51fff52c696a31239d57ba64a6c128c2caa89496f643b3e32d2963b24b4 

# List all nodes:  kubectl get node
# Delete a node:   kubectl delete node node3 
# Check the pods on a given node:  kubectl get pods -o wide | grep node3

4. Dashboard

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster itself along with its attendant resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (such as Deployments, Jobs, and DaemonSets); for example, you can scale a Deployment, perform a rolling update, restart a Pod, or deploy a new application with a wizard.

In other words, Dashboard offers most of kubectl's functionality:

- Comprehensive cluster management: namespaces, nodes, pods, replica sets, deployments, storage, RBAC objects, and more
- Fast, always-current updates: see the latest state without refreshing the page
- Cluster health at a glance: real-time charts help track down under-performing resources
- Easy CRUD and scaling, with inline API documentation that explains what each field does
- Simple OpenID integration: no special proxy required

Dashboard also displays the state of the cluster's resources and any error messages.
Official reference: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/

GitHub project: https://github.com/kubernetes/dashboard

4.1 Download the Dashboard Manifest

[root@k8s-m01 ~]# curl -o dashboard-recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
[root@k8s-m01 ~]# ls
dashboard-recommended.yaml 
[root@k8s-m01 ~]# kubectl apply -f  dashboard-recommended.yaml

4.2 Check the Running Status

# rerun a few times until everything is Running
[root@k8s-m01 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-dc6947fbf-blnzh   1/1     Running   0          29s
kubernetes-dashboard-5d4dc8b976-79zpw       1/1     Running   0          29s

4.3 View All kubernetes-dashboard Resources

[root@k8s-m01 ~]#  kubectl get all -n kubernetes-dashboard
NAME                                            READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-dc6947fbf-blnzh   1/1     Running   0          35s
pod/kubernetes-dashboard-5d4dc8b976-79zpw       1/1     Running   0          35s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.98.219.32     <none>        8000/TCP   35s
service/kubernetes-dashboard        ClusterIP   10.109.177.205   <none>        443/TCP    35s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           35s
deployment.apps/kubernetes-dashboard        1/1     1            1           35s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-dc6947fbf   1         1         1       35s
replicaset.apps/kubernetes-dashboard-5d4dc8b976       1         1         1       35s

4.4 Access the Dashboard

Since 1.7, the Dashboard only allows HTTPS access. When using kube proxy, it must listen on localhost or 127.0.0.1. NodePort access has no such restriction but is recommended only for development environments. For login attempts that do not meet these conditions, the browser does not redirect after a successful login and stays stuck on the login page.

Access methods:

The Dashboard's GitHub documentation: https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/1.7.x-and-above.md

**kubectl proxy:** creates a proxy server between your machine and the Kubernetes API server. By default it is reachable only locally (from the machine where it was started).

**kubectl port-forward:** maps a local port to a given application port, giving access to an application (Pod) in the cluster.

**NodePort:** only suitable for single-node setups; most environments have multiple nodes, so this method has little practical value and is not recommended.

**API Server:** if the Kubernetes API server is exposed and reachable from outside, the Dashboard can be accessed directly at: https://<master-ip>:<apiserver-port>/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Note: this access method works only if you choose to install the user certificate in the browser. In this example, the certificate that the kubeconfig file uses to contact the API server can be reused.

**Ingress:** the Dashboard can also be exposed through an Ingress resource.

Using the API Server method: https://10.0.0.61:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Here we accessed the service named https:kubernetes-dashboard in the kubernetes-dashboard namespace as the system:anonymous user. That user has no permission to access it, so the request was denied.

4.5 Generate a Browser Certificate

By default the Dashboard only supports token authentication (not client-certificate login), so if you use a kubeconfig file, the token must be written into that file.

1. Create the certificate:

First confirm kubectl's config file. By default it is /etc/kubernetes/admin.conf, which has already been copied to $HOME/.kube/config; if it has not, copy it there manually.

[root@k8s-m01 ~]# cat $HOME/.kube/config

If the cluster config is present, run the following to generate a p12-format browser certificate:

[root@k8s-m01 ~]# cd /etc/kubernetes/pki
[root@k8s-m01 pki]# grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d > dashboard.crt
[root@k8s-m01 pki]# grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d > dashboard.key

Generate the p12 certificate; enter an export password when prompted. Do not type a throwaway password, because you will need it when importing the certificate into the browser.

[root@k8s-m01 pki]# openssl pkcs12 -export -clcerts -inkey dashboard.key -in dashboard.crt -out dashboard.p12 -name "kubernetes-client"
Enter Export Password:123456
Verifying - Enter Export Password:123456

2. Import the certificate

Download the generated dashboard.p12 certificate to your computer, then import it into your browser.

4.6 Log in to the Dashboard

Create a login token and a kubeconfig file for the Dashboard.

Method 1: create a login token

[root@k8s-m01 pki]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-m01 pki]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-m01 pki]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
[root@k8s-m01 pki]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-m01 pki]# echo ${DASHBOARD_LOGIN_TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IlZDQWQ1WWI0UVZzS1FYUmVMZ0FrY0ctZlZ1N0FiSXU3MVBBb0QzM2NaTEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTc1NzYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmFjNmI5YTItYmMxMi00YzkxLTgyNDMtNzE3ZjlmMWNmZTlhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.q_WPyLn9wUVG9r5zleHMy5cN6XXzo-_eKNj5y-h0vwmOkUwr9u7KCQ27RPasmeu7-zx5RViwghchGnOqJnV87M1IOlZ7Ll_-Mu3SjXhvFjcAmcKpcJWNoPq4Dm3Q23SZIIOKwO8DAZLm32M9FCff6fTPe0z1NqwOVPSy92B4C16phWAHUcHEq4NzaM53C7y5KqcTSAzjA2N7rCPbKP6Y8793GwTsaLSiYzunrYKV87kiFlwnCImgB2_OxxHh3Xu0xpGmOmIXNSJPF7ZabZQ8BIMXJJ7NrsxuqQyr_qJemVpBBAg-4iph6DcaiUkoUI8L18RtJEuPPtHaLDtZdvdT8g

You can now log in to the Dashboard with the token printed above.
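Method 2 (a sketch): the section above also mentions a kubeconfig file. One way to build a standalone kubeconfig that embeds this token, so the Dashboard login can use a file instead of a pasted token (the file name dashboard.kubeconfig is arbitrary):

```bash
# Reuses the DASHBOARD_LOGIN_TOKEN variable set above
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.0.0.61:6443 \
  --kubeconfig=dashboard.kubeconfig
kubectl config set-credentials dashboard-admin \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=dashboard.kubeconfig
kubectl config set-context default --cluster=kubernetes \
  --user=dashboard-admin --kubeconfig=dashboard.kubeconfig
kubectl config use-context default --kubeconfig=dashboard.kubeconfig
```

In the Dashboard login screen, choose the "Kubeconfig" option and select this file.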
