k8s Introduction and Cluster Setup and Deployment

Kubernetes Introduction

Chinese documentation: http://docs.kubernetes.org.cn/

Kubernetes is an open-source platform for automated deployment, scaling and operations of container clusters. With Kubernetes you can respond to user demand quickly and effectively: deploy applications rapidly and predictably, scale them on the fly, roll in new application features seamlessly, and save resources by optimizing hardware usage. It provides a complete open-source solution for container orchestration and management.

  • While Docker was growing fast as a high-level container engine, container technology
    had already been in use inside Google for many years: the Borg system ran and managed
    tens of thousands of containerized applications.
  • The Kubernetes project originated from Borg; it distills the essence of Borg's design
    and absorbs the experience and lessons learned from the Borg system.
  • Kubernetes abstracts compute resources at a higher level and, by carefully composing
    containers, hands the final application service to the user.
  • Benefits of Kubernetes:
    • Resource management and error handling are hidden away; users only need to focus on application development.
    • Services are highly available and highly reliable.
    • Workloads can run on clusters made up of thousands of machines.

k8s can be deployed either in containers or from binaries; the containerized deployment is the more common one.

Kubernetes design architecture

[Kubernetes architecture diagram]
• A Kubernetes cluster consists of the node agent kubelet and the Master components (APIs, scheduler, etcd, ...), all built on top of a distributed storage system.

  • Kubernetes is composed of the following core components (component: function):
• etcd: stores the state of the entire cluster
• apiserver: the single entry point for resource operations; provides authentication, authorization, access control, API registration and discovery
• controller manager: maintains the cluster state, e.g. fault detection, auto scaling and rolling updates
• scheduler: handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policy
• kubelet: manages the container lifecycle, as well as Volumes (CVI) and networking (CNI)
• Container runtime: manages images and does the actual running of Pods and containers (CRI)
• kube-proxy: provides in-cluster service discovery and load balancing for Services

• Besides the core components, there are some recommended add-ons:

kube-dns: provides DNS for the entire cluster	# already integrated into recent k8s releases
Ingress Controller: provides an external entry point for services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters spanning availability zones
Fluentd-elasticsearch: provides cluster log collection, storage and querying
  • Kubernetes' design philosophy and features in fact form a layered architecture, similar to Linux

[Kubernetes layered architecture diagram]

Core layer: Kubernetes' most essential functionality; exposes APIs for building higher-level applications, and provides a plugin-style application execution environment internally
Application layer: deployments (stateless applications, stateful applications, batch jobs, cluster applications, etc.) and routing (service discovery, DNS resolution, etc.)
Management layer: system metrics (such as infrastructure, container and network metrics), automation (such as auto scaling and dynamic provisioning) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
Interface layer: the kubectl command-line tool, client SDKs and cluster federation
Ecosystem: the large ecosystem of container-cluster management and scheduling above the interface layer, which falls into two categories:
  • Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS,
    OTS applications, ChatOps, etc.
  • Inside Kubernetes: CRI, CNI, CVI, the image registry, Cloud Provider, and the configuration
    and management of the cluster itself

Kubernetes Deployment

Reference: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Here we use a harbor registry, because pulling from a local registry is faster than pulling from the internet:

Environment:
server1: 172.25.254.1  harbor registry
server2: 172.25.254.2  master node
server3: 172.25.254.3  worker node
server4: 172.25.254.4  worker node

On server2, 3 and 4:
disable selinux and the iptables firewall on every node (see the sketch below)
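The exact commands are not shown in the original; a typical way to do this on RHEL/CentOS 7 is:

setenforce 0		# disable selinux at runtime
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config		# persist across reboots
systemctl disable --now firewalld		# or the iptables service, whichever is in use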

  • Deploy the docker engine on all nodes, installing from the Alibaba Cloud mirror.
# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: update and install Docker-CE
yum -y install docker-ce		# pulls in the container-selinux dependency
[root@server1 yum.repos.d]# cat /etc/sysctl.d/bridge.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1		# kernel support for passing bridged traffic through iptables
[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server2:/etc/sysctl.d/
 
[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server3:/etc/sysctl.d/

[root@server1 yum.repos.d]# scp /etc/sysctl.d/bridge.conf server4:/etc/sysctl.d/
# make these two parameters take effect
[root@server2 ~]# sysctl --system
[root@server3 ~]# sysctl --system
[root@server4 ~]# sysctl --system

 systemctl enable --now docker		# start and enable the docker service on all three nodes
  • Make docker use the same control method (cgroup driver) as k8s:
[root@server2 ~]# docker info
 Cgroup Driver: cgroupfs		# docker defaults to cgroupfs control; we change it to the systemd method
 
[root@server2 packages]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

[root@server2 packages]# scp /etc/docker/daemon.json server3:/etc/docker/
root@server3's password: 
daemon.json                                                                                                                                                                      100%  201   238.1KB/s   00:00    
[root@server2 packages]# scp /etc/docker/daemon.json server4:/etc/docker/
root@server4's password: 
daemon.json          

[root@server2 packages]# systemctl restart docker
[root@server2 packages]# docker info
 Cgroup Driver: systemd		# now controlled the systemd way
  • Disable the swap partition:
# disabling swap improves performance, and kubelet refuses to start with swap on by default
[root@server3 ~]# swapoff -a			# do this on server2, 3 and 4
[root@server3 ~]# vim /etc/fstab 		# comment out the swap entry
[root@server3 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Apr 28 02:35:30 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=004d1dd6-221a-4763-a5eb-c75e18655041 /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
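A quick verification that swap is really gone (our addition, not in the original transcript):

[root@server3 ~]# swapon --show		# no output means no active swap
[root@server3 ~]# free -m		# the Swap row should show all zeros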

  • Install the deployment tool kubeadm:
    We download it from the Alibaba Cloud mirror:
[root@server2 yum.repos.d]# vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

[root@server2 yum.repos.d]# yum install -y kubelet kubeadm kubectl		# kubectl only needs to be installed on the master node

Do the same on the other two nodes.

[root@server2 yum.repos.d]# systemctl enable --now kubelet.service
# viewing the default configuration shows:
imageRepository: k8s.gcr.io		
# by default the component images are pulled from k8s.gcr.io, which is not reachable without a proxy, so we need to change the image repository:
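The defaults referenced above can be inspected with kubeadm itself (our own check, not part of the original transcript):

[root@server2 yum.repos.d]# kubeadm config print init-defaults | grep imageRepository
imageRepository: k8s.gcr.io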

[root@server2 yum.repos.d]# kubeadm config images list 		# list the required images
W0618 15:03:59.486677   14931 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.4
k8s.gcr.io/kube-controller-manager:v1.18.4
k8s.gcr.io/kube-scheduler:v1.18.4
k8s.gcr.io/kube-proxy:v1.18.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

# list them from the Alibaba Cloud repository instead
[root@server2 yum.repos.d]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
W0618 15:04:21.098999   14946 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.4
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7

[root@server2 yum.repos.d]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.18.3
# pull the images

[root@server2 yum.repos.d]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.3             3439b7546f29        4 weeks ago         117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.3             7e28efa976bd        4 weeks ago         173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.3             da26705ccb4b        4 weeks ago         162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.3             76216c34ed0c        4 weeks ago         95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        4 months ago        683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        4 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        7 months ago        288MB

Then we put these images into the harbor registry, to make it convenient for our other nodes to pull them.

[root@server1 yum.repos.d]# scp -r /etc/docker/certs.d/ server2:/etc/docker/
root@server2's password: 		# copy the harbor certificate to server2
ca.crt  

[root@server2 yum.repos.d]# vim /etc/hosts
[root@server2 yum.repos.d]# cat /etc/hosts
172.25.254.1	server1	reg.caoaoyuan.org			# name resolution for the harbor registry
[root@server2 yum.repos.d]# docker login reg.caoaoyuan.org
Username: admin
Password: 
Login Succeeded		# logged in

[root@server2 ~]# docker images |grep reg.ca | awk '{print $1":"$2}'
reg.caoaoyuan.org/library/kube-proxy:v1.18.3		# the images above, retagged into this form (the tagging step is sketched below)
reg.caoaoyuan.org/library/kube-apiserver:v1.18.3
reg.caoaoyuan.org/library/kube-controller-manager:v1.18.3
reg.caoaoyuan.org/library/kube-scheduler:v1.18.3
reg.caoaoyuan.org/library/pause:3.2
reg.caoaoyuan.org/library/coredns:1.6.7
reg.caoaoyuan.org/library/etcd:3.4.3-0
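The transcript only shows the result of the retagging; the tagging step itself, assuming the aliyun-prefixed images pulled earlier, could look like this:

# hypothetical retag loop: ${i##*/} strips the registry/namespace prefix, leaving e.g. kube-proxy:v1.18.3
[root@server2 ~]# for i in `docker images | grep registry.aliyuncs.com | awk '{print $1":"$2}'`;do docker tag $i reg.caoaoyuan.org/library/${i##*/} ;done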


# push to the harbor registry
[root@server2 ~]# for i in `docker images |grep reg.ca | awk '{print $1":"$2}'`;do docker push $i ;done		
# remove the aliyun-tagged images
[root@server2 ~]# for i in `docker images |grep regis | awk '{print $1":"$2}'`;do docker rmi $i ;done

The upload succeeded, and the other nodes can now pull from it. Remember to copy the certificate over first, along with the local name resolution:

[root@server1 harbor]# scp -r /etc/docker/certs.d/ server3:/etc/docker/
root@server3's password: 
ca.crt                                                                                                                                                                           100% 2114    39.7KB/s   00:00    
[root@server1 harbor]# scp -r /etc/docker/certs.d/ server4:/etc/docker/
root@server4's password: 
ca.crt  		# these two nodes did not have the certificate yet
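The hosts entry itself is not shown for these nodes; presumably it is the same line as on server2:

# assumed /etc/hosts entry on server3 and server4, mirroring server2:
echo "172.25.254.1  server1  reg.caoaoyuan.org" >> /etc/hosts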

Run the cluster initialization on the master node:


[root@server2 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.caoaoyuan.org/library/ --kubernetes-version=1.18.3
Your Kubernetes control-plane has initialized successfully!

kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg \
    --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
This generated a token (used to join the cluster) and a hash (used to verify the master side); the token is kept for 24 hours by default.
[kubeadm@server2 ~]$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
61xkmb.qd1alzh6winolaeg   23h         2020-06-19T17:31:47+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# after it expires, a new one can be generated with kubeadm token create
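A handy variant (not used in this session) prints a complete join command directly:

[kubeadm@server2 ~]$ kubeadm token create --print-join-command		# emits a full kubeadm join line with a fresh token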

The official recommendation is to operate the cluster as a regular user; we only need to:

[root@server2 ~]# useradd kubeadm
[root@server2 ~]# visudo 	# grant sudo privileges to the kubeadm user
[root@server2 ~]# su - kubeadm 
[kubeadm@server2 ~]$ mkdir -p $HOME/.kube
[kubeadm@server2 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config		# this file actually contains the certificates
# with the credentials copied over, we can operate the cluster
[kubeadm@server2 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS     ROLES    AGE   VERSION	
server2   NotReady   master   12m   v1.18.3	# only the master node so far, and still NotReady

Node scale-out: join server3 and server4 to server2:

sysctl -w net.ipv4.ip_forward=1		# this may need to be run first for the join to succeed
[root@server4 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
[root@server3 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597

[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
server2   NotReady   master   20m   v1.18.3
server3   NotReady   <none>   88s   v1.18.3
server4   NotReady   <none>   31s   v1.18.3
# both nodes have now joined.


  • Install the flannel network component:
[root@server2 demo]# docker images
quay.io/coreos/flannel                              v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
# load this network-component image on nodes 3 and 4 as well (one possible way is sketched below)
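How the image got onto the nodes is not shown; one plausible way, assuming it was first exported to a tar archive, is:

# hypothetical distribution of the flannel image via docker save/load:
[root@server2 demo]# docker save quay.io/coreos/flannel:v0.12.0-amd64 -o flannel.tar
[root@server2 demo]# scp flannel.tar server3:/tmp/ && ssh server3 docker load -i /tmp/flannel.tar
[root@server2 demo]# scp flannel.tar server4:/tmp/ && ssh server4 docker load -i /tmp/flannel.tar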

# switch to the kubeadm user and apply this manifest.
[kubeadm@server2 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[kubeadm@server2 ~]$ kubectl get pod -n kube-system		# system component pods are isolated by namespace
NAME                              READY   STATUS    RESTARTS   AGE
coredns-5fd54d7f56-22fwz          1/1     Running   0          123m
coredns-5fd54d7f56-l9z5k          1/1     Running   0          123m
etcd-server2                      1/1     Running   3          124m
kube-apiserver-server2            1/1     Running   2          124m
kube-controller-manager-server2   1/1     Running   3          124m
kube-flannel-ds-amd64-6t4tp       1/1     Running   0          9m31s
kube-flannel-ds-amd64-gk9r2       1/1     Running   0          9m31s		# the network component
kube-flannel-ds-amd64-mlcvm       1/1     Running   0          9m31s
kube-proxy-f7rnh                  1/1     Running   0          104m
kube-proxy-hww5t                  1/1     Running   1          104m
kube-proxy-wn4h8                  1/1     Running   3          123m
kube-scheduler-server2            1/1     Running   3          124m
# everything must be Running

[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
server2   Ready    master   125m   v1.18.3
server3   Ready    <none>   106m   v1.18.3
server4   Ready    <none>   105m   v1.18.3		# Ready
# the cluster is now ready to use
  • Inspect namespaces
[kubeadm@server2 ~]$ kubectl get pod --all-namespaces	# list pods in every namespace
[kubeadm@server2 ~]$ kubectl get pod -o wide -n kube-system	# -o wide shows the details
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-5fd54d7f56-22fwz          1/1     Running   0          3h34m   10.244.2.2     server4   <none>           <none>
coredns-5fd54d7f56-l9z5k          1/1     Running   0          3h34m   10.244.1.2     server3   <none>           <none>
etcd-server2                      1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-apiserver-server2            1/1     Running   2          3h34m   172.25.254.2   server2   <none>           <none>
kube-controller-manager-server2   1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-flannel-ds-amd64-6t4tp       1/1     Running   0          100m    172.25.254.3   server3   <none>           <none>
kube-flannel-ds-amd64-gk9r2       1/1     Running   0          100m    172.25.254.2   server2   <none>           <none>
kube-flannel-ds-amd64-mlcvm       1/1     Running   0          100m    172.25.254.4   server4   <none>           <none>
kube-proxy-f7rnh                  1/1     Running   0          3h14m   172.25.254.4   server4   <none>           <none>
kube-proxy-hww5t                  1/1     Running   1          3h15m   172.25.254.3   server3   <none>           <none>
kube-proxy-wn4h8                  1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
kube-scheduler-server2            1/1     Running   3          3h34m   172.25.254.2   server2   <none>           <none>
You can see where each component runs. The flannel component uses a DaemonSet controller, whose defining trait is running one pod per node.
kube-proxy also runs on every node (see the quick check below).
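A quick way to confirm this is to list the DaemonSets in the kube-system namespace (our own check, not part of the original transcript):

[kubeadm@server2 ~]$ kubectl get daemonset -n kube-system	# expect kube-proxy and the kube-flannel-ds-* sets, one pod per node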

[root@server4 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
reg.caoaoyuan.org/library/kube-proxy   v1.18.3             3439b7546f29        4 weeks ago         117MB
quay.io/coreos/flannel                 v0.12.0-amd64       4e9f801d2217        3 months ago        52.8MB
reg.caoaoyuan.org/library/pause        3.2                 80d28bedfe5d        4 months ago        683kB
reg.caoaoyuan.org/library/coredns      1.6.7               67da37a9a360        4 months ago        43.8MB
When server3 and server4 joined the cluster they also picked up the harbor registry information and pulled these images; kubernetes can then run, with every service running as a container.
  • Tab completion

[kubeadm@server2 ~]$  echo "source <(kubectl completion bash)" >> ~/.bashrc
[kubeadm@server2 ~]$ logout
[root@server2 demo]# su - kubeadm 
Last login: Thu Jun 18 19:26:19 CST 2020 on pts/0
[kubeadm@server2 ~]$ kubectl 
alpha          apply          certificate    convert   		# tab completion now works.
  • Remove a node
[kubeadm@server2 ~]$ kubectl drain server4 --delete-local-data --force --ignore-daemonsets
node/server4 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-mlcvm, kube-system/kube-proxy-f7rnh
evicting pod kube-system/coredns-5fd54d7f56-22fwz
pod/coredns-5fd54d7f56-22fwz evicted
node/server4 evicted
[kubeadm@server2 ~]$ kubectl get nodes
NAME      STATUS                     ROLES    AGE     VERSION
server2   Ready                      master   3h56m   v1.18.3
server3   Ready                      <none>   3h37m   v1.18.3
server4   Ready,SchedulingDisabled   <none>   3h36m   v1.18.3		# the node is cordoned and no longer scheduled
[kubeadm@server2 ~]$ kubectl delete node server4		# delete the node
node "server4" deleted
[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
server2   Ready    master   3h57m   v1.18.3
server3   Ready    <none>   3h38m   v1.18.3
[kubeadm@server2 ~]$

This only applies to nodes that already joined the cluster normally; for a node that has not joined properly, run directly on that node:

[root@server4 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks

This clears the state from the earlier join.
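Note that kubeadm reset itself warns that it does not clean up iptables rules or CNI configuration; for a truly clean slate, something like the following can be run afterwards (our addition, adapt as needed):

[root@server4 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X	# flush rules left behind
[root@server4 ~]# rm -rf /etc/cni/net.d		# remove the CNI configuration (assumes flannel was installed)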

To join it back in:
[kubeadm@server2 ~]$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
61xkmb.qd1alzh6winolaeg   19h         2020-06-19T17:31:47+08:00 	# not yet expired
[root@server4 ~]# kubeadm join 172.25.254.2:6443 --token 61xkmb.qd1alzh6winolaeg     --discovery-token-ca-cert-hash sha256:ef9f8d0f0866660e7a01c54ecfc65abbbb11f25147ec7da75453098a9302e597
Just join again; the prerequisite is that all of the node configuration above is in place.

[kubeadm@server2 ~]$ kubectl get node
NAME      STATUS   ROLES    AGE     VERSION
server2   Ready    master   4h3m    v1.18.3
server3   Ready    <none>   3h43m   v1.18.3
server4   Ready    <none>   2m1s    v1.18.3
  • Delete the flannel network component
[root@server4 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
56862b391eda        4e9f801d2217                          "/opt/bin/flanneld -…"   33 minutes ago      Up 33 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-sklll_kube-system_84e2eb08-2b85-4cc2-a167-5ea78629af3c_1
[root@server4 ~]# docker rm -f 56862b391eda
56862b391eda
[root@server4 ~]# docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS                  PORTS               NAMES
f7db2b985cc5        4e9f801d2217                          "/opt/bin/flanneld -…" 
The cluster monitors state in real time: whenever a service's container goes down, it keeps restarting it automatically.
  • Create a pod
[kubeadm@server2 ~]$ kubectl run demo --image=nginx		# a standalone pod; the other approach is a Deployment controller (sketched further below)
pod/demo created
[kubeadm@server2 ~]$ kubectl  get pod
NAME   READY   STATUS              RESTARTS   AGE
demo   0/1     ContainerCreating   0          5s
[kubeadm@server2 ~]$ kubectl logs demo 
Error from server (BadRequest): container "demo" in pod "demo" is waiting to start: ContainerCreating
[kubeadm@server2 ~]$ kubectl describe pod demo 		# show detailed pod information
Name:         demo
Namespace:    default
Priority:     0
Node:         server3/172.25.254.3
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
  Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/demo to server3
  Normal  Pulling    47s        kubelet, server3   Pulling image "nginx"
  Normal  Pulled     19s        kubelet, server3   Successfully pulled image "nginx"
  Normal  Created    19s        kubelet, server3   Created container demo
  Normal  Started    18s        kubelet, server3   Started container demo
[kubeadm@server2 ~]$ kubectl logs demo	# view the pod logs
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
[kubeadm@server2 ~]$ kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP           NODE      NOMINATED NODE   READINESS GATES
demo   1/1     Running   0          8m58s   10.244.1.3   server3   <none>           <none>
# it runs on server3; deploying the same container next time will land on server3 again, since the image has already been pulled there
[kubeadm@server2 ~]$ curl 10.244.1.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[kubeadm@server2 ~]$ kubectl delete pod demo 
pod "demo" deleted

Then we configure the registry on the two worker nodes, server3 and server4:

[root@server3 ~]# vim /etc/docker/daemon.json
[root@server4 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://reg.caoaoyuan.org"],		# 把這一行加進去。
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
[root@server3 ~]# systemctl restart docker
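To confirm the mirror is in effect, docker info can be checked (our addition; the output layout varies between docker versions):

[root@server3 ~]# docker info | grep -A1 'Registry Mirrors'	# should list https://reg.caoaoyuan.org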

From then on, image pulls go through our harbor registry. The basic cluster configuration is now complete.

  • kubectl command reference:
    • https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands