Kubernetes Notes (6): Upgrading a Kubernetes Cluster with kubeadm

Upgrading Kubernetes from 1.13.0 to 1.15.1

Note: these are unpolished notes, pasted here so they don't get lost.

Key points:

  1. A kubeadm upgrade must not skip a minor version: to go from 1.13.x to 1.15.x, first upgrade to an intermediate 1.14.x release.
  2. Upgrade one master first, then the other masters, and finally the worker nodes one by one (drain each node first so its pods are moved elsewhere).
  3. The Calico network plugin is upgraded separately.
  • Current version 1.13.0, images:
    k8s.gcr.io/kube-apiserver:v1.13.0
    k8s.gcr.io/kube-controller-manager:v1.13.0
    k8s.gcr.io/kube-scheduler:v1.13.0
    k8s.gcr.io/kube-proxy:v1.13.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.6

  • Intermediate version chosen: 1.14.4, images:
    k8s.gcr.io/kube-apiserver:v1.14.4
    k8s.gcr.io/kube-controller-manager:v1.14.4
    k8s.gcr.io/kube-scheduler:v1.14.4
    k8s.gcr.io/kube-proxy:v1.14.4
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1

  • Target version 1.15.1, images:
    k8s.gcr.io/kube-apiserver:v1.15.1
    k8s.gcr.io/kube-controller-manager:v1.15.1
    k8s.gcr.io/kube-scheduler:v1.15.1
    k8s.gcr.io/kube-proxy:v1.15.1
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
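Point 1 above (no skipping minor versions) fixes the upgrade path. A tiny sketch of how to enumerate it (`minor_path` is a hypothetical helper, not part of kubeadm; pick a concrete patch release for each step yourself):

```shell
#!/bin/sh
# Print the chain of minor releases an upgrade must walk through,
# one step at a time.
minor_path() {
    from_minor=$(echo "$1" | cut -d. -f2)
    to_minor=$(echo "$2" | cut -d. -f2)
    m=$from_minor
    while [ "$m" -lt "$to_minor" ]; do
        m=$((m + 1))
        echo "1.$m.x"
    done
}

minor_path 1.13.0 1.15.1    # prints 1.14.x, then 1.15.x
```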

#Step 1: pull all required images

#Version 1.14.4
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.4
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.4
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.4
docker pull mirrorgooglecontainers/kube-proxy:v1.14.4
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.14.4 k8s.gcr.io/kube-proxy:v1.14.4
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.4 k8s.gcr.io/kube-scheduler:v1.14.4
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.4 k8s.gcr.io/kube-apiserver:v1.14.4
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.4 k8s.gcr.io/kube-controller-manager:v1.14.4
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10  k8s.gcr.io/etcd:3.3.10
docker tag docker.io/coredns/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1

docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.14.4
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.4
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.4
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.4
docker rmi docker.io/mirrorgooglecontainers/etcd:3.3.10
docker rmi docker.io/mirrorgooglecontainers/pause:3.1
docker rmi docker.io/coredns/coredns:1.3.1
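The pull/tag/rmi sequence above can be generated with a short loop instead of writing every command out by hand. A sketch: the script only prints the commands so they can be reviewed first and then piped to `sh` to execute (the image list is taken from the version table above):

```shell
#!/bin/sh
# Emit the docker pull/tag/rmi commands needed to mirror one Kubernetes
# release from docker.io/mirrorgooglecontainers to local k8s.gcr.io tags.
emit_mirror_cmds() {
    ver="$1"
    for img in kube-apiserver:$ver kube-controller-manager:$ver \
               kube-scheduler:$ver kube-proxy:$ver pause:3.1 etcd:3.3.10; do
        echo "docker pull mirrorgooglecontainers/$img"
        echo "docker tag docker.io/mirrorgooglecontainers/$img k8s.gcr.io/$img"
        echo "docker rmi docker.io/mirrorgooglecontainers/$img"
    done
    # CoreDNS is published under its own Docker Hub namespace.
    echo "docker pull coredns/coredns:1.3.1"
    echo "docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
    echo "docker rmi docker.io/coredns/coredns:1.3.1"
}

emit_mirror_cmds v1.14.4
```

Review the output, then run `sh mirror-images.sh | sh` (hypothetical filename) on each node.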

Before the upgrade:

kubectl get nodes
NAME                 STATUS   ROLES    AGE    VERSION
ejucsmaster-shqs-1   Ready    master   227d   v1.13.0
ejucsmaster-shqs-2   Ready    master   227d   v1.13.0
ejucsmaster-shqs-3   Ready    master   227d   v1.13.0
ejucsnode-shqs-1     Ready    node     227d   v1.13.0
ejucsnode-shqs-2     Ready    node     227d   v1.13.0
ejucsnode-shqs-3     Ready    node     227d   v1.13.0

On the first master node, upgrade kubeadm:

yum install kubeadm-1.14.4 -y
kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.14.4
I0729 03:01:12.095024  152203 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0729 03:01:12.095073  152203 version.go:97] falling back to the local client version: v1.14.4
[upgrade/versions] Latest stable version: v1.14.4
I0729 03:01:22.185670  152203 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.13.txt": Get https://dl.k8s.io/release/stable-1.13.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0729 03:01:22.185712  152203 version.go:97] falling back to the local client version: v1.14.4
[upgrade/versions] Latest version in the v1.13 series: v1.14.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.13.0   v1.14.4

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.0   v1.14.4
Controller Manager   v1.13.0   v1.14.4
Scheduler            v1.13.0   v1.14.4
Kube Proxy           v1.13.0   v1.14.4
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.14.4

_____________________________________________________________________



kubeadm upgrade apply v1.14.4
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.4"
[upgrade/versions] Cluster version: v1.13.0
[upgrade/versions] kubeadm version: v1.14.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.4"...
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: e5cb57672d15eaf8de6a0a955e2f2022
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: e0d9ac0e209e5ce185290b6e18e622a6
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: 1ce604a16f88f1f8fcc11221d3c36c2f
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-ejucsmaster-shqs-1 hash: 1b013643552685193fc57466fea895e1
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-01-56/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-ejucsmaster-shqs-1 hash: 1b013643552685193fc57466fea895e1
Static pod: etcd-ejucsmaster-shqs-1 hash: 61acb15cc7b9af69753491e5546dc750
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests564340744"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-01-56/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: e5cb57672d15eaf8de6a0a955e2f2022
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: 5f95cf91b87082f2ddde7f26105a285f
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-01-56/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: e0d9ac0e209e5ce185290b6e18e622a6
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: f622c6dc2de7e2adb6526273f062f981
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-01-56/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: 1ce604a16f88f1f8fcc11221d3c36c2f
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: 14f017c4505fff9ede17e11ba591dcfc
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ejucsmaster-shqs-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local csapi.ejuops.com] and IPs [10.96.0.1 10.99.12.201]
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.




yum install kubectl-1.14.4 kubelet-1.14.4 -y
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

#On the other master nodes

yum install kubectl-1.14.4 kubelet-1.14.4 -y
yum install kubeadm-1.14.4 -y 



kubeadm upgrade node experimental-control-plane
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ejucsmaster-shqs-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local csapi.ejuops.com] and IPs [10.96.0.1 10.99.12.202]
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.4"...
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: fe437e2cb6b9d1a5d0cdea16e06da2d6
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: 7d4760bca175ec80f636fd9bba86a533
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: 1ce604a16f88f1f8fcc11221d3c36c2f
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-ejucsmaster-shqs-2 hash: ee1a89c2fb71d98bb199887b236286be
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-07-12/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-ejucsmaster-shqs-2 hash: ee1a89c2fb71d98bb199887b236286be
Static pod: etcd-ejucsmaster-shqs-2 hash: 02f0ef52e3016eb2f10ba19a7f54bab0
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests537746009"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-07-12/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: fe437e2cb6b9d1a5d0cdea16e06da2d6
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: 0f36a6414060a66d484d9cb77cc1f95d
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-07-12/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: 7d4760bca175ec80f636fd9bba86a533
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: f622c6dc2de7e2adb6526273f062f981
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-07-12/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: 1ce604a16f88f1f8fcc11221d3c36c2f
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: 14f017c4505fff9ede17e11ba591dcfc
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!



yum install kubectl-1.14.4 kubelet-1.14.4 -y
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

#Worker nodes

yum versionlock kubelet-1.14.4

yum install kubectl-1.14.4 kubelet-1.14.4 -y
yum install kubeadm-1.14.4
=========================================================================================================================================================================================================
 Package                                              Arch                                         Version                                        Repository                                        Size
=========================================================================================================================================================================================================
Updating:
 kubeadm                                              x86_64                                       1.14.4-0                                       kubernetes                                       8.7 M
Updating for dependencies:
 cri-tools                                            x86_64                                       1.13.0-0                                       kubernetes                                       5.1 M
 kubelet                                              x86_64                                       1.14.4-0                                       kubernetes                                        23 M
 kubernetes-cni                                       x86_64                                       0.7.5-0                                        kubernetes                                        10 M



[Run on a master node]
kubectl drain $NODE --ignore-daemonsets

kubectl drain ejucsnode-shqs-1 --ignore-daemonsets --delete-local-data
kubeadm upgrade node config --kubelet-version v1.14.4
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
kubectl uncordon ejucsnode-shqs-1
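The per-worker procedure above (drain from a master, update the node, uncordon from a master) can be rolled across all workers with a small loop. A sketch only: it assumes it runs on a master with kubectl access and passwordless SSH to each worker, and `upgrade_workers` is a hypothetical helper, not a kubeadm command:

```shell
#!/bin/sh
# Drain, upgrade, and uncordon each worker node in turn.
upgrade_workers() {
    version="$1"; shift
    for node in "$@"; do
        kubectl drain "$node" --ignore-daemonsets --delete-local-data
        ssh "$node" "kubeadm upgrade node config --kubelet-version v$version && \
            yum install -y kubelet-$version kubectl-$version && \
            systemctl daemon-reload && systemctl restart kubelet"
        kubectl uncordon "$node"
    done
}

# Example invocation (uncomment on a master node):
# upgrade_workers 1.14.4 ejucsnode-shqs-1 ejucsnode-shqs-2 ejucsnode-shqs-3
```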


kubectl get nodes
NAME                 STATUS   ROLES    AGE    VERSION
ejucsmaster-shqs-1   Ready    master   227d   v1.14.4
ejucsmaster-shqs-2   Ready    master   227d   v1.14.4
ejucsmaster-shqs-3   Ready    master   227d   v1.14.4
ejucsnode-shqs-1     Ready    node     227d   v1.14.4
ejucsnode-shqs-2     Ready    node     227d   v1.14.4
ejucsnode-shqs-3     Ready    node     227d   v1.14.4

Upgrading to 1.15.1

  • Target version 1.15.1, images:
    k8s.gcr.io/kube-apiserver:v1.15.1
    k8s.gcr.io/kube-controller-manager:v1.15.1
    k8s.gcr.io/kube-scheduler:v1.15.1
    k8s.gcr.io/kube-proxy:v1.15.1
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1

#Pull all required images

#Version: v1.15.1
docker pull mirrorgooglecontainers/kube-apiserver:v1.15.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.1
docker pull mirrorgooglecontainers/kube-proxy:v1.15.1

docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1

docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.15.1
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.15.1
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.15.1
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.15.1
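Before running `kubeadm upgrade apply`, it can be worth checking that every image the target version needs is already tagged locally. A sketch (`required_images` is a hypothetical helper; the list mirrors the target-version table above):

```shell
#!/bin/sh
# List the images needed for v1.15.1 so their presence can be verified
# before the upgrade touches the control plane.
required_images() {
cat <<'EOF'
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
EOF
}

# On each node, report anything missing (run where docker is available):
# required_images | while read -r img; do
#     docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
# done
```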

yum install kubeadm-1.15.1 -y
kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.4
[upgrade/versions] kubeadm version: v1.15.1
W0729 03:44:41.516770   51142 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0729 03:44:41.516820   51142 version.go:99] falling back to the local client version: v1.15.1
[upgrade/versions] Latest stable version: v1.15.1
W0729 03:44:51.605778   51142 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.14.txt": Get https://dl.k8s.io/release/stable-1.14.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0729 03:44:51.605824   51142 version.go:99] falling back to the local client version: v1.15.1
[upgrade/versions] Latest version in the v1.14 series: v1.15.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.14.4   v1.15.1

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.4   v1.15.1
Controller Manager   v1.14.4   v1.15.1
Scheduler            v1.14.4   v1.15.1
Kube Proxy           v1.14.4   v1.15.1
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.15.1

_____________________________________________________________________

kubeadm upgrade apply v1.15.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.1"
[upgrade/versions] Cluster version: v1.14.4
[upgrade/versions] kubeadm version: v1.15.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.1"...
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: 47b5cee02ba98d7f9831e7ae36c899a8
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: 9145d9bd1e89a4760a642dde585d87d9
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: f68190a0f739ccb1673b9255e9679ab8
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895017203"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-45-21/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: 47b5cee02ba98d7f9831e7ae36c899a8
Static pod: kube-apiserver-ejucsmaster-shqs-1 hash: b64f3195805f4e11a36cb99f794ae5b7
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-45-21/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: 9145d9bd1e89a4760a642dde585d87d9
Static pod: kube-controller-manager-ejucsmaster-shqs-1 hash: 3f5397f0c363a37aaa4587f1d542d2f4
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-45-21/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: f68190a0f739ccb1673b9255e9679ab8
Static pod: kube-scheduler-ejucsmaster-shqs-1 hash: ecae9d12d3610192347be3d1aa5aa552
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.


yum install kubectl-1.15.1 kubelet-1.15.1 -y
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

#On the other master nodes

yum install kubeadm-1.15.1 -y 

kubeadm upgrade node experimental-control-plane
Command "experimental-control-plane" is deprecated, this command is deprecated. Use "kubeadm upgrade node" instead
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.15.1"...
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: 116be44b0210637de8b11b47e02d53c2
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: c8f6a3bc850c747a2a4cf83b2071f163
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: d892f5aa01870a63abd70166d7968a45
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977827044"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-47-30/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: 116be44b0210637de8b11b47e02d53c2
Static pod: kube-apiserver-ejucsmaster-shqs-2 hash: 8b346dd0372db5962109572204ca2567
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-47-30/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: c8f6a3bc850c747a2a4cf83b2071f163
Static pod: kube-controller-manager-ejucsmaster-shqs-2 hash: 2a50df3972edeca064f54e3479e744be
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-07-29-03-47-30/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: d892f5aa01870a63abd70166d7968a45
Static pod: kube-scheduler-ejucsmaster-shqs-2 hash: 18859150495c74ad1b9f283da804a3db
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!


yum install kubectl-1.15.1 kubelet-1.15.1 -y

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

#Worker nodes

yum versionlock kubelet-1.15.1


yum install kubeadm-1.15.1 -y



[Run on a master node]
kubectl drain $NODE --ignore-daemonsets

kubectl drain ejucsnode-shqs-1 --ignore-daemonsets --delete-local-data
kubeadm upgrade node config --kubelet-version v1.15.1
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

yum install kubectl-1.15.1 kubelet-1.15.1 -y
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
kubectl uncordon ejucsnode-shqs-1


kubectl get nodes
NAME                 STATUS   ROLES    AGE    VERSION
ejucsmaster-shqs-1   Ready    master   227d   v1.15.1
ejucsmaster-shqs-2   Ready    master   227d   v1.15.1
ejucsmaster-shqs-3   Ready    master   227d   v1.15.1
ejucsnode-shqs-1     Ready    node     227d   v1.15.1
ejucsnode-shqs-2     Ready    node     227d   v1.15.1
ejucsnode-shqs-3     Ready    node     227d   v1.15.1
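Once every node has been rolled, a quick sanity check is to assert that all nodes report the target kubelet version (`check_node_versions` is a hypothetical helper):

```shell
#!/bin/sh
# Fail if any node is not yet reporting the target kubelet version.
check_node_versions() {
    target="$1"; shift
    for v in "$@"; do
        [ "$v" = "$target" ] || { echo "version mismatch: $v"; return 1; }
    done
    echo "all nodes at $target"
}

# On a master:
# check_node_versions v1.15.1 \
#     $(kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
```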

Upgrade the network component (Calico)

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml