Kubernetes (K8S) Deployment

 

1. Installation Environment Preparation

1) Installation environment: CentOS 7.3, one master node, and two node (worker) nodes

[root@master yum.repos.d]# cat /etc/redhat-release

CentOS Linux release 7.3.1611 (Core)

2) Set up the IP-to-hostname mapping in /etc/hosts:

[root@master ~]# cat /etc/hosts

10.100.240.221 master

10.100.240.222 node01

10.100.240.223 node02
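If the hostnames have not been set yet, hostnamectl can set them, and the same hosts file can be pushed to the other machines. A minimal sketch, assuming root SSH access to the nodes:

hostnamectl set-hostname master   # run the equivalent command on node01 and node02
scp /etc/hosts root@10.100.240.222:/etc/hosts
scp /etc/hosts root@10.100.240.223:/etc/hosts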

 

 

2. Installation

Unless otherwise stated, the following installation steps are all performed on the master node.

 

2.1 Configure the CentOS 7, Docker, and Kubernetes repositories

a. Configure the CentOS 7 repo

cd /etc/yum.repos.d

wget http://mirrors.aliyun.com/repo/Centos-7.repo

b. Configure the Docker repository

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

c. Configure the Kubernetes repository

Create a new file with vi kubernetes.repo and add the following content:

[kubernetes]

name=Kubernetes Repo

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/   

gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

enabled=1
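As an alternative to editing the file interactively, the same repo file can be created in one step with a heredoc, for example:

cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF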

 

 

2.2 Install docker, kubelet, kubeadm, and kubectl

yum install -y docker-ce-18.06* kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1
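Before continuing, it is worth confirming that the pinned versions were actually installed, for example:

rpm -q docker-ce kubelet kubeadm kubectl   # confirm the installed package versions
kubeadm version                            # should report v1.11.1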

 

Note 1: The -1.11.1 suffix appended to the three Kubernetes packages pins the installed version. To install the latest release, drop the suffix; however, Kubernetes moves quickly, the newest release may be unstable, and when something goes wrong there is far less material online to help troubleshoot it, so installing the latest version is not recommended.

 

Note 2: If the installation fails because the GPG key cannot be imported, download the keys manually with wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg and wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

 

Then import the two key files manually: rpm --import yum-key.gpg and rpm --import rpm-package-key.gpg

2.3 Stop the firewall and SELinux (run on every node)

[root@master ~]# systemctl stop firewalld

[root@master ~]# systemctl disable firewalld

[root@master ~]# setenforce 0

[root@master ~]# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
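kubeadm also warns if swap is enabled. This guide ignores that check later with --fail-swap-on=false and --ignore-preflight-errors=Swap, but an alternative (not part of the original steps) is to disable swap on every node, roughly:

swapoff -a                            # turn swap off immediately
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot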

2.4 Docker-related configuration

So that bridged container traffic is passed through iptables (bridge-nf-call-iptables), the following two files must both contain 1:

 

/proc/sys/net/bridge/bridge-nf-call-ip6tables

 

/proc/sys/net/bridge/bridge-nf-call-iptables

 

If the current value of both files is 0, change them as follows.

On CentOS, edit /etc/sysctl.conf and add:

 vim /etc/sysctl.conf 

 net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-arptables = 1

Then reboot for the settings to take effect.
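To apply these settings without rebooting, the bridge netfilter module can be loaded and the sysctl file re-read, for example:

modprobe br_netfilter        # make sure the bridge netfilter module is loaded
sysctl -p /etc/sysctl.conf   # reload the settings added above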

 

Start docker and enable it at boot:

[root@master yum.repos.d]# systemctl start docker

[root@master yum.repos.d]# systemctl enable docker

 

2.5 kubelet-related setup

Configure kubelet

1) rpm -ql kubelet shows which files the kubelet package installed:

/etc/kubernetes/manifests

/etc/sysconfig/kubelet

/etc/systemd/system/kubelet.service

/usr/bin/kubelet

2) Edit the configuration file: vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

This parameter is empty by default and needs to be changed as shown above.

3) Enable kubelet at boot:

systemctl enable kubelet

 

2.6 Initialize the cluster with kubeadm

Initialize the cluster on the master node.

1) kubeadm init --help shows the help for the init command

2) Edit /etc/sysconfig/kubelet to set the configuration option that ignores swap (as in 2.5)

3) KUBELET_EXTRA_ARGS="--fail-swap-on=false"

4) Initialize the cluster:

kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.100.240.221 --ignore-preflight-errors=Swap
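On kubeadm 1.11 and later (assuming the subcommand is available in your build), the exact images that init expects can be listed in advance, which makes the manual pulls in the next step easier to verify:

kubeadm config images list --kubernetes-version=v1.11.1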

5) If pulling the images fails, handle it as follows:

images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1

etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9

k8s-dns-dnsmasq-nanny-amd64:1.14.9 )

for imageName in ${images[@]} ; do

  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName

  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName

  #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName

done

docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1

Or:

images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1 etcd-amd64:3.2.18 pause-amd64:3.1)

for imageName in ${images[@]} ; do

  docker pull mirrorgooglecontainers/$imageName  

  docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName  

  docker rmi mirrorgooglecontainers/$imageName

done

 

This loop pulls each image listed in images from the mirrorgooglecontainers registry, re-tags it as the corresponding k8s.gcr.io image, and then removes the original tag.

Because the k8s.gcr.io registry cannot be reached from mainland China, the pulls performed by kubeadm init generally fail; the workaround is to pull the images locally first with the loop above and then change their tags with docker tag. Afterwards, docker images should show the re-tagged images:


[root@master yum.repos.d]# docker images

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        13 months ago       97.8MB

k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        13 months ago       187MB

k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        13 months ago       56.8MB

k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        13 months ago       155MB

k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        16 months ago       219MB

k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        20 months ago       742kB

 

The coredns and pause images are also needed under their k8s.gcr.io names, so pull coredns and re-tag pause as well:

[root@master yum.repos.d]# docker pull coredns/coredns:1.1.3

docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

[root@master yum.repos.d]# docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

[root@master yum.repos.d]# docker rmi k8s.gcr.io/pause-amd64:3.1

 

[root@master yum.repos.d]# docker images

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        13 months ago       97.8MB

k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        13 months ago       187MB

k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        13 months ago       56.8MB

k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        13 months ago       155MB

k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        15 months ago       45.6MB

k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        16 months ago       219MB

k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        20 months ago       742kB

After initialization completes, a success message is printed:

[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes master has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node

as root:

 

  kubeadm join 10.100.240.221:6443 --token jgey3y.agbnzkhtxg5vvrer --discovery-token-ca-cert-hash sha256:e012175071edc2dd7e63026bacfc86abf5d08b0cd093a901b256ed7de8ad992a

Run the commands from the output:

 mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config
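A quick sanity check that kubectl can now reach the API server:

kubectl cluster-info   # should print the API server address and the DNS endpoint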

 

 

6) Check component health with kubectl get cs (cs is short for componentstatus):

[root@master ~]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

controller-manager   Healthy   ok                   

scheduler            Healthy   ok                   

etcd-0               Healthy   {"health": "true"}   

[root@master ~]#

 

7) Use kubectl get nodes to view the cluster nodes; the master node shows NotReady because the flannel network component has not been installed yet:

[root@master ~]# kubectl get node

NAME        STATUS     ROLES    AGE   VERSION

master   NotReady   master   3m    v1.11.1

8) Search GitHub for the coreos/flannel repository; its README shows an install command like the following (either the pinned v0.10.0 manifest or the one from the master branch):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Running this command automatically pulls the kube-proxy and kube-flannel images; check the node status again after a short wait and it will be Ready:

[root@master ~]# kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION

master   Ready    master   8m    v1.11.1

9) Also check the component pods with kubectl get pods -n kube-system; the -n flag specifies the namespace, and a STATUS of Running means the component is healthy:

[root@master ~]# kubectl get pods -n kube-system

NAME                                READY   STATUS    RESTARTS   AGE

coredns-78fcdf6894-nzttl            1/1     Running   0          9m

coredns-78fcdf6894-ww4jk            1/1     Running   0          9m

etcd-master                      1/1     Running   0          1m

kube-apiserver-master            1/1     Running   0          1m

kube-controller-manager-master   1/1     Running   0          1m

kube-flannel-ds-amd64-hm5bf         1/1     Running   0          2m

kube-proxy-vxcvr                    1/1     Running   0          9m

kube-scheduler-master            1/1     Running   0          1m

At this point, the master node deployment is complete.

 

2.7 Install docker, kubelet, kubeadm, and kubectl on each node

a. The next step is to install docker, kubelet, kubeadm, and kubectl on each node, then run the kubeadm join command to add the node to the cluster.

 

b. Repeat installation steps 2.1 through 2.5 on node01 and node02 (the relevant configuration files can simply be copied to both nodes with scp).

 

c. Run the command to join the cluster; the --ignore-preflight-errors=Swap parameter must be added:

 kubeadm join 10.100.240.221:6443 --token jgey3y.agbnzkhtxg5vvrer --discovery-token-ca-cert-hash sha256:e012175071edc2dd7e63026bacfc86abf5d08b0cd093a901b256ed7de8ad992a --ignore-preflight-errors=Swap
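Note that the bootstrap token printed by kubeadm init expires after 24 hours by default; if a node joins later, a fresh join command can be generated on the master:

kubeadm token create --print-join-command   # run on the master, then use the printed command on the node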

Here too, because the image registry is blocked, joining the cluster fails at first: kubectl get pods -n kube-system -o wide shows two pods stuck in Init and ContainerCreating.

Inspect the error with kubectl describe pod kube-flannel-ds-amd64-ddtnx -n kube-system; it again points to image pull failures. Therefore, copy the flannel, kube-proxy, and pause images from the master to node01 and node02 with docker save and docker load, so kubeadm uses these local images and the failing pulls are avoided:

[root@master ~]# docker save quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/kube-proxy-amd64:v1.11.1 k8s.gcr.io/pause:3.1 -o image.tar

[root@k8snode01 ~]# docker load -i image.tar  
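Between the save and load steps above, the archive has to be transferred from the master to each node, for example:

scp image.tar root@10.100.240.222:/root/
scp image.tar root@10.100.240.223:/root/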

 

d. After both nodes have joined the cluster, use kubectl get node to check whether all cluster nodes are ready:

[root@master soft]#  kubectl get node

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   36m   v1.11.1

node01   Ready    <none>   1m    v1.11.1

node02   Ready    <none>   2m    v1.11.1

Check pod status with kubectl get pods -n kube-system -o wide:

[root@master docker]# kubectl get pods -n kube-system -o wide

NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE

coredns-78fcdf6894-cj4fx            1/1     Running   3          21m   10.244.0.11      master

coredns-78fcdf6894-htf6m            1/1     Running   2          20m   10.244.0.10      master

etcd-master                      1/1     Running   1          34m   10.100.240.221   master

kube-apiserver-master            1/1     Running   1          35m   10.100.240.221   master

kube-controller-manager-master   1/1     Running   1          34m   10.100.240.221   master

kube-flannel-ds-amd64-dpzt7         1/1     Running   0          13m   10.100.240.222   k8snode01

kube-flannel-ds-amd64-f2wl2         1/1     Running   1          34m   10.100.240.221   master

kube-flannel-ds-amd64-vf4k5         1/1     Running   1          3m    10.100.240.223   k8snode02

kube-proxy-g9kwx                    1/1     Running   0          13m   10.100.240.222   k8snode01

kube-proxy-gpmz6                    1/1     Running   1          35m   10.100.240.221   master

kube-proxy-khb4g                    1/1     Running   1          3m    10.100.240.223   k8snode02

kube-scheduler-master            1/1     Running   1          35m   10.100.240.221   master

When all nodes are Ready and the pods are Running, the cluster is working properly.

 

 

2.8 Testing

[root@master ~]# kubectl run nginx --image=nginx --dry-run

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx created (dry run)

[root@master ~]# kubectl run nginx --image=nginx

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx created

[root@master ~]# kubectl get pod -o wide

NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE

nginx-64f497f8fd-jvhlb   1/1     Running   0          9m    10.244.2.2   node02

[root@master ~]# ping 10.244.2.2

PING 10.244.2.2 (10.244.2.2) 56(84) bytes of data.

64 bytes from 10.244.2.2: icmp_seq=1 ttl=63 time=0.506 ms

The pod is Running and its IP is reachable, so the deployment works correctly.
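As a further optional check, the deployment can be exposed through a ClusterIP Service and accessed via the service address; the service name and port below are just for this test:

kubectl expose deployment nginx --port=80 --target-port=80
kubectl get svc nginx        # note the CLUSTER-IP assigned to the service
curl http://<cluster-ip>     # replace <cluster-ip> with the address shown above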

 

3. Appendix

When deploying Kubernetes, the kube-dns/CoreDNS pods may keep restarting (error log shown below). This is most likely caused by corrupted iptables rules; the following commands resolved it for me and are recorded here:

 

[root@master ~]# kubectl logs  coredns-78fcdf6894-cj4fx   -n kube-system

E0826 15:16:40.867018       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

E0826 15:16:40.867142       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

Fix:

[root@master docker]# systemctl stop kubelet

[root@master docker]# systemctl stop docker


[root@master docker]# iptables --flush

[root@master docker]# iptables -t nat --flush

[root@master docker]# systemctl start docker

[root@master docker]# systemctl start kubelet
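Once docker and kubelet are back up, kube-proxy recreates its NAT rules; they can be checked with, e.g.:

iptables -t nat -L -n | grep KUBE   # the KUBE-SERVICES / KUBE-POSTROUTING chains should reappear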

 

 

 
