Deployment architecture diagram:
1. Install Docker
At the time of writing, only Docker 18.06 is supported.
1.1. Add the repository:
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
1.2. List the available packages:
# yum list docker-ce.x86_64 --showduplicates | sort -r
1.3. Install Docker:
# yum -y install docker-ce-18.06.0.ce-3.el7
All three servers must have this same version of Docker installed. Remember to enable and start the service afterwards (# systemctl enable docker && systemctl start docker).
2. Install kubelet, kubeadm, and kubectl
kubelet runs on every node in the cluster and is responsible for starting Pods and containers.
kubeadm is used to initialize the cluster.
kubectl is the command-line client for managing the cluster.
2.1. Add the Aliyun mirror (the official repository is not reachable from China):
# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
2.2. Install kubelet, kubeadm, and kubectl:
# yum install -y kubelet kubeadm kubectl
Enable kubelet:
# systemctl enable kubelet.service
Turn off swap (by default kubelet refuses to run with swap enabled):
# swapoff -a
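Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab. A minimal sketch of the sed rewrite, demonstrated here on a sample fstab line rather than the live file:

```shell
# Sample swap entry as found in a stock CentOS 7 /etc/fstab (illustrative):
fstab_line='/dev/mapper/centos-swap swap swap defaults 0 0'
# The same sed expression, run against /etc/fstab on each node, comments it out:
echo "$fstab_line" | sed -r 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/'
```

On the real file this becomes: sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab (keep a backup of /etc/fstab first).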
2.3. Create the cluster with kubeadm
2.3.1. Initialize the master
# kubeadm init --apiserver-advertise-address 192.168.2.120 --pod-network-cidr=10.244.0.0/16
Parameters:
--apiserver-advertise-address  The IP address the API server will advertise it is listening on. Specify '0.0.0.0' to use the address of the default network interface.
--pod-network-cidr             The range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
The command fails:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address 192.168.2.120 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 64.233.188.82:443: connect: connection timed out
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Solution:
Docker Hub mirrors the Google container images, so the required images can be pulled with the following commands:
# docker pull mirrorgooglecontainers/kube-apiserver:v1.13.2
# docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.2
# docker pull mirrorgooglecontainers/kube-scheduler:v1.13.2
# docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
# docker pull mirrorgooglecontainers/pause:3.1
# docker pull mirrorgooglecontainers/etcd:3.2.24
# docker pull coredns/coredns:1.2.6
Tag the images with the names kubeadm expects:
# docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2
# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2
# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
# docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
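The seven pull/tag pairs above all follow one pattern, so they can be scripted. A sketch that prints the commands (a dry run) with the image list taken from the preflight error log above; pipe its output to sh once it looks right:

```shell
# Control-plane images required by kubeadm v1.13.2 (from the preflight error log):
images="kube-apiserver:v1.13.2 kube-controller-manager:v1.13.2 kube-scheduler:v1.13.2 kube-proxy:v1.13.2 pause:3.1 etcd:3.2.24"
for img in $images; do
  echo "docker pull mirrorgooglecontainers/${img}"
  echo "docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img}"
done
# coredns is published under its own Docker Hub namespace:
echo "docker pull coredns/coredns:1.2.6"
echo "docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
```

Saved as pull-images.sh, running sh pull-images.sh | sh executes the pulls and tags.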
Run the initialization again. This time it should complete and print a kubeadm join command at the end; save that command, as it is needed in step 2.5.
2.4. Configure kubectl
2.4.1. On the master, switch to the ckl user and run:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
2.4.2. Add the flannel network
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note that the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init above matches flannel's default pod network.
2.5. Join the two nodes to the cluster
2.5.1. On node1, run:
# kubeadm join 192.168.2.120:6443 --token cr4qie.4izx0ry4bmgzbxgg --discovery-token-ca-cert-hash sha256:3ac0c3aed126752cf0057559609a81d1608b8174dde20c2af559873894c80895
2.5.2. On node2, run:
# kubeadm join 192.168.2.120:6443 --token cr4qie.4izx0ry4bmgzbxgg --discovery-token-ca-cert-hash sha256:3ac0c3aed126752cf0057559609a81d1608b8174dde20c2af559873894c80895
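The token in the join command expires (after 24 hours by default), and the --discovery-token-ca-cert-hash value is simply the SHA-256 of the cluster CA's public key. If the original join command is lost, a new one can be printed on the master with kubeadm token create --print-join-command, or the hash recomputed from /etc/kubernetes/pki/ca.crt. The openssl pipeline below shows how that hash is derived; since the real CA only exists on the master, it is demonstrated on a throwaway certificate:

```shell
# Generate a throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null
# SHA-256 of the cert's public key, the value used in --discovery-token-ca-cert-hash:
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```

Replacing /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt on the master reproduces the hash shown in the join commands above.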
2.6. Check the node status on the master
First add command completion:
# yum install -y bash-completion
# find / -name "bash_completion"
/usr/share/bash-completion/bash_completion
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
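The two source lines only take effect in the current shell. A small sketch to make completion permanent (assuming bash is the login shell):

```shell
# Append the completion hook to ~/.bashrc (once) so new shells pick it up too:
grep -qs 'kubectl completion bash' ~/.bashrc \
  || echo 'source <(kubectl completion bash)' >> ~/.bashrc
```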
All three nodes show NotReady because several components still have to start. These components run in pods; check them with:
$ kubectl get pod --all-namespaces
Wait while Kubernetes downloads the images; failed pulls are retried automatically, so just make sure the image registry is reachable.
After a while, check the node status again:
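Re-checking the node status by hand can be scripted. The awk filter below exits non-zero while any node is NotReady; it is shown here against sample kubectl get nodes output (an assumption of this sketch), which on a real cluster would be replaced by the live command:

```shell
# Sample `kubectl get nodes` output while flannel is still starting (illustrative):
sample='NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   10m   v1.13.2
k8s-node1    NotReady   <none>   2m    v1.13.2
k8s-node2    NotReady   <none>   2m    v1.13.2'
# Exit status 0 only when every node (rows after the header) reports Ready:
echo "$sample" | awk 'NR>1 && $2!="Ready" {bad=1} END {exit bad}' \
  && echo "all nodes Ready" || echo "still waiting"
```

On the master this becomes a wait loop: until kubectl get nodes | awk 'NR>1 && $2!="Ready" {bad=1} END {exit bad}'; do sleep 10; done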