I. Environment
Reference articles:
Offline deployment of Kubernetes 1.9 with kubeadm
Installing version 1.9 with kubeadm
Offline deployment of Kubernetes v1.9.0 with kubeadm
1. Operating system:
ESXi 6.7 virtual machines running CentOS 7.3
2. Hostnames
node1 192.168.123.123
node2 192.168.123.122
node3 192.168.123.121
II. Base Environment Configuration
1. Set up passwordless SSH trust from node3 to the other nodes
[root@node3 ~]# ssh-keygen -t rsa
[root@node3 ~]# ssh-copy-id node1
[root@node3 ~]# ssh-copy-id node2
[root@node3 ~]# ssh-copy-id node3
2. Download the offline package from the Baidu netdisk, extract it, and copy it to every node
Link: https://pan.baidu.com/s/13AyMVUBn2Cr8sb6apUnzrg  Password: loyc
3. Install the Docker packages (all nodes)
# Install required utilities
yum install -y vim ntp
# Extract the downloaded k8s.tar.gz
tar zxf k8s.tar.gz -C /root/
# Install Docker 17.03 from the local RPMs
cd ~/k8s/rpm/
yum localinstall -y docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
# Start and enable docker
systemctl start docker && systemctl enable docker
4. Adjust CentOS for Kubernetes (all nodes)
# Disable the firewall
systemctl disable firewalld && systemctl stop firewalld
# Disable SELinux
setenforce 0
sed -i 's#enforcing#disabled#g' /etc/selinux/config
# Disable swap
swapoff -a && sysctl -w vm.swappiness=0
vim /etc/fstab    # comment out the swap line
# Synchronize the clock
ntpdate cn.pool.ntp.org
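The swap entry in /etc/fstab can also be commented out non-interactively with sed instead of vim. A minimal sketch, run here against a scratch copy rather than the real /etc/fstab (the sample entries are illustrative):

```shell
# Work on a throwaway copy; on a real node the same sed expression
# would be applied to /etc/fstab itself.
fstab_copy=$(mktemp)
cat > "$fstab_copy" <<'EOF'
/dev/mapper/centos-root /        xfs   defaults 0 0
/dev/mapper/centos-swap swap     swap  defaults 0 0
EOF
# Prefix any line that mounts a swap filesystem with '#'
sed -i '/\sswap\s/ s/^/#/' "$fstab_copy"
cat "$fstab_copy"
```

Only the swap line gains the leading `#`; the root filesystem line is untouched.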
5. Set kernel bridge/routing parameters so kubeadm does not emit routing warnings (all nodes)
echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p
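A bare `>>` appends the two keys again on every re-run. A sketch of an idempotent variant (shown against a scratch file standing in for /etc/sysctl.conf) that only appends a key when it is not already present:

```shell
# Scratch file standing in for /etc/sysctl.conf
conf=$(mktemp)
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables; do
    # grep before echo: skip keys that are already configured
    grep -q "^$key" "$conf" || echo "$key = 1" >> "$conf"
done
cat "$conf"
```

Running the loop a second time leaves the file unchanged, so it is safe to include in a provisioning script.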
6. Load the Docker images (all nodes)
[root@node3 k8s]# cd /root/k8s/
[root@node3 k8s]# find . -name "*.tar" -exec docker image load -i {} \;
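The `find ... -exec` one-liner loads every image tarball under the directory in a single pass. A sketch of the same traversal against dummy files, with `echo` standing in for `docker image load -i` (the real tarballs and Docker daemon exist only on the nodes):

```shell
# Scratch directory with dummy .tar files plus one non-tar file
dir=$(mktemp -d)
touch "$dir/etcd.tar" "$dir/kube-apiserver.tar" "$dir/notes.txt"
# Same find/-exec pattern as the image-load one-liner; only *.tar matches
found=$(find "$dir" -name "*.tar" -exec echo loading {} \; | wc -l)
echo "$found tarballs would be loaded"
```

The `-exec ... {} \;` form runs the command once per matching file, so each tarball is loaded individually.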
7. Install the Kubernetes RPMs (all nodes)
rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm
rpm -ivh kubeadm-1.9.0-0.x86_64.rpm
8. Switch the kubelet to Docker's default cgroup driver (all nodes)
sed -i 's#systemd#cgroupfs#g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
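The sed expression flips the kubelet's cgroup driver from systemd to cgroupfs so it matches what this Docker build reports. A sketch of the substitution on a sample of the relevant 10-kubeadm.conf line (the sample line is an assumption based on the kubelet 1.9-era drop-in; verify against the actual file):

```shell
# Sample of the cgroup-driver line as shipped in 10-kubeadm.conf (assumed)
line='Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"'
# '#' as the s/// delimiter avoids escaping; same expression as above
fixed=$(echo "$line" | sed 's#systemd#cgroupfs#g')
echo "$fixed"
```

If the kubelet's driver and Docker's driver disagree, the kubelet fails to start, which is why this step precedes `kubeadm init`.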
9. Enable kubectl command completion on node3
[root@node3 k8s]# echo "source <(kubectl completion bash)" >> ~/.bashrc
III. Creating the Kubernetes Cluster with kubeadm
1. On node3, enable and start the kubelet service
[root@node3 rpm]# systemctl enable kubelet && systemctl start kubelet
2. On node3, initialize the Kubernetes cluster
[root@node3 rpm]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 --token-ttl=0 --ignore-preflight-errors=all
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:
# The kubeadm join command is very important: record and save it, since it is needed to add nodes and cannot be recovered if lost
kubeadm join --token bb3b01.42f41afdd2623461 192.168.123.123:6443 --discovery-token-ca-cert-hash sha256:8b7188df2ec8f3a1617c4eb2482712bbbcc97320422a4c5ecad0c52fa16905b0
Note the output above: the following must be run on the master:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
If kubeadm join fails, the node can be reset with kubeadm reset:
kubeadm reset
If kubeadm init fails, reset the master with the following commands:
systemctl stop kubelet
# Note: the next command kills every running Docker container.
# Before resetting, make sure all running containers can safely be removed
# (i.e. nothing in production affected); otherwise, delete only the containers
# kubeadm created (the gcr.io-related ones) by hand.
docker rm -f -v $(docker ps -q)
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd
3. Join the other nodes to the cluster as shown above
On node1 and node2, synchronize the clock and then run kubeadm join:
ntpdate cn.pool.ntp.org
kubeadm join --token bb3b01.42f41afdd2623461 192.168.123.123:6443 --discovery-token-ca-cert-hash sha256:8b7188df2ec8f3a1617c4eb2482712bbbcc97320422a4c5ecad0c52fa16905b0
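If only the `--discovery-token-ca-cert-hash` value is lost, it can be recomputed from the master's CA certificate (/etc/kubernetes/pki/ca.crt) with openssl. A sketch, demonstrated on a throwaway self-signed certificate since only the master holds the real one:

```shell
# Throwaway cert standing in for /etc/kubernetes/pki/ca.crt
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
    -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
# Hash the DER-encoded public key; this is the digest kubeadm prints
hash=$(openssl x509 -pubkey -noout -in "$tmpdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

On the real master, point `-in` at /etc/kubernetes/pki/ca.crt and prefix the result with `sha256:` in the join command.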
[root@node3 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node1 NotReady <none> 56m v1.9.0
node2 NotReady <none> 56m v1.9.0
node3 NotReady master 58m v1.9.0
4. Deploy the calico add-on, changing the CIDR to the network used at initialization:
...
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"   # set to the --pod-network-cidr used at kubeadm init
...
[root@node3 k8s]# kubectl create -f calico.yaml
[root@node3 k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 1h v1.9.0
node2 Ready <none> 1h v1.9.0
node3 Ready master 1h v1.9.0
[root@node3 k8s]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-tkmpj 1/1 Running 0 7m
kube-system calico-kube-controllers-559b575f97-2zngp 1/1 Running 10 11m
kube-system calico-node-6gxx8 2/2 Running 10 11m
kube-system calico-node-rmd7h 2/2 Running 11 11m
kube-system calico-node-tjwct 2/2 Running 11 11m
kube-system etcd-node3 1/1 Running 1 1h
kube-system kube-apiserver-node3 1/1 Running 1 1h
kube-system kube-controller-manager-node3 1/1 Running 1 1h
kube-system kube-dns-6f4fd4bdf-5vw7d 3/3 Running 3 1h
kube-system kube-proxy-d994x 1/1 Running 1 1h
kube-system kube-proxy-k2bdk 1/1 Running 1 1h
kube-system kube-proxy-ksp6b 1/1 Running 1 1h
kube-system kube-scheduler-node3 1/1 Running 1 1h
IV. Adding Services
Reference article:
Installing and Deploying a Kubernetes Cluster