I. Cluster layout: master is the Kubernetes control-plane node; the private Docker registry runs on slave1
Master | 192.168.235.128 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker
slave1 | 192.168.235.129 | kube-proxy, kubelet, docker, private Docker registry
slave2 | 192.168.235.131 | kube-proxy, kubelet, docker
II. Configuration
1. Disable the firewall on every node
(1) Stop the firewall
systemctl stop firewalld.service
(2) Prevent firewalld from starting at boot
systemctl disable firewalld.service
(3) Check the firewall state (shows "not running" when stopped, "running" when started)
firewall-cmd --state
2. Adjust permissions and create directories on every node
(1) sudo chmod -R 777 /opt
(2) sudo chmod -R 777 /usr/bin/
(3) mkdir /opt/k8s
3. Configure Docker on every node
(1) Set up the yum repository
Copy the docker.repo file (/etc/yum.repos.d/docker.repo in the "every node" folder) to /etc/yum.repos.d/
(2) Install Docker
yum install docker-engine
(3) After installation, check the Docker version
docker -v
(4) Is a domestic (China) registry mirror needed??
4. Set up the private Docker registry on slave1
(1) docker pull registry
(2) Configure a domestic mirror and mark the private registry as trusted
Copy the daemon.json file from the "slave node" folder to /etc/docker/ on slave1. (The private registry's IP must be configured in this file.)
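The daemon.json referenced above is presumably something along these lines (a sketch: the mirror URL is an assumption, and 192.168.235.129 is slave1's IP from the cluster table):

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.235.129:5000"]
}
```

The insecure-registries entry is what lets the Docker daemons push to and pull from the plain-HTTP registry on slave1 without TLS.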
(3) Restart Docker and enable it at boot
systemctl restart docker
systemctl enable docker
(4) Start the registry
docker run -di --name=registry -p 5000:5000 registry
(5) Push an image to the private registry (192.168.235.129 is slave1's IP, where the registry runs)
docker pull kubernetes/pause
docker tag docker.io/kubernetes/pause:latest 192.168.235.129:5000/google_containers/pause-amd64.3.0
docker push 192.168.235.129:5000/google_containers/pause-amd64.3.0
(6) Verify that the push succeeded
http://192.168.235.129:5000/v2/_catalog
5. Configure etcd on the master
(1) Copy etcd-v3.3.9-linux-amd64.tar from the "master node" folder to /opt/k8s and extract it (tar -xzvf etcd-v3.3.9-linux-amd64.tar), then copy etcd and etcdctl from the extracted directory to /usr/bin
(2) Copy the etcd.service file from the "master node" folder to /usr/lib/systemd/system/, and create the data directory: mkdir -p /var/lib/etcd/
(3) Start and test the etcd service
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
etcdctl cluster-health
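The etcd.service unit copied in step (2) is probably a single-node unit roughly like the following (a sketch; the exact flags depend on the file shipped in the folder, and the listen/advertise URLs assume the master's IP):

```ini
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name=etcd1 \
  --data-dir=/var/lib/etcd \
  --listen-client-urls=http://0.0.0.0:2379 \
  --advertise-client-urls=http://192.168.235.128:2379
Restart=on-failure

[Install]
WantedBy=multi-user.target
```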
6. Configure kube-apiserver on the master
(1) Copy kubernetes-server-linux-amd64.tar from the "every node" folder to /opt/k8s and extract it (tar -xzvf kubernetes-server-linux-amd64.tar), then copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl from /opt/k8s/kubernetes/server/bin to /usr/bin:
cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
(2) Copy the kube-apiserver.service file from the "master node" folder to /usr/lib/systemd/system/
(3) Create the directory (mkdir /etc/kubernetes) and copy the apiserver file from the "master node" folder to /etc/kubernetes
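The /etc/kubernetes/apiserver file is an environment-style config; a sketch of what it might contain (variable names and the service CIDR are assumptions; the admission-control list already omits SecurityContextDeny and ServiceAccount, matching the note in step 15):

```ini
# /etc/kubernetes/apiserver -- sketch; actual values come from the file in the "master node" folder
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.235.128:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
```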
7. Configure kube-controller-manager on the master
(1) Copy the kube-controller-manager.service file from the "master node" folder to /usr/lib/systemd/system/
(2) Copy the controller-manager file from the "master node" folder to /etc/kubernetes/ (in the file, --master must be set to the master's IP address)
8. Configure kube-scheduler on the master
(1) Copy the kube-scheduler.service file from the "master node" folder to /usr/lib/systemd/system/
(2) Copy the scheduler file from the "master node" folder to /etc/kubernetes/ (in the file, --master must be set to the master's IP address)
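The controller-manager and scheduler files likely each hold a single args line pointing at the apiserver; a sketch (the variable names and the insecure port 8080 are assumptions):

```ini
# /etc/kubernetes/controller-manager -- sketch; 192.168.235.128 is the master's IP
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.235.128:8080"

# /etc/kubernetes/scheduler -- sketch
KUBE_SCHEDULER_ARGS="--master=http://192.168.235.128:8080"
```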
9. Configure kubelet on slave1 and slave2
(1) Copy kubernetes-server-linux-amd64.tar from the "every node" folder to /opt/k8s and extract it (tar -xzvf kubernetes-server-linux-amd64.tar), then copy kubelet and kube-proxy from /opt/k8s/kubernetes/server/bin to /usr/bin:
cd /opt/k8s/kubernetes/server/bin
cp kubelet kube-proxy /usr/bin/
(2) Copy the kubelet.service file from the "slave node" folder to /usr/lib/systemd/system/, open up permissions (sudo chmod -R 777 /var/lib), and create the working directory:
mkdir -p /var/lib/kubelet
(3) Create the /etc/kubernetes directory (mkdir /etc/kubernetes) and copy the kubelet file from the "slave node" folder to /etc/kubernetes (in the file, --hostname-override must be set to the local node's IP address)
(4) Copy the kubeconfig file from the "slave node" folder to /etc/kubernetes (in the file, server must be set to the master's IP address)
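The kubeconfig copied in step (4) could be a minimal file along these lines (a sketch assuming the apiserver's insecure port 8080; 192.168.235.128 is the master's IP):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.235.128:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
```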
10. Configure kube-proxy on slave1 and slave2
(1) Copy the kube-proxy.service file from the "slave node" folder to /usr/lib/systemd/system/
(2) Copy the proxy file from the "slave node" folder to /etc/kubernetes (in the file, --master must be set to the master's IP address)
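Like the master-side config files, the proxy file likely contains one args line; a sketch (variable name and port are assumptions):

```ini
# /etc/kubernetes/proxy -- sketch; 192.168.235.128 is the master's IP
KUBE_PROXY_ARGS="--master=http://192.168.235.128:8080"
```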
11. Start the master services
After completing the configuration above, start the services in the following order:
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check that each service is healthy:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
12. Start the slave services
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
13. Testing
(1) Copy nginx-rc.yaml and nginx-svc.yaml from the "master node" folder to /opt/k8s/
(2) Submit the rc and service definitions to create the service
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml
(3) List the pods in the current namespace
kubectl get pods
(4) List all resources
kubectl get all
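The two manifests copied in step (1) are presumably something like the following (a sketch; the replica count, image, and ports are assumptions):

```yaml
# nginx-rc.yaml -- a ReplicationController running nginx
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# nginx-svc.yaml -- a NodePort Service exposing the pods
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080
```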
14. For more details, see the other four files.
15. "No resources found" from kubectl get pods
(1) On the slave nodes, edit /etc/kubernetes/kubelet and set KUBELET_ARGS="--pod_infra_container_image=192.168.235.129:5000/google_containers/pause-amd64.3.0" (this must match the image pushed to the private registry on slave1; the IP is slave1's, where the registry runs)
(2) On the master, edit /etc/kubernetes/apiserver: find KUBE_ADMISSION_CONTROL and remove SecurityContextDeny and ServiceAccount
Note: the files copied in the steps above should already contain these changes, so normally nothing needs to be done; they are listed here only as a reminder.