Setting up the k8s UI (dashboard) service

I. Environment configuration

master  172.16.101.199  docker, apiserver, controller-manager, scheduler
etcd    172.16.101.199  etcd
node1   172.16.101.220  flannel, docker, kubelet, kube-proxy
node2   172.16.101.221  flannel, docker, kubelet, kube-proxy

1. Set up the hosts file (the same entries on every machine)
172.16.101.199 master
172.16.101.199 etcd
172.16.101.220 node1
172.16.101.221 node2

2. Basic settings (the commands for 2.1, 2.2, 2.5 and 2.6 are sketched after this list)
2.1 Disable the firewall
2.2 Disable SELinux
2.3 Set up the hosts file (see step 1)
2.4 Enable IPv4 forwarding
On CentOS 7, edit /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Run sudo sysctl -p to apply the settings immediately.
2.5 Disable swap: to disable swap permanently, comment out the swap entry in /etc/fstab
2.6 Passwordless SSH login between the machines
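
A minimal sketch of those commands on CentOS 7, to be run on every machine (user and host names are only examples; adjust to your environment):

# 2.1 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# 2.2 Disable SELinux (fully effective after a reboot)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# 2.5 Turn swap off now; commenting it out in /etc/fstab keeps it off after reboot
swapoff -a
# 2.6 Passwordless SSH from the master to the nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@node1
ssh-copy-id root@node2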

3. master:

(1) Install docker (CentOS 7)

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker package repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the yum package index
yum makecache fast
# Install Docker CE
yum install docker-ce -y
# Start docker and enable it at boot
systemctl start docker
systemctl enable docker

# Uninstall, method 1
yum remove docker-ce
rm -rf /var/lib/docker

# Uninstall docker, method 2:
yum list installed | grep docker
# Remove the installed package
sudo yum -y remove docker-engine.x86_64

(2) Install kubernetes, flannel and etcd
yum install kubernetes-master etcd flannel -y
(3) Configure etcd
cat /etc/etcd/etcd.conf |egrep -v "^#|^$"

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" ## listen address and port
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379" ## advertised client URL; for an etcd cluster, append the other servers' URLs here

## Start the etcd service
#systemctl start etcd
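
A quick sanity check once etcd is up, assuming the etcd v2 API that these packages (and the etcdctl mk command below) rely on:

etcdctl cluster-health    # should report the member as healthy
etcdctl ls /              # lists the top-level keys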

(4) Configure kubernetes (master)
#cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" ## address the apiserver binds to
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379" ## etcd URL used by kube
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16" ## service cluster IP range; this setup uses the docker container subnet
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""

#cat /etc/kubernetes/config |egrep -v "^#|^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.16.101.199:8080" ## kube master API URL
(5) Configure flannel
#cat /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/kube/network" ## note: the prefix here is /kube, not the package default

Important: the next command is critical.

etcdctl mk /kube/network/config '{"Network":"172.17.0.0/16"}' ## this network must match the 172.17.0.0/16 range used above

Error seen when that key is missing:
E0808 11:09:44.387201 10537 network.go:102] failed to retrieve network config: 100: Key not found (/kube) [3]
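
If you hit this error, flanneld could not find the network config under the configured prefix. A quick way to check and fix it, again assuming the etcd v2 API:

etcdctl get /kube/network/config    # must return {"Network":"172.17.0.0/16"}
# if the key is missing, create it and restart flanneld:
etcdctl mk /kube/network/config '{"Network":"172.17.0.0/16"}'
systemctl restart flanneld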

4. node1/node2 installation

1) Install the packages.
#yum install kubernetes-node flannel -y # this pulls in docker 1.13.1 by default; docker only needs to be started
systemctl enable docker
systemctl start docker
docker version
2) Configure flannel

#cat /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/kube/network" ## note: the prefix is /kube, the same as on the master

systemctl start flanneld

3) Configure kubelet
#cd /etc/kubernetes
#cat config |egrep -v "^#|^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.16.101.199:8080" ## kube master API URL

#cat kubelet |egrep -v "^#|^$"
KUBELET_ADDRESS="--address=0.0.0.0" ## address kubelet binds to
KUBELET_PORT="--port=10250" ## kubelet port
KUBELET_HOSTNAME="--hostname-override=172.16.101.220" ## kubelet hostname, shown by kubectl get nodes on the master (use 172.16.101.221 on node2)
KUBELET_API_SERVER="--api-servers=http://172.16.101.199:8080" ## kube master API URL
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

5. Startup order

master:
systemctl start docker # start
systemctl status docker # check status
systemctl start etcd
systemctl status etcd
systemctl start flanneld
systemctl status flanneld
Check the interfaces with ip addr: a flannel0 device should appear, and its address should be in the same subnet as docker0. If they do not match, make sure the services above started correctly.
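
A minimal sketch of that check (the interface name may differ depending on the flannel backend):

ip addr show flannel0
ip addr show docker0
# both should carry addresses inside the 172.17.0.0/16 network configured in etcd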

Startup order: kube-apiserver must come first.
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler

node:
systemctl start docker.service
systemctl start kube-proxy
systemctl start kubelet

6. Verify the configuration
Browse to http://kube-apiserver:port
http://172.16.101.199:8080 lists all available API paths
http://172.16.101.199:8080/healthz/ping reports the health status
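
The same checks from the command line, a sketch that can be run from any machine that reaches the master:

curl http://172.16.101.199:8080/        # list of API paths
curl http://172.16.101.199:8080/healthz # should return "ok"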

7. Enable the k8s dashboard:

master:

1) Verify the cluster from the master.
#kubectl get nodes ## list the registered nodes
NAME STATUS AGE
172.16.101.220 Ready 1h
172.16.101.221 Ready 1h

#kubectl get namespace ## list all namespaces
NAME STATUS AGE
default Active 1h
kube-system Active 1h


2) Create kubernetes-dashboard.yaml and deploy it

cd /usr/local/src/docker/

kubectl delete -f kubernetes-dashboard.yaml   # only needed when re-deploying an existing dashboard
kubectl create -f kubernetes-dashboard.yaml
kubectl get pods --namespace=kube-system

kubectl get pod --all-namespaces
kubectl describe pods kubernetes-dashboard-2215670400-w0j11 --namespace=kube-system

On the nodes:
systemctl restart flanneld
systemctl start kube-proxy
systemctl start kubelet

node1-2

Run on each node (this provides the redhat-uep.pem certificate needed to pull the pod-infrastructure image):
yum install python-rhsm
yum install rhsm
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
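
Once the dashboard pod shows Running, a quick way to check it (a sketch; with an insecure apiserver like this one, the dashboard of that era was typically reachable through the apiserver proxy):

kubectl get pods --namespace=kube-system -o wide   # the kubernetes-dashboard pod should be Running
# then open the UI in a browser, e.g.
# http://172.16.101.199:8080/ui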

# Basic commands
kubectl get po/svc/cm/rc          : list resources (pods/services/configmaps/replication controllers)
kubectl describe po <name>        : show details
kubectl delete po <name>          : delete a resource
-o wide                           : show extra columns
--all-namespaces                  : all namespaces
-n <name>                         : select a namespace (can be omitted for default)
kubectl apply/create -f aaa.yaml  : apply a yaml file
kubectl exec <pod> -it -- bash    : open a shell inside the container
exit                              : leave the container



Problem 1
The dashboard resources already exist when re-creating:
[root@localhost docker]# kubectl create -f kubernetes-dashboard.yaml
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
Resolution: delete the existing resources first, then re-create:
kubectl delete namespace kube-system
kubectl delete -f kubernetes-dashboard.yaml

https://www.jb51.net/article/94343.htm/

Problem 2
Timeouts when reaching the dashboard pod:
Error: 'dial tcp 172.17.71.2:9090: getsockopt: no route to host'
Trying to reach: 'http://172.17.71.2:9090/'
or 'getsockopt: connection timed out'

If docker 1.13 or later is installed, the network is reachable, and flannel and etcd are both healthy, but you still see 'getsockopt: connection timed out', the cause is usually the iptables configuration. For example:

Error: 'dial tcp 10.233.50.3:8443: getsockopt: connection timed out'

Starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which breaks traffic to pod IPs on other nodes. When that happens, set the policy back to ACCEPT manually:

sudo iptables -P FORWARD ACCEPT

Checking with iptables -nL may still show the FORWARD policy as DROP even though iptables -P FORWARD ACCEPT was run: docker started after that command and reset the policy, so the command would have to be repeated after every docker start. Rather than doing that by hand, modify docker's systemd unit:

vi /usr/lib/systemd/system/docker.service

[Service]
Type=notify

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS

# Added line: the FORWARD rule is installed every time docker starts
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

ExecReload=/bin/kill -s HUP $MAINPID

Add the ExecStartPost line shown above under the [Service] section of the unit file.

Then run systemctl daemon-reload, restart docker, and check the dashboard page again.

If the problem still cannot be resolved, install the node components on the master as well, so that the master joins the flannel network and can reach the pod IPs directly.
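
A sketch of that workaround, reusing the node installation steps above on the master (point kubelet at the local apiserver, as in the node kubelet config):

yum install kubernetes-node -y
systemctl start kube-proxy
systemctl start kubelet
kubectl get nodes    # the master should now also appear as a node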
