Kubernetes 1.12 Binary Offline Installation Summary

I. Installation Overview

This guide installs Kubernetes 1.12 using offline binary packages. It sets up the basic Kubernetes components without certificates, so this document is suitable for learning how to build a Kubernetes environment.

II. Resource Preparation

Kubernetes 1.12 installation packages:
Download: https://pan.baidu.com/s/15dSgsOVwmmk9gD6tbytTuQ

File name                                Required?
kubernetes-node-linux-amd64.tar.gz       required
kubernetes-server-linux-amd64.tar.gz     required
kubernetes-client-linux-amd64.tar.gz     optional
kubernetes-client-windows-amd64.tar.gz   optional
kubernetes.tar.gz                        optional (for reference)

Servers: four CentOS 7 servers

Server name   IP address
master        192.168.0.5
node1         192.168.0.6
node2         192.168.0.7
node3         192.168.0.8
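
So the machines can refer to each other by name, you may also want hostname entries on every server. This /etc/hosts sketch is an assumption, not part of the original steps:

192.168.0.5 master
192.168.0.6 node1
192.168.0.7 node2
192.168.0.8 node3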

III. Installation Steps

1. Install etcd (distributed key-value store)

yum install -y etcd.x86_64 

etcd is installed on all four machines, which together form the etcd cluster. Modify the /lib/systemd/system/etcd.service unit file and add the cluster startup parameters to ExecStart. The added flags, all read from /etc/etcd/etcd.conf, are --listen-peer-urls, --initial-advertise-peer-urls, --advertise-client-urls, --initial-cluster-token, --initial-cluster, and --initial-cluster-state:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Edit the /etc/etcd/etcd.conf configuration file. The values below are for the master; on each other node, replace ETCD_NAME and the local IP addresses accordingly:

# Data storage directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Cluster peer communication URLs; use this host's IP address
ETCD_LISTEN_PEER_URLS="http://192.168.0.5:2380"
# URLs for external clients; use this host's IP address
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.5:2379,http://0.0.0.0:2379"
# etcd node name
ETCD_NAME="master"
# Peer URLs advertised to the other cluster members; use this host's IP address
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.5:2380"
# Client URLs advertised to external clients; use this host's IP address
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.5:2379"
# Initial cluster member list: every node name with its corresponding IP address
ETCD_INITIAL_CLUSTER="master=http://192.168.0.5:2380,node1=http://192.168.0.6:2380,node2=http://192.168.0.7:2380,node3=http://192.168.0.8:2380"
# Cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
# Initial cluster state; "new" creates a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
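
For example, on node1 only the host-specific values change; a sketch of the differing keys (the rest stay the same as above):

ETCD_LISTEN_PEER_URLS="http://192.168.0.6:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.6:2379,http://0.0.0.0:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.6:2379"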

Enable the service to start on boot and start it:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd 

Check that the installation succeeded.
List the cluster members:

etcdctl member list

Check the cluster health on each node:

etcdctl cluster-health


Problem: the cluster check shows only the local node and no information about the other nodes.

This happens when the cluster configuration does not take effect because the etcd service had already initialized its database before the cluster settings were added. Delete all files under /var/lib/etcd/default.etcd/member/ on each node and restart the service. The restarts will report errors until every node has been restarted; ignore them, finish restarting all nodes, then check that the service is running everywhere and start any node that is not.
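
A minimal sketch of that cleanup on one affected node, assuming the default data directory from etcd.conf above:

# Stop etcd, wipe the initialized database, then start with the cluster settings
systemctl stop etcd
rm -rf /var/lib/etcd/default.etcd/member/
systemctl start etcd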

2. Install the master

Copy kubernetes-server-linux-amd64.tar.gz to the master server, extract it, and copy the kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl binaries to /usr/bin/ (a sketch follows).
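
A minimal sketch of the extract-and-copy step, assuming the tarball is in the current directory (the server binaries sit under kubernetes/server/bin/ inside the archive):

tar -xzf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube-apiserver /usr/bin/
cp kubernetes/server/bin/kube-controller-manager /usr/bin/
cp kubernetes/server/bin/kube-scheduler /usr/bin/
cp kubernetes/server/bin/kubectl /usr/bin/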

Create the service unit files.
Create kube-apiserver.service:

[sysadmin@master ~]$ cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Create kube-controller-manager.service:

[sysadmin@master ~]$ cat /lib/systemd/system/kube-controller-manager.service 
[Unit] 
Description=Kubernetes Controller Manager 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes 
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager 
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS 
Restart=on-failure 
LimitNOFILE=65536 
[Install] 
WantedBy=multi-user.target

Create kube-scheduler.service:

[sysadmin@master ~]$ cat /lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler Plugin 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes 
[Service] 
EnvironmentFile=-/etc/kubernetes/scheduler 
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS 
Restart=on-failure 
LimitNOFILE=65536 
[Install] 
WantedBy=multi-user.target

Create the Kubernetes master component configuration files under /etc/kubernetes/.
Create apiserver:

[sysadmin@master kubernetes]$ cat apiserver 
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernets/log --v=2"

Create controller-manager:

[sysadmin@master kubernetes]$ cat controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=true --log-dir=/var/log/kubernets/log --v=2"

Create scheduler:

[sysadmin@master kubernetes]$ cat scheduler
KUBE_SCHEDULER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=false --log-dir=/var/log/kubernets/log --v=2"
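
All three configs above log to /var/log/kubernets/log (the directory name exactly as written in the configs); create it before starting the services:

mkdir -p /var/log/kubernets/log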

Enable the services to start on boot and start them:

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service

Verify that the master was installed successfully:

[sysadmin@master ~]$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

3. Install the nodes

Copy kubernetes-node-linux-amd64.tar.gz to every node server and extract it, then copy kubectl, kubelet, and kube-proxy to /usr/bin/ (a sketch follows).
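
A minimal sketch, assuming the tarball is in the current directory (the node binaries sit under kubernetes/node/bin/ inside the archive):

tar -xzf kubernetes-node-linux-amd64.tar.gz
cp kubernetes/node/bin/kubectl /usr/bin/
cp kubernetes/node/bin/kubelet /usr/bin/
cp kubernetes/node/bin/kube-proxy /usr/bin/
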
Create kubelet.service:

[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
# Create this directory manually (see the command after this unit file)
WorkingDirectory=-/var/kubeletwork
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
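
The working directory referenced in the unit file must exist before kubelet starts; create it on every node:

mkdir -p /var/kubeletwork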

Create kube-proxy.service:

[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service
[Service]
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Create kubelet.kubeconfig:

[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/kubelet.kubeconfig 
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.0.5:8080
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local

Create the kubelet config file.
The config files for the other nodes are not repeated here; on each node, replace the --hostname-override= and --address= values with that machine's IP address.

[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=192.168.0.6  --logtostderr=true --log-dir=/var/log/kubernets/log  --v=2  --address=192.168.0.6  --port=10250 --fail-swap-on=false --pod-infra-container-image=zengshaoyong/pod-infrastructure"

Note the pod infrastructure image here: the Google Kubernetes image registry cannot be reached directly from mainland China, so the pod base Docker image has to be pulled manually and kubelet pointed at a substitute via --pod-infra-container-image=zengshaoyong/pod-infrastructure; --fail-swap-on=false lets kubelet start even with swap enabled.
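
A sketch of pre-pulling that image on each node (assumes docker is already installed and running):

docker pull zengshaoyong/pod-infrastructure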

Create proxy:

[sysadmin@ucentosk8snode1 ~]$ cat /etc/kubernetes/proxy 
KUBE_PROXY_ARGS="--master=http://192.168.0.5:8080 --hostname-override=node1 --v=2 --logtostderr=true --log-dir=/var/log/kubernets/log"

Enable the services to start on boot and start them:

systemctl daemon-reload
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kubelet
systemctl start kube-proxy

Check that the services started successfully:

systemctl status kubelet
systemctl status kube-proxy

Check the node status (run on the master):

kubectl get nodes
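
If all nodes have registered, the output should resemble the following (illustrative; your AGE values and exact patch VERSION will differ):

NAME          STATUS   ROLES    AGE   VERSION
192.168.0.6   Ready    <none>   1m    v1.12.0
192.168.0.7   Ready    <none>   1m    v1.12.0
192.168.0.8   Ready    <none>   1m    v1.12.0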

4. Install the flannel network component

Install flannel on all machines (master and node servers):

yum -y install flannel

Edit the configuration file:

[sysadmin@ucentosk8snode1 ~]$ cat /etc/sysconfig/flanneld
# Flanneld configuration options  
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/etc/kubernetes/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Initialize the address range data.
Run the following command to write the network configuration into etcd:

etcdctl set /etc/kubernetes/network/config '{"Network": "172.20.0.0/16"}'
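
To confirm the key was written, read it back (etcdctl v2 syntax, matching the set command above):

etcdctl get /etc/kubernetes/network/config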

Run the following commands to enable and start the flannel service, then restart docker and the Kubernetes components: kube-apiserver, kube-controller-manager, and kube-scheduler on the master, and kubelet and kube-proxy on the nodes.
On the master, run:

systemctl daemon-reload
systemctl enable flanneld.service 
systemctl start flanneld.service 
systemctl restart docker.service
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

On each node, run:

systemctl daemon-reload
systemctl enable flanneld.service 
systemctl start flanneld.service 
systemctl restart docker.service
systemctl restart kubelet.service
systemctl restart kube-proxy.service

Check that the network interfaces are correct.
Run ifconfig and verify that the docker0 and flannel0 interfaces are in the same IP address range:

[sysadmin@ucentosk8snode1 ~]$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 169.168.93.1  netmask 255.255.255.0  broadcast 169.168.93.255
        inet6 fe80::42:59ff:fe3c:1080  prefixlen 64  scopeid 0x20<link>
        ether 02:42:59:3c:10:80  txqueuelen 0  (Ethernet)
        RX packets 104  bytes 9582 (9.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 191563  bytes 36020318 (34.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 169.168.93.0  netmask 255.255.0.0  destination 169.168.93.0
        inet6 fe80::f6df:f17:7410:cb99  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 136  bytes 18806 (18.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 171  bytes 15494 (15.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

If they are not in the same address range, edit the docker service unit file so that it loads the /run/flannel/docker environment file generated by flannel and passes the variables below to dockerd, then restart the docker service and check again whether docker0 and flannel0 share a network range:

[sysadmin@ucentosk8snode1 ~]$ cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd  $DOCKER_OPT_BIP  $DOCKER_OPT_IPMASQ  $DOCKER_OPT_MTU  $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
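
After editing the unit file, reload systemd, restart docker, and re-run ifconfig to confirm the two interfaces now share an address range:

systemctl daemon-reload
systemctl restart docker.service
ifconfig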