k8s, let's do this! Hands-on binary deployment of a multi-master k8s cluster

Preface

1: Multi-node k8s deployment from binaries

1.1: Environment overview

  • The topology diagram below also includes a Harbor registry that is not covered here; it can simply be deployed on a separate standalone server

  • (topology diagram of the cluster)

  • Host allocation

    Hostname    IP address         Resources    Services deployed
    nginx01     192.168.233.128    2G + 4 CPU   nginx, keepalived
    nginx02     192.168.233.129    2G + 4 CPU   nginx, keepalived
    VIP         192.168.233.100    -            -
    master      192.168.233.131    1G + 2 CPU   apiserver, scheduler, controller-manager, etcd
    master02    192.168.233.130    1G + 2 CPU   apiserver, scheduler, controller-manager
    node01      192.168.233.132    2G + 4 CPU   kubelet, kube-proxy, docker, flannel, etcd
    node02      192.168.233.133    2G + 4 CPU   kubelet, kube-proxy, docker, flannel, etcd
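
  • (Optional) For convenience, the hostnames above can be mapped in /etc/hosts on every machine. This is only a sketch based on the allocation table; the rest of the walkthrough uses IP addresses directly, so it is not strictly required.

    [root@master ~]# vim /etc/hosts	'//optional: name resolution for the hosts in the table'
    192.168.233.128 nginx01
    192.168.233.129 nginx02
    192.168.233.131 master
    192.168.233.130 master02
    192.168.233.132 node01
    192.168.233.133 node02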

1.2: Operations on the master02 node

  • Initial setup

    Disable the firewall, disable SELinux (core protection), and disable NetworkManager (be sure to disable NetworkManager in a production environment)

    [root@localhost ~]# hostnamectl set-hostname master02	'//change the hostname'
    [root@localhost ~]# su
    [root@master02 ~]# 
    [root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld	'//disable the firewall'
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@master02 ~]#  setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	'//disable SELinux'
    [root@master02 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager	'//disable NetworkManager'
    Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
    Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
    Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
    
    
  • On the master node, copy the kubernetes configuration directory and the service startup scripts (systemd unit files) to master02

    [root@master ~]# scp -r /opt/kubernetes/ [email protected]:/opt/
    [root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service [email protected]:/usr/lib/systemd/system/
    
    
  • On master02, modify the IP addresses in the apiserver configuration file (a sed sketch for the same edit follows the block below)

    [root@master02 ~]# cd /opt/kubernetes/cfg/
    [root@master02 cfg]# ls
    kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
    [root@master02 cfg]# vim kube-apiserver
    
    KUBE_APISERVER_OPTS="--logtostderr=true \
    --v=4 \
    --etcd-servers=https://192.168.233.131:2379,https://192.168.233.132:2379,https://192.168.233.133:2379 \
    --bind-address=192.168.233.130 \	'//change the bind address here'
    --secure-port=6443 \
    --advertise-address=192.168.233.130 \	'//change the advertise address here'
    ...omitted
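
  • The same edit can also be done with sed; this is only a sketch, and it deliberately targets just the --bind-address and --advertise-address flags so the 192.168.233.131 entry in --etcd-servers is left untouched

    [root@master02 cfg]# sed -i 's/--bind-address=192.168.233.131/--bind-address=192.168.233.130/; s/--advertise-address=192.168.233.131/--advertise-address=192.168.233.130/' kube-apiserver
    [root@master02 cfg]# grep -E 'bind-address|advertise-address' kube-apiserver	'//both lines should now show 192.168.233.130'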
    
    
  • Copy the etcd certificates from the master node to master02 (master02 must have the etcd certificates in order to communicate with etcd)

    [root@master ~]# scp -r /opt/etcd/ [email protected]:/opt
    
  • On master02, check the etcd certificates and start the three services

    [root@master02 ~]# tree /opt/etcd
    /opt/etcd
    ├── bin
    │   ├── etcd
    │   └── etcdctl
    ├── cfg
    │   └── etcd
    └── ssl
        ├── ca-key.pem
        ├── ca.pem
        ├── server-key.pem
        └── server.pem
    
    3 directories, 7 files
    [root@master02 ~]# systemctl start kube-apiserver.service
    [root@master02 ~]# systemctl status kube-apiserver.service
    [root@master02 ~]# systemctl enable kube-apiserver.service
    [root@master02 ~]# systemctl start kube-controller-manager.service
    [root@master02 ~]# systemctl status kube-controller-manager.service
    [root@master02 ~]# systemctl enable kube-controller-manager.service
    [root@master02 ~]# systemctl enable kube-scheduler.service
    [root@master02 ~]# systemctl start kube-scheduler.service
    [root@master02 ~]# systemctl status kube-scheduler.service
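
  • Optionally, all three control-plane services can be confirmed in one command

    [root@master02 ~]# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler	'//each of the three should report active'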
    
    
  • Add the environment variable and check the node status

    [root@master02 ~]# echo export PATH=$PATH:/opt/kubernetes/bin >> /etc/profile
    [root@master02 ~]# source /etc/profile
    [root@master02 ~]# kubectl get node
    NAME              STATUS   ROLES    AGE   VERSION
    192.168.233.132   Ready    <none>   23h   v1.12.3
    192.168.233.133   Ready    <none>   23h   v1.12.3
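
  • The control-plane components can also be health-checked from master02 (an optional extra check)

    [root@master02 ~]# kubectl get cs	'//scheduler, controller-manager and the etcd members should all show Healthy'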
    
    

1.3: Nginx load-balancing cluster deployment

  • Initial setup on the two nginx hosts (only nginx01 is shown): disable the firewall and SELinux, and configure the nginx yum repository

    [root@localhost ~]# hostnamectl set-hostname nginx01	'//change the hostname'
    [root@localhost ~]# su
    [root@nginx01 ~]#  
    [root@nginx01 ~]# systemctl stop firewalld && systemctl disable firewalld	'//disable the firewall and SELinux'
    [root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	
    [root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo 	'//configure the nginx yum repository'
    [nginx]
    name=nginx.repo
    baseurl=http://nginx.org/packages/centos/7/$basearch/
    enabled=1
    gpgcheck=0
    [root@nginx01 ~]# yum clean all
    [root@nginx01 ~]# yum makecache
    
  • Install nginx on both nginx hosts and enable layer-4 (stream) forwarding (only nginx01 is shown); the stream block sits at the top level of nginx.conf, outside the http block

    [root@nginx01 ~]# yum -y install nginx	'//install nginx'
    [root@nginx01 ~]# vi /etc/nginx/nginx.conf 
    ...content omitted
    stream {

        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log  /var/log/nginx/k8s-access.log  main;        ## specify the access log location

        upstream k8s-apiserver {
            # IP address and port of master
            server 192.168.233.131:6443;	'//6443 is the apiserver port'
            # IP address and port of master02
            server 192.168.233.130:6443;
        }
        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }
    ...content omitted
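
  • The stream {} block only works if nginx was built with the stream module; the nginx.org package normally includes it, but it can be verified before starting the service (a quick optional check)

    [root@nginx01 ~]# nginx -V 2>&1 | grep -o with-stream | sort -u	'//prints "with-stream" if the module is compiled in'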
    
    


  • Start the nginx service

    [root@nginx01 ~]# nginx -t	'//check the nginx configuration syntax'
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    [root@nginx01 ~]# systemctl start nginx	'//start the service'
    [root@nginx01 ~]# systemctl status nginx
    [root@nginx01 ~]# netstat -ntap |grep nginx	'//port 6443 should now be listening'
    tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1849/nginx: master  
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1849/nginx: master 
    
  • Deploy keepalived on both nginx hosts (only nginx01 is shown)

    [root@nginx01 ~]# yum -y install keepalived 
    [root@nginx01 ~]# vim /etc/keepalived/keepalived.conf 
    ! Configuration File for keepalived 
     
    global_defs { 
       # notification recipient addresses 
       notification_email { 
         [email protected] 
         [email protected] 
         [email protected] 
       } 
       # notification sender address 
       notification_email_from [email protected]  
       smtp_server 127.0.0.1 
       smtp_connect_timeout 30 
       router_id NGINX_MASTER 
    } 
    
    vrrp_script check_nginx {
        script "/usr/local/nginx/sbin/check_nginx.sh"	'//location of the keepalived health-check script'
    }
    
    vrrp_instance VI_1 { 
        state MASTER 	'//set to BACKUP on nginx02'
        interface ens33
        virtual_router_id 51 '//must be the same value on both nginx nodes'
        priority 100    '//priority; set to 90 on nginx02' 
        advert_int 1    '//VRRP advertisement (heartbeat) interval; default is 1 second '
        authentication { 
            auth_type PASS      
            auth_pass 1111 
        }  
        virtual_ipaddress { 
            192.168.233.100/24 	'//the VIP address'
        } 
        track_script {
            check_nginx
        } 
    }
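
  • For reference, only the VRRP state and priority differ on nginx02; this is a sketch based on the comments above (everything else, including virtual_router_id and the VIP, stays identical)

    vrrp_instance VI_1 { 
        state BACKUP 	'//nginx02 is the backup'
        interface ens33
        virtual_router_id 51 	'//same VRRP group as nginx01'
        priority 90 	'//lower priority than nginx01'
        ...
    }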
    
    
  • Create the monitoring script, start the keepalived service, and check the VIP

    [root@nginx01 ~]# mkdir -p /usr/local/nginx/sbin/	'//create the directory for the monitoring script'
    [root@nginx01 ~]# vim /usr/local/nginx/sbin/check_nginx.sh	'//write the monitoring script'
    #!/bin/bash
    # stop keepalived when no nginx process is left, so the VIP fails over to the other node
    count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
    
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    [root@nginx01 ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh	'//make the script executable'
    [root@nginx01 ~]# systemctl start keepalived	'//start the service'
    [root@nginx01 ~]# systemctl status keepalived
    [root@nginx01 ~]# ip a	'//check the IP addresses on both nginx servers'
        the VIP is on nginx01
    [root@nginx02 ~]# ip a
    
  • Verify VIP failover

    [root@nginx01 ~]# pkill nginx	'//stop the nginx service'
    [root@nginx01 ~]# systemctl status keepalived	'//keepalived has stopped as well'
    [root@nginx02 ~]# ip a	'//the VIP has now moved to nginx02'
    
    
  • Restore the VIP to nginx01

    [root@nginx01 ~]# systemctl start nginx
    [root@nginx01 ~]# systemctl start keepalived	'//start nginx first, then start the keepalived service'
    [root@nginx01 ~]# ip a	'//check again: the VIP is back on nginx01'
    
    
  • Modify the configuration files on both worker nodes (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig) so they all point at the VIP; only node01 is shown (a sed sketch for the same edit follows the block below)

    [root@node01 ~]# vi /opt/k8s/cfg/bootstrap.kubeconfig 
        server: https://192.168.233.100:6443	'//change this address to the VIP'
    [root@node01 ~]# vi /opt/k8s/cfg/kubelet.kubeconfig 
        server: https://192.168.233.100:6443	'//change this address to the VIP'
    [root@node01 ~]# vi /opt/k8s/cfg/kube-proxy.kubeconfig 
        server: https://192.168.233.100:6443	'//change this address to the VIP'
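
  • The same change can be made with a single sed command; this sketch assumes the three files previously pointed at the original master, 192.168.233.131

    [root@node01 ~]# cd /opt/k8s/cfg/
    [root@node01 cfg]# sed -i 's#server: https://192.168.233.131:6443#server: https://192.168.233.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig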
    
  • Restart the services on both worker nodes

    [root@node01 ~]# systemctl restart kubelet
    [root@node01 ~]# systemctl restart kube-proxy
    [root@node01 ~]# cd /opt/k8s/cfg/
    [root@node01 cfg]# grep 100 *	'//the VIP change took effect'
    bootstrap.kubeconfig:    server: https://192.168.233.100:6443
    kubelet.kubeconfig:    server: https://192.168.233.100:6443
    kube-proxy.kubeconfig:    server: https://192.168.233.100:6443
    
    
  • Check the k8s access log on nginx01

    [root@nginx01 ~]# tail /var/log/nginx/k8s-access.log 	'//the entries below were generated when the node services were restarted'
    192.168.233.132 192.168.233.131:6443 - [01/May/2020:01:25:59 +0800] 200 1121
    192.168.233.132 192.168.233.131:6443 - [01/May/2020:01:25:59 +0800] 200 1121
    
    
  • Create a test pod from the master node

    [root@master ~]# kubectl run nginx --image=nginx	'//create a test nginx pod'
    kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
    deployment.apps/nginx created
    [root@master ~]# kubectl get pods	'//check the status: the pod is still being created'
    NAME                    READY   STATUS              RESTARTS   AGE
    nginx-dbddb74b8-5s6h7   0/1     ContainerCreating   0          13s
    [root@master ~]# kubectl get pods	'//check again after a moment: the pod has been created; it can also be viewed from master02'
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-dbddb74b8-5s6h7   1/1     Running   0          23s
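
  • As the deprecation warning suggests, kubectl create can be used instead of kubectl run; the name nginx is already taken by the deployment created above, so this sketch uses a different (hypothetical) name

    [root@master ~]# kubectl create deployment nginx-test --image=nginx	'//non-deprecated alternative to the kubectl run above'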
    
    
  • View the pod logs

    [root@master ~]# kubectl logs nginx-dbddb74b8-5s6h7	'//viewing the pod logs fails with a permission error'
    Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-5s6h7)
    [root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous	'//grant the cluster's anonymous user administrator privileges'
    [root@master ~]# kubectl logs nginx-dbddb74b8-5s6h7	'//the logs are now accessible, but nothing has been logged yet'
    
    
  • Access the pod on its node to generate log entries, then view them from both master nodes

    [root@master ~]# kubectl get pods -o wide	'//view the pod IP and the node it runs on'
    NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
    nginx-dbddb74b8-5s6h7   1/1     Running   0          6m29s   172.17.26.2   192.168.233.132   <none>
    [root@node01 ~]# curl 172.17.26.2	'//access the pod from its node'
    [root@master ~]# kubectl logs nginx-dbddb74b8-5s6h7	'//check the logs again from the master node; master02 can view them as well'
    172.17.26.1 - - [30/Apr/2020:17:38:48 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
    
    

Thanks for reading! If you have any questions, feel free to discuss them in the comments!
