Kubernetes: Deploying a Multi-Master Binary Cluster

Contents

I. Analysis of the multi-master binary cluster

II. Lab environment

III. Deployment

Build the single-node k8s cluster

Build the master2 node

Deploy the load balancers

Point the nodes at the VIP and create a pod

Set up the k8s Dashboard


I. Analysis of the multi-master binary cluster

  • Unlike a single-master binary cluster, a multi-master cluster makes the control plane highly available: if master1 goes down, the load balancer moves the VIP to master2, so the masters stay reachable.
  • The core of a multi-master setup is a single well-known address. When building the single-node cluster we already wrote the VIP (192.168.43.100) into the k8s-cert.sh script. The VIP fronts the apiserver port, and each master accepts the nodes' apiserver requests behind it. When a new node joins, it does not contact a master directly: it sends its apiserver request to the VIP, the VIP dispatches it to one of the masters, and that master then issues the certificate to the node.

II. Lab environment

Role       IP address           OS / resources        Components
master1    192.168.43.101/24    CentOS 7.4 (2C 2G)    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master2    192.168.43.104/24    CentOS 7.4 (2C 2G)    kube-apiserver, kube-controller-manager, kube-scheduler
node1      192.168.43.102/24    CentOS 7.4 (2C 2G)    kubelet, kube-proxy, docker, flannel, etcd
node2      192.168.43.103/24    CentOS 7.4 (2C 2G)    kubelet, kube-proxy, docker, flannel, etcd
nginx_lbm  192.168.43.105/24    CentOS 7.4 (2C 2G)    nginx, keepalived
nginx_lbb  192.168.43.106/24    CentOS 7.4 (2C 2G)    nginx, keepalived
VIP        192.168.43.100/24    -                     -
  • This lab builds on the single-master deployment and adds a second master, master2.
  • nginx provides the load balancing, and keepalived makes the load balancers themselves highly available.

Note: since version 1.9, nginx ships the stream module and can do layer-4 (TCP) load balancing.

  • keepalived provides the virtual IP address for the masters; this is the address the nodes access.
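Concretely, each node-side kubeconfig names the VIP rather than any single master. A fragment of what a node's bootstrap.kubeconfig cluster entry looks like in this lab (paths and addresses taken from the environment table above):

```yaml
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.43.100:6443   # the VIP, never a master's own address
  name: kubernetes
```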

III. Deployment

Build the single-node k8s cluster

Build the master2 node

Operations on master1

  • Copy the relevant files and scripts
## Recursively copy everything under /opt/kubernetes and /opt/etcd to master2
[root@master ~]# scp -r /opt/kubernetes/ [email protected]:/opt/
The authenticity of host '192.168.43.104 (192.168.43.104)' can't be established.
ECDSA key fingerprint is SHA256:AJdR3BBN9kCSEk3AVfaZuyrxhNMoDnzGMOMWlP1gUaQ.
ECDSA key fingerprint is MD5:d4:ab:7b:82:c3:99:b8:5d:61:f2:dc:af:06:38:e7:6c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.43.104' (ECDSA) to the list of known hosts.
[email protected]'s password: 
token.csv                                                 100%   84     5.2KB/s   00:00    
kube-apiserver                                            100%  934   353.2KB/s   00:00    
kube-scheduler                                            100%   94    41.2KB/s   00:00    
kube-controller-manager                                   100%  483   231.5KB/s   00:00    
kube-apiserver                                            100%  184MB  19.4MB/s   00:09    
kubectl                                                   100%   55MB  24.4MB/s   00:02    
kube-controller-manager                                   100%  155MB  26.7MB/s   00:05    
kube-scheduler                                            100%   55MB  31.1MB/s   00:01    
ca-key.pem                                                100% 1679   126.0KB/s   00:00    
ca.pem                                                    100% 1359   514.8KB/s   00:00    
server-key.pem                                            100% 1675   501.4KB/s   00:00    
server.pem                                                100% 1643   649.4KB/s   00:00    

## master2 needs the etcd certificates, or its apiserver will not start
[root@master ~]# scp -r /opt/etcd/ [email protected]:/opt/
[email protected]'s password: 
etcd                                                      100%  516    64.2KB/s   00:00    
etcd                                                      100%   18MB  25.7MB/s   00:00    
etcdctl                                                   100%   15MB  25.9MB/s   00:00    
ca-key.pem                                                100% 1675   118.8KB/s   00:00    
ca.pem                                                    100% 1265   603.2KB/s   00:00    
server-key.pem                                            100% 1675   675.3KB/s   00:00    
server.pem                                                100% 1338   251.5KB/s   00:00    
[root@master ~]# 

## Copy the systemd service unit files to master2
[root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service [email protected]:/usr/lib/systemd/system/
[email protected]'s password: 
kube-apiserver.service                                    100%  282    30.3KB/s   00:00    
kube-controller-manager.service                           100%  317    45.9KB/s   00:00    
kube-scheduler.service                                    100%  281   151.7KB/s   00:00    

Operations on master2

  • Basic environment setup
## Change the hostname
[root@localhost ~]# hostnamectl set-hostname master2
[root@localhost ~]# su

## Permanently disable the firewall and SELinux
[root@master2 ~]# systemctl stop firewalld
[root@master2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master2 ~]# setenforce 0
[root@master2 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
  • Change the IP addresses in kube-apiserver
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vi kube-apiserver 


KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.43.101:2379,https://192.168.43.102:2379,https://192.168.43.103:2379 \
## change the bind address to master2's own address
--bind-address=192.168.43.104 \
--secure-port=6443 \
## change the advertised (externally visible) address
--advertise-address=192.168.43.104 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
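Only the two address flags differ from master1's copy of the file, so instead of editing in vi they can be patched with a targeted sed. This is a sketch with hypothetical paths: the demo writes a stand-in file so it can run anywhere, and the pattern deliberately matches only `-address=` so the etcd-servers list (which also contains master1's IP) is left untouched.

```shell
CFG=${CFG:-./kube-apiserver}   # on master2 this would be /opt/kubernetes/cfg/kube-apiserver
# demo stand-in for the file copied from master1:
printf -- '--bind-address=192.168.43.101 \\\n--advertise-address=192.168.43.101 \\\n--etcd-servers=https://192.168.43.101:2379 \\\n' > "$CFG"
# rewrite only the bind/advertise flags, not the etcd-servers list
sed -i 's/-address=192.168.43.101/-address=192.168.43.104/g' "$CFG"
grep -c '192.168.43.104' "$CFG"
```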

  • Start the services and verify
## Start the apiserver service
[root@master2 cfg]# systemctl start kube-apiserver.service 
[root@master2 cfg]# systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

## Start the controller-manager service
[root@master2 cfg]# systemctl start kube-controller-manager.service 
[root@master2 cfg]# systemctl enable kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

## Start the scheduler service
[root@master2 cfg]# systemctl start kube-scheduler.service 
[root@master2 cfg]# systemctl enable kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

## Add the kubernetes binaries to PATH
[root@master2 cfg]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
[root@master2 cfg]# source /etc/profile

## List the cluster nodes: master2 has been added successfully
[root@master2 cfg]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.43.102   Ready    <none>   26h   v1.12.3
192.168.43.103   Ready    <none>   26h   v1.12.3
[root@master2 cfg]# 

Note: a master can only be added if its address was included in server-csr.json when the single-node cluster was deployed; otherwise the generated certificate does not cover it and the node cannot be added.

Deploy the load balancers

Perform the following on both nginx_lbm and nginx_lbb; nginx_lbm is shown as the example.

  • Prepare the keepalived configuration template
## Copy keepalived.conf to nginx_lbm and nginx_lbb
[root@nginx_lbm ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  公共  模板  視頻  圖片  文檔  下載  音樂  桌面
[root@nginx_lbm ~]# cat keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0
    virtual_router_id 51 # VRRP router ID; unique per instance 
    priority 100    # priority; set to 90 on the backup 
    advert_int 1    # VRRP advertisement interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        10.0.0.188/24 
    } 
    track_script {
        check_nginx
    } 
}

[root@nginx_lbm ~]# 

  • Disable the firewall and SELinux
systemctl stop firewalld.service
setenforce 0
  • Add the nginx repo and install nginx
[root@nginx_lbm ~]# cat /etc/yum.repos.d/nginx.repo 
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

## Reload the yum repositories
[root@nginx_lbm ~]# yum list
## Install nginx
[root@nginx_lbm ~]# yum install nginx -y
  • Add the load-balancing block to the nginx configuration and start the service
## Configure load balancing
[root@nginx_lbm ~]# vi /etc/nginx/nginx.conf 
## Insert the following below line 12
     12 stream {
     13 
     14    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
     15     access_log  /var/log/nginx/k8s-access.log  main;
     16 
     17     upstream k8s-apiserver {
     18 #master1's IP address
     19         server 192.168.43.101:6443;
     20 #master2's IP address
     21         server 192.168.43.104:6443;
     22     }
     23     server {
     24                 listen 6443;
     25                 proxy_pass k8s-apiserver;
     26     }
     27     }

## Check the configuration file
[root@nginx_lbm ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

## Edit the default page so master and backup can be told apart
[root@nginx_lbm ~]# cd /usr/share/nginx/html/
[root@nginx_lbm html]# ls
50x.html  index.html
[root@nginx_lbm html]# vi index.html
<h1>master</h1> (use <h1>backup</h1> on nginx_lbb)

## Start the nginx service
[root@nginx_lbm html]# systemctl start nginx

## Install keepalived
[root@nginx_lbm html]# yum install keepalived -y
## Overwrite the keepalived configuration with the template
[root@nginx_lbm ~]# cp keepalived.conf /etc/keepalived/keepalived.conf 
cp: overwrite '/etc/keepalived/keepalived.conf'? yes

Operations on nginx_lbm

  • Configure the keepalived service
[root@nginx_lbm ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 
## script that stops keepalived when nginx is down
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER         ## MASTER on nginx_lbm
    interface ens33        ## network interface name
    virtual_router_id 51     ## VRRP router ID; unique per instance
    priority 100        ## priority 100 on the master, 90 on the backup
    advert_int 1    
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.43.100/24         ## the VIP
    } 
    track_script {
        check_nginx
    } 
}

[root@nginx_lbm ~]# vi /etc/nginx/check_nginx.sh  ## this health-check script must be created by hand
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")	
# count the running nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# if the count is 0, stop keepalived so the VIP can fail over

[root@nginx_lbm ~]# chmod +x /etc/nginx/check_nginx.sh        ## make it executable
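A standalone demo of the counting trick in check_nginx.sh: `egrep -cv "grep|$$"` counts the lines that match neither "grep" nor the script's own PID, so the grep command in the pipeline and the health-check script itself are excluded from the count. The sample text below is a hypothetical stand-in for `ps -ef` output, with a fixed PID so the demo is deterministic.

```shell
mypid=4242    # stand-in for $$, the script's own PID
sample='nginx: master process
nginx: worker process
grep nginx'
# -c counts matching lines, -v inverts the match:
# count the lines that match NEITHER "grep" nor the PID
count=$(printf '%s\n' "$sample" | egrep -cv "grep|$mypid")
echo "$count"   # 2: only the real nginx processes are counted
```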

## Start the keepalived service
[root@nginx_lbm ~]# systemctl start keepalived.service


  • Check the VIP
[root@nginx_lbm ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff

Operations on nginx_lbb

  • Configure the keepalived service
## Edit the keepalived configuration
[root@nginx_lbb ~]# vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 
 
global_defs { 
   # notification recipients 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP        ## unlike nginx_lbm, the state here is BACKUP
    interface ens33
    virtual_router_id 51        
    priority 90         ## priority 90, lower than nginx_lbm
    advert_int 1    
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.43.100/24 
    } 
    track_script {
        check_nginx
    } 
}

[root@nginx_lbb ~]# vi /etc/nginx/check_nginx.sh 
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")	

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi


[root@nginx_lbb ~]# chmod +x /etc/nginx/check_nginx.sh 


## Start the service
[root@nginx_lbb ~]# systemctl start keepalived.service 
  • Check the VIP (it stays on nginx_lbm for now)

Verify load-balancer failover

  • Stop the nginx service on nginx_lbm
## Kill nginx
[root@nginx_lbm ~]# pkill nginx

## Check the nginx and keepalived status
[root@nginx_lbm ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 三 2020-04-29 08:40:27 CST; 6s ago
     Docs: http://nginx.org/en/docs/
  Process: 4085 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)
 Main PID: 1939 (code=exited, status=0/SUCCESS)


## keepalived was stopped automatically as well, by check_nginx.sh
[root@nginx_lbm ~]# systemctl status keepalived.service 
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) Send...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
4月 29 08:40:27 nginx_lbm Keepalived[2202]: Stopping
4月 29 08:40:27 nginx_lbm systemd[1]: Stopping LVS and VRRP High Availab....
4月 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) sent...
4月 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) remo...
4月 29 08:40:28 nginx_lbm systemd[1]: Stopped LVS and VRRP High Availabi....
Hint: Some lines were ellipsized, use -l to show in full.


## The VIP is gone from the address list
[root@nginx_lbm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
  • Check the VIP on nginx_lbm and nginx_lbb

With the VIP now held by nginx_lbb, the active/standby failover is confirmed to work.

  • Restore the VIP
## Restart nginx and keepalived on nginx_lbm
[root@nginx_lbm ~]# systemctl start nginx
[root@nginx_lbm ~]# systemctl start keepalived

## Checking the addresses again shows the VIP back on nginx_lbm
[root@nginx_lbm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ba5a:8436:895c:4285/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff

Point the nodes at the VIP and create a pod

Edit the node1 and node2 configuration files

  • node2 is shown as the example
[root@node2 ~]# cd /opt/kubernetes/cfg/
[root@node2 cfg]# ls
bootstrap.kubeconfig  kubelet.config      kube-proxy.kubeconfig
flanneld              kubelet.kubeconfig
kubelet               kube-proxy
[root@node2 cfg]# vi bootstrap.kubeconfig 
server: https://192.168.43.100:6443		
# changed to the VIP address
[root@node2 cfg]# vi kubelet.kubeconfig 
server: https://192.168.43.100:6443		
# changed to the VIP address
[root@node2 cfg]# vi kube-proxy.kubeconfig 
server: https://192.168.43.100:6443		
# changed to the VIP address
[root@node2 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.43.100:6443
kubelet.kubeconfig:    server: https://192.168.43.100:6443
kube-proxy.kubeconfig:    server: https://192.168.43.100:6443


## Restart the services
[root@node2 cfg]# systemctl restart kubelet.service 
[root@node2 cfg]# systemctl restart kube-proxy.service 
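The three edits above can also be done in one pass with sed. This is a sketch: it writes demo stand-ins for the three kubeconfigs so it is runnable anywhere; on a real node you would run the loop in /opt/kubernetes/cfg and omit the echo line.

```shell
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
  # demo stand-in still pointing at master1; a real node already has these files
  echo '    server: https://192.168.43.101:6443' > "$f"
  # rewrite whatever apiserver address is there to the VIP
  sed -i 's#server: https://.*:6443#server: https://192.168.43.100:6443#' "$f"
done
grep 'server:' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
```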
  • On nginx_lbm, check the nginx log for traffic from the nodes
[root@nginx_lbm ~]# cd /var/log/nginx/
[root@nginx_lbm nginx]# ls
access.log  error.log  k8s-access.log
[root@nginx_lbm nginx]# tail -f k8s-access.log 
192.168.43.102 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 1119
192.168.43.102 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 0, 1119
192.168.43.103 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 0, 1120
192.168.43.103 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 1121

Create a pod on the master and test it

  • Create a pod
[root@master ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
  • Check the pod status
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-8qt6q   1/1     Running   0          24m
  • Bind the cluster's anonymous user to the cluster-admin role (so that pod logs can be viewed)
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
  • View the pod's logs
## Access the pod from node1, since that is where it was scheduled
[root@node1 ~]# curl 172.17.36.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@node1 ~]# 



## View the access log on the master
[root@master ~]# kubectl logs nginx-dbddb74b8-8qt6q
172.17.36.1 - - [29/Apr/2020:13:37:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
[root@master ~]# 

  • Check the pod's network
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
nginx-dbddb74b8-8qt6q   1/1     Running   0          30m   172.17.36.2   192.168.43.102   <none>

Set up the k8s Dashboard

Operations on master1

  • Upload the yaml files
## Create a dashboard directory
[root@master ~]# mkdir dashboard
[root@master ~]# cd dashboard/
## The required files
[root@master dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml
  • Create the resources in the following order:
## Run in the /root/dashboard directory

# authorize access to the API
kubectl create -f dashboard-rbac.yaml

# create the secrets
kubectl create -f dashboard-secret.yaml

# application configuration
kubectl create -f dashboard-configmap.yaml

# the controller
kubectl create -f dashboard-controller.yaml

# the service, exposing the dashboard for access
kubectl create -f dashboard-service.yaml
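The ordering above can be kept explicit with a small loop. A sketch: KUBECTL defaults to echo so it dry-runs anywhere; on the real master you would set KUBECTL=kubectl.

```shell
KUBECTL=${KUBECTL:-echo}   # set KUBECTL=kubectl on the real master
# RBAC and secrets must exist before the controller references them
for f in dashboard-rbac dashboard-secret dashboard-configmap \
         dashboard-controller dashboard-service; do
  $KUBECTL create -f "$f.yaml"
done
```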
  • Check the pod created in the kube-system namespace
[root@master dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          47s

## Find out how to access the dashboard
[root@master dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          82s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.199   <none>        443:30001/TCP   70s

From the output above, the dashboard is reachable at:

https://<node IP>:30001/

e.g. https://192.168.43.102:30001/

Accessing the dashboard requires a token, which is generated below.

  • Generate a self-signed certificate
## Certificate script (the csr json is written via a heredoc)
[root@master dashboard]# vi dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system


## Run the script, passing the directory that holds the cluster CA
[root@master dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
[root@master dashboard]# ls
dashboard-cert.sh          dashboard.csr       dashboard.pem          dashboard-service.yaml
dashboard-configmap.yaml   dashboard-csr.json  dashboard-rbac.yaml    k8s-admin.yaml
dashboard-controller.yaml  dashboard-key.pem   dashboard-secret.yaml



## Add the certificate flags to the controller yaml; mind the yaml indentation (spaces, not tabs)
[root@master dashboard]# vi dashboard-controller.yaml 
#append the following below line 47
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem


## Redeploy the controller
[root@master dashboard]# kubectl apply -f dashboard-controller.yaml 
  • Generate the login token
## Create the token
[root@master dashboard]# kubectl create -f k8s-admin.yaml 

## Find the secret that holds the token
[root@master dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-4zpgd        kubernetes.io/service-account-token   3      66s
default-token-pdn6p                kubernetes.io/service-account-token   3      39h
kubernetes-dashboard-certs         Opaque                                11     11m
kubernetes-dashboard-key-holder    Opaque                                2      15m
kubernetes-dashboard-token-4whmf   kubernetes.io/service-account-token   3      15m

## View the token and copy it
[root@master dashboard]# kubectl describe secret dashboard-admin-token-4zpgd -n kube-system
Name:         dashboard-admin-token-4zpgd
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 36095d9f-89bd-11ea-bb1a-000c29ce5f24

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNHpwZ2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzYwOTVkOWYtODliZC0xMWVhLWJiMWEtMDAwYzI5Y2U1ZjI0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.wRx71hNjdAOuaG8EPEr_yWaAmw_CF-aXwVFk7XeXwW2bzDLRh0RfQV-7nyBbw-wcPVXLbpoWNSYuHFS0vXHWGezk9ssERnErDXjE164H0lR8LkD1NekUQqB8L9jqW9oAZrZ0CkAxUIuijG14BjbAIV5wXmT1aKsK2sZTC0u-IjDcIT2UhjU3LvSL0Fzi4zyEvfl5Yf0Upx6dZ7yNpUd13ziNIP4KJ5DjWesIK-34IG106Kf6y1ehmRdW1Sg0HNvopXhFJPAhp-BkEz_SCmsf89_RDNVBTBSRWCgZdQC78B2VshbJqMRZOIV2IprBFhYKK6AeOY6exCyk1HWQRKFMRw
[root@master dashboard]# 
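Copying the token by hand is error-prone; an awk filter can print just the second field of the line beginning with "token:". The sample input below is a hypothetical stand-in for the describe output; on the real master the same filter would be appended to the kubectl command, e.g. `kubectl describe secret dashboard-admin-token-4zpgd -n kube-system | awk '/^token:/{print $2}'`.

```shell
# demo stand-in for `kubectl describe secret ... -n kube-system` output
describe_out='Name:       dashboard-admin-token-4zpgd
Type:       kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.DEMO'
# print field 2 of the line that starts with "token:"
token=$(printf '%s\n' "$describe_out" | awk '/^token:/{print $2}')
echo "$token"   # the bare token, ready to paste into the login page
```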

Log in to the dashboard

