1、Introduction to k8s
Kubernetes is Google's open-source container cluster management system. Built on Docker, it is a container scheduling service that provides resource scheduling, load balancing with failover, service registration, dynamic scaling, and related features. Kubernetes is a container cloud platform based on Docker containers, abbreviated k8s; by contrast, OpenStack is a cloud platform based on KVM virtual machines.
Official site: https://kubernetes.io/
2、Basic Architecture
2.1、master
The Kubernetes management node.
2.2、apiserver
Provides the API service; users manage the whole container cluster platform through the apiserver. The API Server is the component that talks to etcd (no other component operates etcd directly), and every interaction in a Kubernetes cluster revolves around the API Server. For example:
2.2.1、All queries against and management of the cluster go through the API
2.2.2、Modules never call each other directly
Instead, each does its share of the work by talking to the API Server, and the authentication and authorization the API Server provides keep the whole cluster secure.
2.3、scheduler
The Kubernetes scheduling service.
2.4、Replication Controllers
Replication: keeps Pods highly available.
The Replication Controller is one of the most useful features in Kubernetes. It maintains multiple replicas of a Pod: an application usually needs several Pods behind it, and the controller guarantees the replica count, so that even if the host a replica was scheduled onto fails, the same number of Pods is brought up on other hosts. A Replication Controller can create multiple Pod replicas from a "repcon" template, and it can also adopt Pods that already exist, associating with them through a label selector.
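The pieces above fit together in a manifest. The following is a minimal sketch of a ReplicationController definition; the name webapp-rc, the app: webapp label, and the image tag are invented for illustration, not taken from the text:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-rc            # hypothetical name
spec:
  replicas: 3                # desired number of Pod copies
  selector:
    app: webapp              # label selector that ties the RC to its Pods
  template:                  # the "repcon" template new Pods are created from
    metadata:
      labels:
        app: webapp          # must match the selector above
    spec:
      containers:
      - name: webapp
        image: webapp:v1.0.9
```

If a node carrying one of these Pods dies, the controller notices the count fell below replicas: 3 and schedules a replacement elsewhere.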
2.5、minion
The physical machines that actually run the containers. A Kubernetes cluster needs many minion machines to supply compute capacity.
minion [ˈmɪniən]: underling
2.6、container
A container; it runs services and programs.
2.7、Pod
In Kubernetes the smallest scheduling unit is not a bare container but an abstraction called a Pod. A Pod is the smallest deployable unit that can be created, destroyed, scheduled, and managed; it holds one container or a group of containers.
pod [pɒd]: seed pod
2.8、kube-proxy
A proxy that performs port forwarding, comparable to the load balancer in LVS-NAT mode.
The proxy solves port conflicts between identical services on the same host and provides the ability to serve traffic externally; toward the backends it uses random and round-robin load-balancing algorithms.
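As a rough illustration of the round-robin half of that behavior (a toy sketch, not kube-proxy's actual code; the endpoint addresses are invented):

```shell
# Toy round-robin selection over a service's backend endpoints.
# Each call to pick_endpoint sets $chosen to the next backend in turn.
endpoints="10.255.31.2:80 10.255.91.2:80 10.255.31.3:80"
i=0
pick_endpoint() {
    set -- $endpoints          # load the endpoint list into $1..$n
    n=$#
    idx=$(( i % n + 1 ))       # 1-based index of the next backend
    i=$(( i + 1 ))
    eval "chosen=\$$idx"
}
pick_endpoint; echo "$chosen"  # first backend
pick_endpoint; echo "$chosen"  # second backend
```

Real kube-proxy tracks this state per service and rewrites traffic with iptables rather than picking in userspace.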
2.9、etcd
etcd stores the Kubernetes configuration. Think of it as the k8s database: it holds the information about every node, the Pods, the network, and so on in the container cloud platform. On Linux the /etc directory holds configuration files, so etcd ("etc daemon") is a background service that stores configuration.
2.10、Services
Services are the outermost unit of Kubernetes. By virtualizing an access IP and a service port, they expose the Pod resources we define. In the current version this is implemented with iptables NAT forwarding, and the forwarding target is a random port generated by kube-proxy.
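For reference, a Service fronting such Pods is declared roughly like this (a sketch; the name webapp-svc and the app: webapp selector are assumptions, not from the text):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-svc       # hypothetical name
spec:
  selector:
    app: webapp          # traffic goes to Pods carrying this label
  ports:
  - port: 80             # virtual port exposed on the Service's cluster IP
    targetPort: 80       # container port that kube-proxy forwards to
```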
2.11、Labels
Labels are key/value pairs used to distinguish Pods, Services, and Replication Controllers. They are used only to express the relationships between Pods, Services, and Replication Controllers; to operate on one of these objects itself you address it by name.
2.12、Deployment
Deployment [dɪˈplɔɪmənt]: deployment
A Kubernetes Deployment is the mechanism for updating Pods and Replica Sets (the next generation of the Replication Controller). In the Deployment object you describe only the desired state, and the Deployment controller converts the current actual state into that desired state. For example, to upgrade every webapp:v1.0.9 to webapp:v1.1.0 you only create one Deployment, and Kubernetes carries out the upgrade according to it. A Deployment can also be used to create new resources. Deployments give us unattended rollouts, greatly reducing the coordination overhead and operational risk of the release process.
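The webapp:v1.0.9 → webapp:v1.1.0 upgrade described above could be expressed with a manifest along these lines (a sketch for this era of Kubernetes; the names are invented). From the user's side, the "upgrade" is just bumping the image tag and re-applying the file:

```yaml
apiVersion: extensions/v1beta1   # Deployment API group in 1.5-era Kubernetes
kind: Deployment
metadata:
  name: webapp                   # hypothetical name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:v1.1.0     # was v1.0.9; bump the tag to roll the upgrade
```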
2.13、kubelet
The kubelet and kube-proxy both run on the minion nodes.
kube-proxy implements the Kubernetes networking side.
The kubelet manages Pods, the containers inside them, and the containers' images and volumes.
2.14、Summary
2.14.1、Kubernetes architecture
One master and multiple minions. The master provides its service through the API and accepts requests from kubectl to schedule and manage the whole cluster. kubectl is the k8s platform's management command.
2.14.2、Replication controller
Defines how many pods or containers should run. If the cluster is currently running fewer pods or containers than configured, the replication controller schedules containers onto the minions to keep the cluster's pod count up.
2.14.3、service
Defines the service actually offered to the outside; one service maps to multiple backend containers.
2.14.4、Kubernetes
Kubernetes is a management platform; the kube-proxy on each minion holds the public IP that carries real traffic. Clients accessing a service provided by Kubernetes actually reach kube-proxy directly.
2.14.5、pod
In Kubernetes the pod is the basic unit. A pod can be several containers providing the same function, and those containers are deployed on the same minion. A minion is a physical machine whose containers run under the kubelet; it accepts instructions from the master to create pods and containers.
3、Environment Planning

Role   | IP address    | CPU     | Memory | OS
-------|---------------|---------|--------|-----------
master | 192.168.2.178 | 2 cores | 3 GB   | CentOS 7.4
etcd   | 192.168.2.178 | 2 cores | 3 GB   | CentOS 7.4
node1  | 192.168.2.179 | 2 cores | 3 GB   | CentOS 7.4
node2  | 192.168.2.180 | 2 cores | 3 GB   | CentOS 7.4

Note: this normally takes four machines; if you are short on memory, master and etcd can run on the same machine.
4、Environment Preparation
4.1、Configure the k8s yum repo (master)
We are using docker 1.12.
Upload k8s-package.tar.gz to the master node.
[root@master soft]# ll k8s-package.tar.gz
-rw-r--r-- 1 root root 186724113 Dec 18 19:18 k8s-package.tar.gz
[root@master soft]#
[root@master soft]# tar -xzvf k8s-package.tar.gz
[root@master soft]# ls
CentOS-7-x86_64-DVD-1708.iso k8s-package k8s-package.tar.gz
[root@master soft]# cp -R /soft/k8s-package /var/www/html/
[root@master html]# cd /var/www/html/
[root@master html]# ls
centos7.4 k8s-package
Create a repo file for the yum repository
[root@master yum.repos.d]# cat /etc/yum.repos.d/k8s-package.repo
[k8s-package]
name=k8s-package
baseurl=http://192.168.2.178/k8s-package
enabled=1
gpgcheck=0
[root@master yum.repos.d]# cat local.repo
[centos7-Server]
name=centos7-Server
baseurl=http://192.168.2.178/centos7.4
enabled=1
gpgcheck=0
[root@master yum.repos.d]#
Install the httpd service (with a local yum repo you set up yourself)
yum -y install httpd
Start the httpd service
systemctl start httpd
systemctl enable httpd
Create the mount directory under httpd
mkdir /var/www/html/centos7.4
Mount the ISO into the httpd tree
[root@master soft]# mount -o loop CentOS-7-x86_64-DVD-1708.iso /var/www/html/centos7.4/
mount: /soft/CentOS-7-x86_64-DVD-1708.iso is already mounted
Clean the yum cache
[root@master html]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: centos7-Server k8s-package
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
Cleaning up list of fastest mirrors
[root@master html]# yum makecache
Loaded plugins: fastestmirror
centos7-Server | 3.6 kB 00:00:00
k8s-package | 2.9 kB 00:00:00
(1/7): centos7-Server/group_gz | 156 kB 00:00:00
(2/7): centos7-Server/primary_db | 3.1 MB 00:00:00
(3/7): centos7-Server/filelists_db | 3.1 MB 00:00:00
(4/7): centos7-Server/other_db | 1.2 MB 00:00:00
(5/7): k8s-package/filelists_db | 14 kB 00:00:00
(6/7): k8s-package/other_db | 17 kB 00:00:00
(7/7): k8s-package/primary_db | 32 kB 00:00:00
Determining fastest mirrors
Metadata Cache Created
[root@master html]#
4.2、Configure the yum repos (node1 and node2)
Copy the repo files from master to the other two nodes
[root@master html]# scp /etc/yum.repos.d/local.repo /etc/yum.repos.d/k8s-package.repo 192.168.2.179:/etc/yum.repos.d/
[email protected]'s password:
local.repo 100% 98 39.7KB/s 00:00
k8s-package.repo 100% 93 26.1KB/s 00:00
[root@master html]# scp /etc/yum.repos.d/local.repo /etc/yum.repos.d/k8s-package.repo 192.168.2.180:/etc/yum.repos.d/
[email protected]'s password:
local.repo 100% 98 15.9KB/s 00:00
k8s-package.repo 100% 93 36.0KB/s 00:00
[root@master html]#
Generate the yum cache on the other two nodes
yum clean all
yum makecache
4.3、Hostname Planning
Configure the hostnames:

IP            | Hostname
--------------|---------
192.168.2.178 | master
192.168.2.179 | node1
192.168.2.180 | node2
[root@master html]# cat /etc/hostname
Master
[root@node1 yum.repos.d]# cat /etc/hostname
node1
[root@node2 ~]# cat /etc/hostname
node2
4.4、Configure /etc/hosts
Add the three host entries below to /etc/hosts on all three machines.
[root@master html]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.178 master etcd
192.168.2.179 node1
192.168.2.180 node2
4.5、Configure time synchronization
4.5.1、Configure the chrony server (master)
We use master as the time server; the other two machines sync their time from it.
Configure master.
Check for the chrony package
[root@master html]# rpm -qa |grep chrony
[root@master html]#
Install the chrony package
[root@master html]# yum -y install chrony
Edit the chrony configuration file
[root@master html]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
local stratum 8
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
The `allow 192.168.0.0/16` and `local stratum 8` lines above are the modified/added content.
Restart the chrony service
[root@master html]# systemctl restart chronyd
[root@master html]# systemctl enable chronyd
Check that port 123 is listening
[root@master html]# netstat -an |grep 123
udp 0 0 0.0.0.0:123 0.0.0.0:*
unix 2 [ ACC ] STREAM LISTENING 23123 private/verify
4.5.2、Configure the chrony clients (on both node machines)
Check whether chrony is installed
[root@node1 yum.repos.d]# rpm -qa|grep chrony
[root@node1 yum.repos.d]#
Install chrony
[root@node1 yum.repos.d]# yum -y install chrony
Edit the chrony configuration file
[root@node1 yum.repos.d]# vi /etc/chrony.conf
Comment out the following lines:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
Add the following line, pointing at master's IP address:
server 192.168.2.178 iburst
Restart the chrony service
[root@node1 yum.repos.d]# systemctl restart chronyd
[root@node1 yum.repos.d]# systemctl enable chronyd
Verify synchronization
[root@node1 yum.repos.d]# timedatectl
Local time: Sat 2019-12-21 12:19:49 CST
Universal time: Sat 2019-12-21 04:19:49 UTC
RTC time: Sat 2019-12-21 04:19:49
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a
[root@node1 yum.repos.d]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* master 8 6 37 8 +562ns[+1331us] +/- 589us
[root@node1 yum.repos.d]#
4.6、Disable the firewall
Run this on every node
[root@master html]# systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
4.7、Disable SELinux
SELinux must be disabled on every node
[root@master html]# cat /etc/selinux/config
Change it to:
SELINUX=disabled
5、Install k8s
5.1、Packages
5.1.1、Packages on the master node
Install the following packages on the master node
[root@master html]# yum install -y kubernetes etcd flannel
Note: Flannel gives Docker a configurable virtual overlay network, so containers on different physical machines can reach each other directly.
1、Flannel runs an agent on every host.
2、flanneld allocates subnet leases out of a preconfigured address space; Flannel stores the network configuration in etcd.
3、chrony: keeps the clocks of all nodes in the container cloud platform in sync; the nodes' time must agree.
4、kubernetes: contains both the server-side and client-side packages.
5、etcd: the package for the etcd service.
5.1.2、Packages on the node machines
Install the following packages on both nodes
[root@node1 yum.repos.d]# yum install kubernetes flannel -y
# install the same packages on every minion
5.2、Configure the etcd server
5.2.1、Edit the configuration file /etc/etcd/etcd.conf
Per the plan, the etcd service runs on the master node
vi /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.2.178:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.178:2379"
5.2.2、Configuration notes
Of the two lines above, the first is added and the second is modified; point both at your own master's IP.
The meaning of /etc/etcd/etcd.conf is as follows:
ETCD_NAME="default"
The etcd node name. With a single-etcd cluster this entry can stay commented out; the default name is default, and the name is used again later. We keep the default here.
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
The directory where etcd stores its data; we also keep the default.
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.2.178:2379"
The addresses etcd listens on for client traffic. The default port is 2379 and we do not change it; 0.0.0.0 would listen on every interface.
ETCD_ARGS=""
Extra arguments you may add yourself; etcd -h lists all of them.
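Putting the notes above together, the relevant part of /etc/etcd/etcd.conf for this single-node setup ends up looking like this (a sketch with defaults kept except for the two URL lines; substitute your own master IP):

```shell
# /etc/etcd/etcd.conf (single-etcd sketch)
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.2.178:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.178:2379"
```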
5.2.3、Start the etcd service
[root@master html]# systemctl start etcd
[root@master html]# systemctl enable etcd
5.2.4、Check etcd's communication port
[root@master html]# netstat -antup | grep 2379
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 982/etcd
tcp 0 0 192.168.2.178:2379 0.0.0.0:* LISTEN 982/etcd
tcp 0 0 192.168.2.178:60368 192.168.2.178:2379 ESTABLISHED 982/etcd
tcp 0 0 127.0.0.1:37860 127.0.0.1:2379 ESTABLISHED 982/etcd
tcp 0 0 127.0.0.1:2379 127.0.0.1:37860 ESTABLISHED 982/etcd
tcp 0 0 192.168.2.178:2379 192.168.2.178:60368 ESTABLISHED 982/etcd
5.2.5、Check the etcd member list
[root@master html]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.2.178:2379 isLeader=true
5.2.6、Check the etcd cluster health
[root@master html]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.2.178:2379
cluster is healthy
The etcd node is now up.
5.3、Configure the master services
5.3.1、Edit the k8s config file
[root@master html]# vi /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.2.178:8080"
The KUBE_MASTER line is the one modified.
It tells the components that the master listens on IP 192.168.2.178, port 8080.
5.3.2、config file notes
Note: the meaning of /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true" # whether errors are logged to a file or written to stderr.
KUBE_LOG_LEVEL="--v=0" # log level.
KUBE_ALLOW_PRIV="--allow_privileged=false" # whether privileged containers are allowed; false means they are not.
5.3.3、Edit the apiserver config file
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.178:2379"
The two lines above are the ones modified.
Comment out the following line:
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
and add the following in its place:
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
5.3.4、apiserver config notes
Note: the meaning of /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
## The interface to listen on; 127.0.0.1 listens only on localhost, 0.0.0.0 on all interfaces. We use 0.0.0.0 here.
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.178:2379"
# The etcd endpoint; we started the etcd service earlier.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# The IP range Kubernetes may allocate from; every pod and service Kubernetes starts is assigned an IP out of this range.
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
# No restrictions: every node may access the apiserver, and every request gets a green light.
admission [ədˈmɪʃn]: admission, permission to enter; admit [ədˈmɪt]: to admit
Background:
admission-control overview: an admission controller is essentially a piece of code in the kubernetes API request path. A request first passes authentication and authorization, then the admission step runs, and finally the operation is performed on the target object.
5.3.5、The kube-controller-manager config file
cat /etc/kubernetes/controller-manager
The defaults in this file are sufficient; no changes are needed.
5.3.6、Edit the scheduler config file
[root@master html]# vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
# Change the address the scheduler listens on to 0.0.0.0; the default is 127.0.0.1.
5.3.7、Using the etcdctl command
etcdctl is a command-line client for the etcd key-value store. It offers concise commands for talking to the etcd database directly.
Its commands fall roughly into two groups: database operations and non-database operations. The database operations revolve around the full CRUD lifecycle of keys and directories. Note: CRUD = Create, Read, Update, Delete.
etcd organizes keys in a hierarchical keyspace (similar to directories in a file system). A user-specified key can be a bare name such as testkey, which then lives under the root directory /, or a path such as cluster1/node2/testkey, in which case the corresponding directory structure is created.
5.3.7.1、etcdctl database operations
5.3.7.1.1、set
Set the value of a key.
For example:
[root@master html]# etcdctl set mk "shen"
shen
5.3.7.1.2、get
[root@master html]# etcdctl get mk
shen
[root@master html]#
[root@master html]# etcdctl set /testdir/testkey "hello world"
hello world
[root@master html]# etcdctl get /testdir/testkey
hello world
[root@master html]#
5.3.7.1.3、update
Update the value of a key that already exists.
[root@master html]# etcdctl update /testdir/testkey aaaa
aaaa
[root@master html]# etcdctl get /testdir/testkey
aaaa
[root@master html]#
5.3.7.1.4、rm
Delete a key.
[root@master html]# etcdctl rm mk
PrevNode.Value: shen
[root@master html]# etcdctl rm /testdir/testkey
PrevNode.Value: aaaa
[root@master html]#
5.3.7.1.5、Difference between etcdctl mk and etcdctl set
etcdctl mk creates a new key only if the given key does not exist; if it already exists, the command errors out and nothing is created. etcdctl set writes the key whether or not it already exists.
[root@master html]# etcdctl mk /testdir/testkey "Hello world"
Hello world
[root@master html]# etcdctl mk /testdir/testkey "bbbb"
Error: 105: Key already exists (/testdir/testkey) [728]
[root@master html]#
5.3.7.1.6、mkdir
Create a directory
[root@master html]# etcdctl mkdir testdir1
[root@master html]#
5.3.7.1.7、ls
List the keys and subdirectories under a directory (the root directory by default); the contents of subdirectories are not shown.
[root@master html]# etcdctl ls
/registry
/testdir1
/testdir
/k8s
[root@master html]#
[root@master html]# etcdctl ls /testdir
/testdir/testkey
[root@master html]#
5.3.7.2、etcdctl non-database operations
etcdctl member takes the subcommands list, add, and remove, which respectively list the etcd instances in the cluster, add one, and remove one.
5.3.7.2.1、list
For example, with a local etcd instance running, you can inspect it as follows.
[root@master html]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.2.178:2379 isLeader=true
[root@master html]#
5.3.7.2.2、add
We have only one etcd server here, so we skip add.
5.3.7.2.3、remove
We have only one etcd server here, so we skip remove as well.
5.3.8、Set up the etcd network
5.3.8.1、Create the directory
[root@master html]# etcdctl mkdir /k8s/network
Error: 105: Key already exists (/k8s/network) [729]
[root@master html]#
This creates a directory /k8s/network for storing the flannel network information.
I had already created it earlier, hence the error; that is fine. We need this directory, and it is used later.
5.3.8.2、Set the value
[root@master html]# etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
This gives /k8s/network/config the string value '{"Network": "10.255.0.0/16"}', which is used later on.
Check the value we just set:
[root@master html]# etcdctl get /k8s/network/config
{"Network":"10.255.0.0/16"}
Note: before starting flannel, a network configuration record must be added to etcd. Flannel uses it as the virtual IP range it hands out to each docker; it configures the docker IP addresses on the minions.
Because flannel overrides the address on docker0, the flannel service must start before the docker service. If docker is already running, stop docker first, then start flannel, then start docker.
5.9、Configure the flanneld service
5.9.1、What flanneld does at startup
It fetches the value of /k8s/network/config from etcd,
carves out a subnet and registers it in etcd,
and records the subnet information in /run/flannel/subnet.env.
5.9.2、Configure the flanneld service
vi /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.2.178:2379"
FLANNEL_ETCD_PREFIX="/k8s/network"
FLANNEL_OPTIONS="--iface=ens33"
These specify the etcd endpoint, the etcd key prefix for the network, and the physical NIC to communicate over.
You can find the physical NIC with ifconfig -a:
[root@master html]# ifconfig -a
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.178 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::60d3:d739:1f01:9258 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:d8:be:5b txqueuelen 1000 (Ethernet)
RX packets 4618 bytes 421413 (411.5 KiB)
RX errors 0 dropped 2 overruns 0 frame 0
TX packets 13473 bytes 17541537 (16.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
5.9.3、Start the flanneld service
[root@master html]# systemctl start flanneld
[root@master html]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@master html]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 18:35:02 CST; 20s ago
Main PID: 12413 (flanneld)
CGroup: /system.slice/flanneld.service
└─12413 /usr/bin/flanneld -etcd-endpoints=http://192.168.2.178:2379 -etcd-prefix=/k8s/network --iface...
Dec 21 18:35:02 master systemd[1]: Starting Flanneld overlay address etcd agent...
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.719599 12413 main.go:132] Installing signal handlers
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.720139 12413 manager.go:149] Using interface ....178
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.720164 12413 manager.go:166] Defaulting exter...178)
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.722622 12413 local_manager.go:179] Picking su...55.0
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.741850 12413 manager.go:250] Lease acquired: ...0/24
Dec 21 18:35:02 master flanneld-start[12413]: I1221 18:35:02.742097 12413 network.go:98] Watching for new ...ases
Dec 21 18:35:02 master systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master html]#
5.9.4、Check the flanneld NIC
[root@master html]# ifconfig -a
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.178 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::60d3:d739:1f01:9258 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:d8:be:5b txqueuelen 1000 (Ethernet)
RX packets 4856 bytes 446523 (436.0 KiB)
RX errors 0 dropped 2 overruns 0 frame 0
TX packets 13605 bytes 17565375 (16.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.91.0 netmask 255.255.0.0 destination 10.255.91.0
inet6 fe80::cc29:5eca:e542:e3fa prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
5.9.5、Check the subnet information
[root@master html]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.91.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@master html]#
A script then transcribes subnet.env into a Docker environment file, /run/flannel/docker. The docker0 address is determined by the FLANNEL_SUBNET parameter in /run/flannel/subnet.env.
[root@master html]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.255.91.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.91.1/24 --ip-masq=true --mtu=1472"
[root@master html]#
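That transcription step can be approximated like this (a sketch of what the helper does, not flannel's actual code; flannel ships its own mk-docker-opts helper for this, and the sample values are the ones from subnet.env above):

```shell
# Re-derive Docker's network options from a flannel subnet.env.
subnet_env=$(mktemp)
cat > "$subnet_env" <<'EOF'
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.91.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

. "$subnet_env"                 # pull the FLANNEL_* variables into the shell

# Docker masquerades exactly when flannel itself does not.
if [ "$FLANNEL_IPMASQ" = "false" ]; then docker_masq=true; else docker_masq=false; fi

DOCKER_NETWORK_OPTIONS="--bip=${FLANNEL_SUBNET} --ip-masq=${docker_masq} --mtu=${FLANNEL_MTU}"
echo "$DOCKER_NETWORK_OPTIONS"
rm -f "$subnet_env"
```

This reproduces the DOCKER_NETWORK_OPTIONS line seen in /run/flannel/docker above, which the docker systemd unit picks up via its flannel.conf drop-in.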
5.10、Start the kube-apiserver service on master
[root@master html]# systemctl restart kube-apiserver
[root@master html]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master html]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 18:40:02 CST; 32s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 12495 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─12495 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://192.168.2.178:2379 --i...
5.11、Start the kube-controller-manager service on master
[root@master html]# systemctl restart kube-controller-manager
[root@master html]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master html]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 18:41:47 CST; 13s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 12527 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─12527 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.2.178:8080
5.12、Start the kube-scheduler service on master
[root@master html]# systemctl restart kube-scheduler
[root@master html]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master html]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 18:43:09 CST; 13s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 12559 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─12559 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.2.178:8080 --address...
The etcd and master nodes are now fully up.
5.4、Configure node1 as a minion
5.4.1、Configure flanneld on node1
5.4.1.1、Copy the flanneld config file from master
Run scp directly on node1 to copy the file over from master
[root@node1 yum.repos.d]# scp 192.168.2.178:/etc/sysconfig/flanneld /etc/sysconfig/flanneld
The authenticity of host '192.168.2.178 (192.168.2.178)' can't be established.
ECDSA key fingerprint is SHA256:Izdvj8GeIvx0CwRk7VfCqwhWtkpLFkmmFNyxXbRwlZQ.
ECDSA key fingerprint is MD5:37:34:aa:ec:34:95:cd:1a:a0:ce:4a:38:d0:f9:87:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.178' (ECDSA) to the list of known hosts.
[email protected]'s password:
flanneld 100% 412 38.3KB/s 00:00
5.4.1.2、Start the flanneld service on node1
[root@node1 yum.repos.d]# systemctl restart flanneld
[root@node1 yum.repos.d]# systemctl enable flanneld
[root@node1 yum.repos.d]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:03:05 CST; 35s ago
Main PID: 2398 (flanneld)
CGroup: /system.slice/flanneld.service
└─2398 /usr/bin/flanneld -etcd-endpoints=http://192.168.2.178:2379 -etcd-prefix=/k8s/network --iface=...
Dec 21 19:03:05 node1 systemd[1]: Starting Flanneld overlay address etcd agent...
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.372411 2398 main.go:132] Installing signal handlers
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.372712 2398 manager.go:149] Using interface w...2.179
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.372724 2398 manager.go:166] Defaulting extern....179)
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.376068 2398 local_manager.go:179] Picking sub...255.0
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.382847 2398 manager.go:250] Lease acquired: 1....0/24
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.384297 2398 network.go:98] Watching for new s...eases
Dec 21 19:03:05 node1 flanneld-start[2398]: I1221 19:03:05.392479 2398 network.go:191] Subnet added: 10.....0/24
Dec 21 19:03:05 node1 systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
5.4.2、Configure the master address on node1
[root@node1 yum.repos.d]# vi /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.2.178:8080"
The KUBE_MASTER line carries the master's address.
5.4.3、Configure the kube-proxy service on node1
kube-proxy's main job is to implement services: concretely, the path from pods to a service inside the cluster.
[root@node1 yum.repos.d]# grep -v '^#' /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
[root@node1 yum.repos.d]#
No change is needed; by default it listens on all IPs.
Note: if a service fails to start, you can watch the log with tail -f /var/log/messages.
5.4.4、Configure the kubelet on node1
The kubelet runs on the minion nodes. It manages Pods, the containers inside them, and the containers' images and volumes.
[root@node1 yum.repos.d]# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
#The default listens only on 127.0.0.1; change it to 0.0.0.0, because later kubectl will connect to the kubelet service to inspect the state of pods and the containers inside them. With 127.0.0.1 that remote connection is impossible.
KUBELET_HOSTNAME="--hostname-override=node1"
# The minion's hostname; set it to the machine's own hostname for easy identification.
KUBELET_API_SERVER="--api-servers=http://192.168.2.178:8080"
#Specify the apiserver address
Background: the meaning of line 17 of the file:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER specifies the pod infrastructure container image. This is a base container; one is started alongside every Pod. If the image is not present locally, the kubelet reaches out to the network to download it. Originally it lived on Google's registry, so in China the GFW made it undownloadable and Pods could not start; current Kubernetes versions point this at Red Hat's registry instead. You can also push the image to your own registry in advance and point this parameter at your own image link.
Note: https://access.redhat.com/containers/ is Red Hat's container download site
5.4.5、Start all services on node1
[root@node1 yum.repos.d]# systemctl restart flanneld kube-proxy kubelet docker
[root@node1 yum.repos.d]# systemctl enable flanneld kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@node1 yum.repos.d]# systemctl status flanneld kube-proxy kubelet docker
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:16:57 CST; 20s ago
Main PID: 2485 (flanneld)
CGroup: /system.slice/flanneld.service
└─2485 /usr/bin/flanneld -etcd-endpoints=http://192.168.2.178:2379 -etcd-prefix=/k8s/network --iface=ens33
Dec 21 19:16:56 node1 systemd[1]: Starting Flanneld overlay address etcd agent...
Dec 21 19:16:56 node1 flanneld-start[2485]: I1221 19:16:56.894429 2485 main.go:132] Installing signal handlers
Dec 21 19:16:56 node1 flanneld-start[2485]: I1221 19:16:56.896023 2485 manager.go:149] Using interface with name ens....2.179
Dec 21 19:16:56 node1 flanneld-start[2485]: I1221 19:16:56.896048 2485 manager.go:166] Defaulting external address t...2.179)
Dec 21 19:16:57 node1 flanneld-start[2485]: I1221 19:16:57.041167 2485 local_manager.go:134] Found lease (10.255.31....eusing
Dec 21 19:16:57 node1 flanneld-start[2485]: I1221 19:16:57.047416 2485 manager.go:250] Lease acquired: 10.255.31.0/24
Dec 21 19:16:57 node1 flanneld-start[2485]: I1221 19:16:57.052767 2485 network.go:98] Watching for new subnet leases
Dec 21 19:16:57 node1 flanneld-start[2485]: I1221 19:16:57.156192 2485 network.go:191] Subnet added: 10.255.91.0/24
Dec 21 19:16:57 node1 systemd[1]: Started Flanneld overlay address etcd agent.
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:16:56 CST; 20s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2482 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─2482 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.2.178:8080
Dec 21 19:16:57 node1 kube-proxy[2482]: E1221 19:16:57.979426 2482 server.go:421] Can't get Node "node1", assuming i... found
Dec 21 19:16:57 node1 kube-proxy[2482]: I1221 19:16:57.988178 2482 server.go:215] Using iptables Proxier.
Dec 21 19:16:57 node1 kube-proxy[2482]: W1221 19:16:57.991189 2482 server.go:468] Failed to retrieve node info: node... found
Dec 21 19:16:57 node1 kube-proxy[2482]: W1221 19:16:57.991276 2482 proxier.go:248] invalid nodeIP, initialize kube-p...nodeIP
Dec 21 19:16:57 node1 kube-proxy[2482]: W1221 19:16:57.991284 2482 proxier.go:253] clusterCIDR not specified, unable...raffic
Dec 21 19:16:57 node1 kube-proxy[2482]: I1221 19:16:57.991295 2482 server.go:227] Tearing down userspace rules.
Dec 21 19:16:58 node1 kube-proxy[2482]: I1221 19:16:58.159672 2482 conntrack.go:81] Set sysctl 'net/netfilter/nf_con...131072
Dec 21 19:16:58 node1 kube-proxy[2482]: I1221 19:16:58.160025 2482 conntrack.go:66] Setting conntrack hashsize to 32768
Dec 21 19:16:58 node1 kube-proxy[2482]: I1221 19:16:58.167384 2482 conntrack.go:81] Set sysctl 'net/netfilter/nf_con... 86400
Dec 21 19:16:58 node1 kube-proxy[2482]: I1221 19:16:58.167418 2482 conntrack.go:81] Set sysctl 'net/netfilter/nf_con...o 3600
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:16:59 CST; 17s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2749 (kubelet)
CGroup: /system.slice/kubelet.service
├─2749 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.2.178:8080 --address=0.0.0.0 --hostn...
└─2777 journalctl -k -f
Dec 21 19:17:00 node1 kubelet[2749]: W1221 19:17:00.302581 2749 manager.go:247] Registration of the rkt container f...refused
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.302604 2749 factory.go:54] Registering systemd factory
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.302763 2749 factory.go:86] Registering Raw factory
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.302937 2749 manager.go:1106] Started watching for new ooms in manager
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.305309 2749 oomparser.go:185] oomparser using systemd
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.305820 2749 manager.go:288] Starting recovery of all containers
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.402033 2749 manager.go:293] Recovery completed
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.403468 2749 kubelet_node_status.go:227] Setting node annotation.../detach
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.418293 2749 kubelet_node_status.go:74] Attempting to register node node1
Dec 21 19:17:00 node1 kubelet[2749]: I1221 19:17:00.422640 2749 kubelet_node_status.go:77] Successfully registered node node1
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: active (running) since Sat 2019-12-21 19:16:59 CST; 17s ago
Docs: http://docs.docker.com
Main PID: 2590 (dockerd-current)
CGroup: /system.slice/docker.service
├─2590 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtim...
└─2603 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim do...
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.594451020+08:00" level=info msg="devmapper: Succ...-base"
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.631433746+08:00" level=warning msg="Docker could...ystem"
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.651862493+08:00" level=info msg="Graph migration...conds"
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.652393784+08:00" level=info msg="Loading contain...tart."
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.690533063+08:00" level=info msg="Firewalld runni...false"
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.862388053+08:00" level=info msg="Loading contain...done."
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.862480340+08:00" level=info msg="Daemon has comp...ation"
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.862493915+08:00" level=info msg="Docker daemon" ...1.12.6
Dec 21 19:16:59 node1 systemd[1]: Started Docker Application Container Engine.
Dec 21 19:16:59 node1 dockerd-current[2590]: time="2019-12-21T19:16:59.883346725+08:00" level=info msg="API listen on /....sock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@node1 yum.repos.d]#
All four services report an active (running) status, which confirms they all started successfully.
5.4.6、Check the network interfaces
[root@node1 yum.repos.d]# ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.31.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether 02:42:c8:81:a9:a3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.179 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::ab54:d555:c844:3d6b prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:da:c8:bd txqueuelen 1000 (Ethernet)
RX packets 9438 bytes 9035501 (8.6 MiB)
RX errors 0 dropped 3 overruns 0 frame 0
TX packets 2039 bytes 323350 (315.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.31.0 netmask 255.255.0.0 destination 10.255.31.0
inet6 fe80::79de:95ca:6514:fbe4 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
5.4.7、Check kube-proxy
[root@node1 yum.repos.d]# netstat -antup | grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2482/kube-proxy
tcp 0 0 192.168.2.179:35532 192.168.2.178:8080 ESTABLISHED 2482/kube-proxy
tcp 0 0 192.168.2.179:35534 192.168.2.178:8080 ESTABLISHED 2482/kube-proxy
[root@node1 yum.repos.d]#
At this point, the node1 minion is fully configured.
5.5、Configure the node2 minion
All minions share the same configuration, so we can simply copy node1's configuration files to node2 and start the services.
5.5.1、Copy the configuration files
Run the copy from node1:
[root@node1 yum.repos.d]# scp /etc/sysconfig/flanneld 192.168.2.180:/etc/sysconfig/
[root@node1 yum.repos.d]# scp /etc/kubernetes/config 192.168.2.180:/etc/kubernetes/
[root@node1 yum.repos.d]# scp /etc/kubernetes/proxy 192.168.2.180:/etc/kubernetes/
[root@node1 yum.repos.d]# scp /etc/kubernetes/kubelet 192.168.2.180:/etc/kubernetes/
5.5.2、Adjust the configuration file
Note that one value must be changed:
[root@node2 ~]# vi /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=node2"
Change this to the hostname of minion 2.
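Since only this one line differs between nodes, the change can also be scripted. A minimal sketch, using a temporary demo file rather than the real /etc/kubernetes/kubelet:

```shell
# Demo file standing in for /etc/kubernetes/kubelet (same key as above).
echo 'KUBELET_HOSTNAME="--hostname-override=node1"' > /tmp/kubelet.demo
# Rewrite the hostname override in place for node2.
sed -i 's/--hostname-override=node1/--hostname-override=node2/' /tmp/kubelet.demo
cat /tmp/kubelet.demo
```

The same sed line run on node2 against the copied file would save the manual edit.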
5.5.3、Start all services on node2
[root@node2 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@node2 ~]# systemctl enable flanneld kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
5.5.4、Check all services on node2
[root@node2 ~]# systemctl status flanneld kube-proxy kubelet docker
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:39:06 CST; 1min 25s ago
Main PID: 2403 (flanneld)
CGroup: /system.slice/flanneld.service
└─2403 /usr/bin/flanneld -etcd-endpoints=http://192.168.2.178:2379 -etcd-prefix=/k8s/network --iface=ens33
Dec 21 19:39:06 node2 systemd[1]: Starting Flanneld overlay address etcd agent...
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.735912 2403 main.go:132] Installing signal handlers
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.738072 2403 manager.go:149] Using interface with name ens....2.180
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.738101 2403 manager.go:166] Defaulting external address t...2.180)
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.786030 2403 local_manager.go:179] Picking subnet in range....255.0
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.805631 2403 manager.go:250] Lease acquired: 10.255.73.0/24
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.813400 2403 network.go:98] Watching for new subnet leases
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.821537 2403 network.go:191] Subnet added: 10.255.91.0/24
Dec 21 19:39:06 node2 flanneld-start[2403]: I1221 19:39:06.821555 2403 network.go:191] Subnet added: 10.255.31.0/24
Dec 21 19:39:06 node2 systemd[1]: Started Flanneld overlay address etcd agent.
● kube-proxy.service - Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:39:06 CST; 1min 26s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2404 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─2404 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.2.178:8080
Dec 21 19:39:07 node2 kube-proxy[2404]: E1221 19:39:07.546923 2404 server.go:421] Can't get Node "node2", assuming i... found
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.548664 2404 server.go:215] Using iptables Proxier.
Dec 21 19:39:07 node2 kube-proxy[2404]: W1221 19:39:07.550017 2404 server.go:468] Failed to retrieve node info: node... found
Dec 21 19:39:07 node2 kube-proxy[2404]: W1221 19:39:07.550089 2404 proxier.go:248] invalid nodeIP, initialize kube-p...nodeIP
Dec 21 19:39:07 node2 kube-proxy[2404]: W1221 19:39:07.550098 2404 proxier.go:253] clusterCIDR not specified, unable...raffic
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.550110 2404 server.go:227] Tearing down userspace rules.
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.689675 2404 conntrack.go:81] Set sysctl 'net/netfilter/nf_con...131072
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.689901 2404 conntrack.go:66] Setting conntrack hashsize to 32768
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.689987 2404 conntrack.go:81] Set sysctl 'net/netfilter/nf_con... 86400
Dec 21 19:39:07 node2 kube-proxy[2404]: I1221 19:39:07.689998 2404 conntrack.go:81] Set sysctl 'net/netfilter/nf_con...o 3600
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-21 19:39:09 CST; 1min 23s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2660 (kubelet)
CGroup: /system.slice/kubelet.service
├─2660 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.2.178:8080 --address=0.0.0.0 --hostn...
└─2688 journalctl -k -f
Dec 21 19:39:09 node2 kubelet[2660]: W1221 19:39:09.890184 2660 manager.go:247] Registration of the rkt container f...refused
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.890230 2660 factory.go:54] Registering systemd factory
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.890446 2660 factory.go:86] Registering Raw factory
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.890648 2660 manager.go:1106] Started watching for new ooms in manager
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.895816 2660 oomparser.go:185] oomparser using systemd
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.896338 2660 manager.go:288] Starting recovery of all containers
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.982917 2660 manager.go:293] Recovery completed
Dec 21 19:39:09 node2 kubelet[2660]: I1221 19:39:09.998995 2660 kubelet_node_status.go:227] Setting node annotation.../detach
Dec 21 19:39:10 node2 kubelet[2660]: I1221 19:39:10.000193 2660 kubelet_node_status.go:74] Attempting to register node node2
Dec 21 19:39:10 node2 kubelet[2660]: I1221 19:39:10.004958 2660 kubelet_node_status.go:77] Successfully registered node node2
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: active (running) since Sat 2019-12-21 19:39:09 CST; 1min 23s ago
Docs: http://docs.docker.com
Main PID: 2496 (dockerd-current)
CGroup: /system.slice/docker.service
├─2496 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtim...
└─2512 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim do...
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.075412800+08:00" level=info msg="devmapper: Succ...-base"
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.111063008+08:00" level=warning msg="Docker could...ystem"
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.129224027+08:00" level=info msg="Graph migration...conds"
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.130233548+08:00" level=info msg="Loading contain...tart."
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.158633597+08:00" level=info msg="Firewalld runni...false"
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.232333949+08:00" level=info msg="Loading contain...done."
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.232422260+08:00" level=info msg="Daemon has comp...ation"
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.232432256+08:00" level=info msg="Docker daemon" ...1.12.6
Dec 21 19:39:09 node2 dockerd-current[2496]: time="2019-12-21T19:39:09.249587277+08:00" level=info msg="API listen on /....sock"
Dec 21 19:39:09 node2 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@node2 ~]#
5.5.5、Check the network interfaces
[root@node2 ~]# ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.73.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether 02:42:48:7f:f8:3c txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.180 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::950e:ef22:9f8d:fed6 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:4e:26:64 txqueuelen 1000 (Ethernet)
RX packets 8491 bytes 8987676 (8.5 MiB)
RX errors 0 dropped 3 overruns 0 frame 0
TX packets 1363 bytes 263495 (257.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.73.0 netmask 255.255.0.0 destination 10.255.73.0
inet6 fe80::e105:ef44:47f9:b20d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Note: the flannel0 interface is created by the flanneld service.
The docker0 address above is derived from the following file:
[root@node2 ~]# cat /run/flannel/
docker subnet.env
[root@node2 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.73.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@node2 ~]#
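To see how those values feed docker0, the file can be sourced like any shell environment file. A sketch that recreates it under /tmp (the real file is written by flanneld to /run/flannel/subnet.env, and the docker systemd drop-in consumes it the same way):

```shell
# Recreate the subnet.env shown above in a scratch location.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.73.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF
# Source the variables and print the options the docker daemon
# would be started with: docker0 gets FLANNEL_SUBNET as its bridge IP.
. /tmp/subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```

This is why docker0 shows 10.255.73.1 with MTU considerations matching flannel0.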
5.5.6、Check kube-proxy
[root@node2 ~]# netstat -antup | grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2404/kube-proxy
tcp 0 0 192.168.2.180:58772 192.168.2.178:8080 ESTABLISHED 2404/kube-proxy
tcp 0 0 192.168.2.180:58770 192.168.2.178:8080 ESTABLISHED 2404/kube-proxy
[root@node2 ~]#
kube-proxy listens on port 10249 and communicates with the master over port 8080.
[root@node2 ~]# netstat -antup | grep 8080
tcp 0 0 192.168.2.180:58772 192.168.2.178:8080 ESTABLISHED 2404/kube-proxy
tcp 0 0 192.168.2.180:58770 192.168.2.178:8080 ESTABLISHED 2404/kube-proxy
tcp 0 0 192.168.2.180:58792 192.168.2.178:8080 ESTABLISHED 2660/kubelet
tcp 0 0 192.168.2.180:58784 192.168.2.178:8080 ESTABLISHED 2660/kubelet
tcp 0 0 192.168.2.180:58786 192.168.2.178:8080 ESTABLISHED 2660/kubelet
tcp 0 0 192.168.2.180:58782 192.168.2.178:8080 ESTABLISHED 2660/kubelet
Check whether the kubelet connections are established; here we can see they are.
5.6、Check the overall k8s cluster status
[root@master html]# kubectl get nodes
NAME STATUS AGE
node1 Ready 32m
node2 Ready 10m
[root@master html]#
Both nodes report Ready, so the cluster is running normally.
With that, the entire k8s cluster is set up.
5.7、Cluster service summary
In this lab, the Kubernetes deployment starts 13 services in total and opens 6 ports.
Details follow:
5.7.1、etcd
One service; it communicates on port 2379.
Start the service:
[root@xuegod63 ~]#systemctl restart etcd
[root@master html]# netstat -antup | grep etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 982/etcd
tcp 0 0 192.168.2.178:2379 0.0.0.0:* LISTEN 982/etcd
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 982/etcd
5.7.2、master
Four services; communication uses port 8080.
[root@xuegod63 ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld
[root@master html]# netstat -antup | grep kube-apiserve
tcp6 0 0 :::6443 :::* LISTEN 12495/kube-apiserve
tcp6 0 0 :::8080 :::* LISTEN 12495/kube-apiserve
[root@master ~]# netstat -antup | grep kube-controll
tcp6 0 0 :::10252 :::* LISTEN 12527/kube-controll
[root@master ~]# netstat -antup | grep kube-schedule
tcp6 0 0 :::10251 :::* LISTEN 12559/kube-schedule
[root@master ~]# netstat -antup | grep flanneld
tcp 0 0 192.168.2.178:60480 192.168.2.178:2379 ESTABLISHED 12413/flanneld
tcp 0 0 192.168.2.178:60482 192.168.2.178:2379 ESTABLISHED 12413/flanneld
udp 0 0 192.168.2.178:8285 0.0.0.0:* 12413/flanneld
5.7.3、node1-minion
Four services.
kube-proxy listens on port 10249; kubelet listens on ports 10248, 10250, and 10255.
[root@node1 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@node1 yum.repos.d]# netstat -autup | grep kube-proxy
tcp 0 0 localhost:10249 0.0.0.0:* LISTEN 2482/kube-proxy
tcp 0 0 node1:35532 master:webcache ESTABLISHED 2482/kube-proxy
tcp 0 0 node1:35534 master:webcache ESTABLISHED 2482/kube-proxy
[root@node1 yum.repos.d]# netstat -autup | grep kubelet
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 2749/kubelet
tcp 0 0 node1:35546 master:webcache ESTABLISHED 2749/kubelet
tcp 0 0 node1:35544 master:webcache ESTABLISHED 2749/kubelet
tcp 0 0 node1:35550 master:webcache ESTABLISHED 2749/kubelet
tcp 0 0 node1:35542 master:webcache ESTABLISHED 2749/kubelet
tcp6 0 0 [::]:10255 [::]:* LISTEN 2749/kubelet
tcp6 0 0 [::]:4194 [::]:* LISTEN 2749/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 2749/kubelet
5.7.4、node2-minion
Four services.
[root@node2 ~]# systemctl restart flanneld kube-proxy kubelet docker
6、The kubectl tool
kubectl manages the Kubernetes container platform.
6.1、kubectl overview
kubectl is a command-line interface for operating a Kubernetes cluster; its subcommands cover most management tasks.
6.2、Start the related services
systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld
6.3、Get the cluster status
[root@master html]# kubectl get nodes
NAME STATUS AGE
node1 Ready 3h
node2 Ready 2h
6.4、Check the version
[root@master html]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
6.5、Common kubectl commands
Operations for creating and deleting a pod with kubectl:
Command — Description
run — run a pod on the cluster
create — create a pod from a file or from standard input
delete — delete a pod by file, standard input, resource name, or label selector
6.6、kubectl usage examples
Run an image on the cluster.
6.6.1、Upload the base images to all node machines
Upload the docker.io-nginx.tar and pod-infrastructure.tar files to node1 and node2.
[root@node1 ~]# pwd
/root
[root@node1 ~]# ll
total 323096
-rw-------. 1 root root 1294 Dec 18 19:58 anaconda-ks.cfg
-rw-r--r-- 1 root root 112218624 Dec 18 19:28 docker.io-nginx.tar
-rw-r--r-- 1 root root 218623488 Dec 18 19:28 pod-infrastructure.tar
[root@node1 ~]#
Copy to node2:
[root@node1 ~]# scp docker.io-nginx.tar 192.168.2.180:/root/
[root@node1 ~]# scp pod-infrastructure.tar 192.168.2.180:/root/
If node1 and node2 do not have the docker.io-nginx.tar and pod-infrastructure.tar images, they will be pulled automatically from Docker Hub when first needed, which is slow; that is why we upload them to the servers in advance. pod-infrastructure.tar is the pod base image, and running the docker.io-nginx image also depends on it.
6.6.2、Import the images on all nodes
Import the images on node1 and node2.
node1:
[root@node1 ~]# pwd
/root
[root@node1 ~]# docker load -i docker.io-nginx.tar
cec7521cdf36: Loading layer [==================================================>] 58.44 MB/58.44 MB
350d50e58b6c: Loading layer [==================================================>] 53.76 MB/53.76 MB
63c39cd4a775: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:latest
[root@node1 ~]# docker load -i pod-infrastructure.tar
f1f88d1c363a: Loading layer [==================================================>] 205.9 MB/205.9 MB
bb4f52dd78f6: Loading layer [==================================================>] 10.24 kB/10.24 kB
c82569247c35: Loading layer [==================================================>] 12.73 MB/12.73 MB
Loaded image: registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@node1 ~]#
node2:
[root@node2 ~]# ls
anaconda-ks.cfg docker.io-nginx.tar pod-infrastructure.tar
[root@node2 ~]# pwd
/root
[root@node2 ~]# docker load -i docker.io-nginx.tar
cec7521cdf36: Loading layer [==================================================>] 58.44 MB/58.44 MB
350d50e58b6c: Loading layer [==================================================>] 53.76 MB/53.76 MB
63c39cd4a775: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:latest
[root@node2 ~]# docker load -i pod-infrastructure.tar
f1f88d1c363a: Loading layer [==================================================>] 205.9 MB/205.9 MB
bb4f52dd78f6: Loading layer [==================================================>] 10.24 kB/10.24 kB
c82569247c35: Loading layer [==================================================>] 12.73 MB/12.73 MB
Loaded image: registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@node2 ~]#
6.6.3、Verify the images on all nodes
After importing, verify the images on both nodes:
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 9e7424e5dbae 2 years ago 108.5 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 1158bd68df6d 2 years ago 208.6 MB
node1 now has the two images we just imported.
[root@node2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 231d40e811cd 4 weeks ago 126.3 MB
<none> <none> 9e7424e5dbae 2 years ago 108.5 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 1158bd68df6d 2 years ago 208.6 MB
[root@node2 ~]#
node2 likewise has the two imported images; when we create pods later, these local images are used directly.
6.6.4、kubectl run syntax
Like docker run, kubectl run starts a workload, running it in a pod.
Syntax:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas]
6.6.5、Start a pod
[root@master html]# kubectl run nginx --image=docker.io/nginx --replicas=1 --port=9000
deployment "nginx" created
Note: this uses the docker.io/nginx image, exposes container port 9000 via --port, and sets the replica count to 1.
Note: if the docker.io/nginx image is not present locally, node1 and node2 pull it from Docker Hub automatically. You can also point --image at a private registry, e.g. --image=192.168.2.178:5000/nginx:1.12
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 9e7424e5dbae 2 years ago 108.5 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 1158bd68df6d 2 years ago 208.6 MB
[root@node1 ~]#
After kubectl run, Kubernetes has created a Deployment.
6.6.6、View the Deployment
[root@master html]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 0 1m
[root@master html]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 38m
[root@master html]#
Now look at the generated pod; Kubernetes runs containers inside pods to simplify volume and network sharing.
6.6.7、View the pod
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-mg1sg 0/1 ContainerCreating 0 2m
[root@master html]#
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-mg1sg 1/1 Running 0 39m
[root@master html]#
When a pod is first created, the transition from ContainerCreating to Running can take a long time; on my VM it took well over ten minutes.
6.6.8、View pod details
[root@master html]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-2187705812-mg1sg 1/1 Running 0 55m 10.255.73.2 node2
This shows which node the pod runs on; in this case it is node2.
From the master node we can now ping the pod's IP, 10.255.73.2, and it responds.
[root@master html]# ping 10.255.73.2
PING 10.255.73.2 (10.255.73.2) 56(84) bytes of data.
64 bytes from 10.255.73.2: icmp_seq=1 ttl=61 time=0.795 ms
64 bytes from 10.255.73.2: icmp_seq=2 ttl=61 time=0.946 ms
^C
--- 10.255.73.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.795/0.870/0.946/0.081 ms
6.6.9、Describe a specific pod
Use the following command to see detailed information about a specific pod:
[root@master html]# kubectl describe pod nginx-2187705812-mg1sg
Name: nginx-2187705812-mg1sg
Namespace: default
Node: node2/192.168.2.180
Start Time: Sat, 21 Dec 2019 22:58:59 +0800
Labels: pod-template-hash=2187705812
run=nginx
Status: Running
IP: 10.255.73.2
Controllers: ReplicaSet/nginx-2187705812
Containers:
nginx:
Container ID: docker://e511702ff41c7b41141dac923ee5a56a8b3b460565544853cbf93668848e5638
Image: docker.io/nginx
Image ID: docker-pullable://docker.io/nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566
Port: 9000/TCP
State: Running
Started: Sat, 21 Dec 2019 23:14:09 +0800
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
41m 41m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-2187705812-mg1sg to node2
41m 41m 1 {kubelet node2} spec.containers{nginx} Normal Pulling pulling image "docker.io/nginx"
41m 25m 2 {kubelet node2} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
25m 25m 1 {kubelet node2} spec.containers{nginx} Normal Pulled Successfully pulled image "docker.io/nginx"
25m 25m 1 {kubelet node2} spec.containers{nginx} Normal Created Created container with docker id e511702ff41c; Security:[seccomp=unconfined]
25m 25m 1 {kubelet node2} spec.containers{nginx} Normal Started Started container with docker id e511702ff41c
6.6.10、Common pod states
1、ContainerCreating — the container is being created
2、ImagePullBackOff — pulling the image failed, and the kubelet is backing off before retrying
Note: if a pod does not run normally, the usual cause is that Docker Hub could not be reached and the image download failed. In that case, manually load the images on the node, or switch the Docker registry mirror to Aliyun.
3、Terminating — the state shown while a pod is being deleted
4、Running — the normal running state
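When troubleshooting these states, it helps to filter `kubectl get pod` output for pods that are not yet Running. A small sketch over saved sample output (pod names taken from the sessions above):

```shell
# Sample `kubectl get pod` output saved to a file.
cat > /tmp/pods.txt <<'EOF'
NAME                     READY   STATUS              RESTARTS   AGE
nginx-2187705812-mg1sg   0/1     ContainerCreating   0          2m
mysql-2261771434-x62bx   1/1     Running             0          1m
EOF
# Print the name and status of every pod whose STATUS column is not Running.
awk 'NR > 1 && $3 != "Running" { print $1, $3 }' /tmp/pods.txt
```

On a live cluster the same awk filter can be piped directly from `kubectl get pod`.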
6.6.11、Delete a pod with kubectl
Use kubectl delete to remove objects we created.
Delete the pod:
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-mg1sg 1/1 Running 0 1h
[root@master html]# kubectl delete pod nginx-2187705812-mg1sg
pod "nginx-2187705812-mg1sg" deleted
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-4tmkx 0/1 ContainerCreating 0 4s
[root@master html]#
Interestingly, after the pod is deleted, the platform automatically creates a new one to replace it. This is exactly what replicas: 1 guarantees: the platform always keeps one replica running. After roughly ten minutes, a new pod was up.
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-4tmkx 1/1 Running 0 17m
Now let's start another pod.
6.6.12、Create another pod
[root@master html]# kubectl run nginx01 --image=docker.io/nginx --replicas=1 --port=9009
deployment "nginx01" created
Note: the same image can be reused, but the pod name and port must not clash with existing ones.
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-4tmkx 1/1 Running 0 21m
nginx01-3827941023-h4vql 1/1 Running 0 1m
[root@master html]#
Both pods are now in the Running state.
6.6.13、Delete the deployment
Deleting a pod directly only triggers the replica-maintenance mechanism, so it does not really remove the workload. To remove it completely, delete the Deployment instead.
[root@master html]# kubectl delete deployment nginx01
deployment "nginx01" deleted
[root@master html]#
This removes the nginx01 deployment (and its pod) that we just created.
[root@master html]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-2187705812-4tmkx 1/1 Running 0 25m
[root@master html]#
Only one pod remains: the one named nginx that we created earlier.
7、YAML
7.1、YAML syntax
The basic syntax rules of YAML are:
1、Case-sensitive
2、Indentation expresses hierarchy
3、Tabs are not allowed for indentation; only spaces are.
4、The number of spaces does not matter, as long as elements at the same level are left-aligned.
5、# starts a comment; everything from it to the end of the line is ignored by the parser.
6、Sequence items (such as array elements) are introduced with a dash "-"; key/value pairs in a map are separated by a colon ":".
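Rule 3 is the most common source of errors. A quick sketch for catching tab-indented lines before feeding a file to kubectl (the file name here is hypothetical):

```shell
# Write a deliberately broken file: the second line is indented with a tab.
printf 'spec:\n\treplicas: 1\n' > /tmp/bad.yaml
# Grep for a literal tab character anywhere in the file.
if grep -q "$(printf '\t')" /tmp/bad.yaml; then
    echo "tab indentation found"
fi
```

Running such a check on a real manifest avoids the cryptic parse errors YAML loaders emit for tabs.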
7.2、YAML data structures
Mapping: a collection of key/value pairs, also called a hash or dictionary.
Sequence: an ordered list of values, also called an array or list.
Scalar: a single, indivisible value.
7.2.1、Data structures: mappings
A mapping's key/value pairs are written with a colon.
Example 1: animal maps to pets
animal: pets
YAML also allows an inline form that writes all key/value pairs on one line.
Example 2: the hash mapping contains name and foo
hash:
name: Steve
foo: bar
or
hash: { name: Steve, foo: bar }
7.2.2、Sequences
A group of lines beginning with dashes forms a sequence.
- Cat
- Dog
- Goldfish
Converted to JavaScript:
[ 'Cat', 'Dog', 'Goldfish' ]
If a child member is itself a sequence, it can be indented below the item; sequences can nest inside sequences.
-
- Cat
- Dog
- Goldfish
Converted to JavaScript:
[ [ 'Cat', 'Dog', 'Goldfish' ] ]
Sequences can also use the inline notation.
animal: [Cat, Dog]
Converted to JavaScript:
{ animal: [ 'Cat', 'Dog' ] }
7.2.3、Composite structures
Mappings and sequences can be combined into composite structures.
Example: write a bat.yaml configuration file with basic information about the BAT companies.
[root@master ~]# vim bat.yaml #enter the following content
bat:
website:
baidu: http://www.baidu.com
qq: http://www.qq.com
ali:
- http://www.taobao.com
- http://www.tmall.com
ceo:
yanhongli: Robin Li
huatengma: Pony Ma
yunma: Jack Ma
Note: the overall format is:
mapping:
  mapping:
    mapping: value
  mapping:
    - item
    - item
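For comparison, the bat.yaml structure above corresponds to the following JSON. This sketch simply prints the equivalent nested structure with python3's json module (the CEO names are rendered in English here):

```shell
python3 - <<'EOF'
import json

# Same data as bat.yaml, expressed as nested dicts (mappings) and lists (sequences).
bat = {
    "bat": {
        "website": {
            "baidu": "http://www.baidu.com",
            "qq": "http://www.qq.com",
            "ali": ["http://www.taobao.com", "http://www.tmall.com"],
        },
        "ceo": {
            "yanhongli": "Robin Li",
            "huatengma": "Pony Ma",
            "yunma": "Jack Ma",
        },
    }
}
print(json.dumps(bat, indent=2))
EOF
```

Seeing the JSON form makes it clear which YAML keys are mappings and which are sequences.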
7.2.4、Scalars
Scalars are the most basic, indivisible values: strings, booleans, integers, floats, null, times, and dates.
Example: a number is written directly as a literal.
number: 12.30
7.3、A practical example
Use kubectl create to load a YAML file and generate a Deployment.
Complex requirements make kubectl run commands very long, error-prone, and impossible to save, so in most scenarios a YAML or JSON file is used instead.
Create the mysql-deployment.yaml file.
Upload the MySQL server image docker.io-mysql-mysql-server.tar to node1 and node2.
7.3.1、Upload the image to all nodes
[root@node1 ~]# ls
anaconda-ks.cfg docker.io-mysql-mysql-server.tar docker.io-nginx.tar pod-infrastructure.tar
[root@node1 ~]# pwd
/root
scp to node2:
[root@node1 ~]# scp docker.io-mysql-mysql-server.tar node2:/root/
[root@node2 ~]# pwd
/root
[root@node2 ~]# ls
anaconda-ks.cfg docker.io-mysql-mysql-server.tar docker.io-nginx.tar pod-infrastructure.tar
[root@node2 ~]#
7.3.2、Upload mysql-deployment.yaml to the master management node
[root@master ~]# pwd
/root
[root@master ~]# ls
anaconda-ks.cfg bat.yaml mysql-deployment.yaml
[root@master ~]#
7.3.3、Import the MySQL image on node1 and node2
[root@node1 ~]# pwd
/root
[root@node1 ~]# ls
anaconda-ks.cfg docker.io-mysql-mysql-server.tar docker.io-nginx.tar pod-infrastructure.tar
[root@node1 ~]# docker load -i docker.io-mysql-mysql-server.tar
0302be4b1718: Loading layer [==================================================>] 124.3 MB/124.3 MB
f9deff9cb67e: Loading layer [==================================================>] 128.7 MB/128.7 MB
c4c921f94c30: Loading layer [==================================================>] 9.216 kB/9.216 kB
0c39b2c234c8: Loading layer [==================================================>] 3.072 kB/3.072 kB
Loaded image: docker.io/mysql/mysql-server:latest
[root@node1 ~]#
[root@node2 ~]# pwd
/root
[root@node2 ~]# ls
anaconda-ks.cfg docker.io-mysql-mysql-server.tar docker.io-nginx.tar pod-infrastructure.tar
[root@node2 ~]# docker load -i docker.io-mysql-mysql-server.tar
0302be4b1718: Loading layer [==================================================>] 124.3 MB/124.3 MB
f9deff9cb67e: Loading layer [==================================================>] 128.7 MB/128.7 MB
c4c921f94c30: Loading layer [==================================================>] 9.216 kB/9.216 kB
0c39b2c234c8: Loading layer [==================================================>] 3.072 kB/3.072 kB
Loaded image: docker.io/mysql/mysql-server:latest
7.3.4、Contents of mysql-deployment.yaml
kind: Deployment
#Create a pod resource via a Deployment; older k8s versions could use kind: ReplicationController instead
apiVersion: extensions/v1beta1
metadata:
name: mysql
#Name of the Deployment; must be unique cluster-wide
spec:
replicas: 1
#Desired number of pod replicas; 1 means only one pod (with one container) runs
template:
#Pod replicas (instances) are created from this template
metadata:
labels:
#Target pods carry this label; it defaults to the same value as name
name: mysql
#Define the pod's name as mysql
spec:
containers:
# Definition of the containers in the pod
- name: mysql
#Container name
image: docker.io/mysql/mysql-server
#The Docker image for this container
imagePullPolicy: IfNotPresent
#The default is imagePullPolicy: Always, which always pulls the image from the registry instead of using a local copy.
#Other pull-policy values:
#IfNotPresent: prefer the local image if it exists, which speeds up startup.
#Never: never pull; use only the local image, and fail if it is missing.
ports:
- containerPort: 3306
#Port exposed by the container
protocol: TCP
env:
#Environment variables injected into the container
- name: MYSQL_ROOT_PASSWORD
#Set the MySQL root password
value: "hello123"
7.3.5、Structure of the YAML file
Note: mysql-deployment.yaml nests three definitions:
Deployment definition
Pod definition
Container definition
7.3.6、Create resources from the YAML file
Use mysql-deployment.yaml to create (and later delete) the MySQL resources.
[root@master ~]# kubectl create -f mysql-deployment.yaml
deployment "mysql" created
[root@master ~]#
Note: when a directory contains multiple YAML files, kubectl create -f <directory> creates them all in one go:
[root@master tmp]# kubectl create -f yamls/
deployment "mysql" created
deployment "mysql1" created
Use kubectl get to view pod details.
7.3.7、View the created pod
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 1m
nginx-2187705812-4tmkx 1/1 Running 0 20h
[root@master ~]#
7.3.8、View the created deployment
[root@master ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mysql 1 1 1 1 2m
nginx 1 1 1 1 21h
7.3.9、View pod details
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mysql-2261771434-x62bx 1/1 Running 0 3m 10.255.73.2 node2
nginx-2187705812-4tmkx 1/1 Running 0 20h 10.255.31.2 node1
The -o wide flag shows more detail, such as the node each pod runs on and the pod's cluster IP.
Note: 10.255.73.2 is an address from the subnet defined in flannel; the pod communicates with the master through this IP.
7.3.10、View services
[root@master ~]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 3d
We have not created any services yet; only the default kubernetes service exists, so the MySQL service is not reachable through a service for now.
7.3.11、Test connectivity
#The pod IP is reachable:
[root@master ~]# ping 10.255.73.2
PING 10.255.73.2 (10.255.73.2) 56(84) bytes of data.
64 bytes from 10.255.73.2: icmp_seq=1 ttl=61 time=0.522 ms
64 bytes from 10.255.73.2: icmp_seq=2 ttl=61 time=3.05 ms
^C
--- 10.255.73.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.522/1.790/3.058/1.268 ms
[root@master ~]# ping 10.255.73.1
PING 10.255.73.1 (10.255.73.1) 56(84) bytes of data.
64 bytes from 10.255.73.1: icmp_seq=1 ttl=62 time=0.587 ms
64 bytes from 10.255.73.1: icmp_seq=2 ttl=62 time=2.02 ms
^C
--- 10.255.73.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.587/1.308/2.029/0.721 ms
[root@master ~]#
#Pinging the docker0 address on node2 also succeeds.
Summary: the master, node2, pods, and containers all communicate over flannel-assigned addresses; the flannel tunnel joins physically separate hosts and containers into a single virtual LAN.
7.3.12、Recap: flannel address configuration
This section does not need to be repeated; it was already done earlier.
7.3.12.1、Set up the etcd network key
[root@xuegod63 ~]# etcdctl mkdir /k8s/network #create the /k8s/network directory to store flannel network information
7.3.12.2、Set the network value
master:
etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
#assign the string value '{"Network": "10.255.0.0/16"}' to /k8s/network/config
This is where the network was configured; the value is ultimately stored in etcd.
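The value stored under /k8s/network/config is plain JSON, so it can be inspected or validated locally before (or after) writing it to etcd. A sketch using python3, with the same literal as above:

```shell
# The JSON string that was stored in etcd.
CONFIG='{"Network": "10.255.0.0/16"}'
# Extract the Network field, much as flanneld does when it reads etcd.
echo "$CONFIG" | python3 -c 'import json, sys; print(json.load(sys.stdin)["Network"])'
```

If the JSON is malformed, python3 raises an error immediately, which is easier to debug than flanneld failing to pick a subnet at startup.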
7.3.13、View the running MySQL Docker instances on node2
[root@node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2fe9d8e43131 docker.io/mysql/mysql-server "/entrypoint.sh mysql" 19 minutes ago Up 19 minutes (healthy) k8s_mysql.31ec27ee_mysql-2261771434-x62bx_default_fdd4f3fb-24b6-11ea-ade8-000c29d8be5b_6872c9d5
57f3831e25ec registry.access.redhat.com/rhel7/pod-infrastructure:latest "/usr/bin/pod" 19 minutes ago Up 19 minutes k8s_POD.1d520ba5_mysql-2261771434-x62bx_default_fdd4f3fb-24b6-11ea-ade8-000c29d8be5b_964c94c5
[root@node2 ~]#
Two Docker instances are running. The second one is the pod's base-service image; to run MySQL, the pod-infrastructure container must be started first.
7.3.14、Abbreviation summary
Resource types that the get command can query:
deployments (abbreviated deploy)
events (abbreviated ev)
namespaces (abbreviated ns)
nodes (abbreviated no)
pods (abbreviated po)
replicasets (abbreviated rs)
replicationcontrollers (abbreviated rc)
services (abbreviated svc)
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 24m
nginx-2187705812-4tmkx 1/1 Running 0 20h
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 24m
nginx-2187705812-4tmkx 1/1 Running 0 20h
[root@master ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 24m
nginx-2187705812-4tmkx 1/1 Running 0 20h
[root@master ~]#
As you can see, the resource types accept abbreviations.
7.3.15、kubectl describe
Use describe to view detailed information about objects in k8s.
describe [dɪˈskraɪb]: to give a detailed account of
Syntax: kubectl describe pod <pod-name>
Syntax: kubectl describe node <node-name>
Syntax: kubectl describe deployment <deployment-name>
Use describe to view the detailed description of a pod:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 26m
nginx-2187705812-4tmkx 1/1 Running 0 20h
7.3.15.1、View a pod
[root@master ~]# kubectl describe pod mysql-2261771434-x62bx
Name: mysql-2261771434-x62bx
Namespace: default
Node: node2/192.168.2.180
Start Time: Sun, 22 Dec 2019 20:31:34 +0800
Labels: name=mysql
pod-template-hash=2261771434
Status: Running
IP: 10.255.73.2
Controllers: ReplicaSet/mysql-2261771434
Containers:
mysql:
Container ID: docker://2fe9d8e43131fc0b315fca6a9f72e34b2207c85fe1eca508ae6c6600dbf2f274
Image: docker.io/mysql/mysql-server
Image ID: docker://sha256:a3ee341faefb76c6c4c6f2a4c37c513466f5aae891ca2f3cb70fd305b822f8de
Port: 3306/TCP
State: Running
Started: Sun, 22 Dec 2019 20:31:35 +0800
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables:
MYSQL_ROOT_PASSWORD: hello123
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
26m 26m 1 {default-scheduler } Normal Scheduled Successfully assigned mysql-2261771434-x62bx to node2
26m 26m 2 {kubelet node2} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
26m 26m 1 {kubelet node2} spec.containers{mysql} Normal Pulled Container image "docker.io/mysql/mysql-server" already present on machine
26m 26m 1 {kubelet node2} spec.containers{mysql} Normal Created Created container with docker id 2fe9d8e43131; Security:[seccomp=unconfined]
26m 26m 1 {kubelet node2} spec.containers{mysql} Normal Started Started container with docker id 2fe9d8e43131
[root@master ~]#
This lets you see the errors and warnings raised while creating the pod, for example the following warning:
26m 26m 2 {kubelet node2} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
7.3.15.2、View a node
Use describe to view the detailed description of a node:
[root@master ~]# kubectl describe node node2
Name: node2
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node2
Taints: <none>
CreationTimestamp: Sat, 21 Dec 2019 19:39:10 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Sun, 22 Dec 2019 21:04:30 +0800 Sat, 21 Dec 2019 19:39:10 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Sun, 22 Dec 2019 21:04:30 +0800 Sat, 21 Dec 2019 19:39:10 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 22 Dec 2019 21:04:30 +0800 Sat, 21 Dec 2019 19:39:10 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Sun, 22 Dec 2019 21:04:30 +0800 Sat, 21 Dec 2019 19:39:20 +0800 KubeletReady kubelet is posting ready status
Addresses: 192.168.2.180,192.168.2.180,node2
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2855652Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2855652Ki
pods: 110
System Info:
Machine ID: c32dec1481dc4cf9b03963bf6e599d20
System UUID: 4CBF4D56-41E3-C9F6-C1F6-F697A64E2664
Boot ID: 2dc7b509-d86d-4cdd-83c6-a9212b209eaa
Kernel Version: 3.10.0-693.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.12.6
Kubelet Version: v1.5.2
Kube-Proxy Version: v1.5.2
ExternalID: node2
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default mysql-2261771434-x62bx 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
0 (0%) 0 (0%) 0 (0%) 0 (0%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33m 33m 2 {kubelet node2} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "mysql-2261771434-x62bx_default(fdd4f3fb-24b6-11ea-ade8-000c29d8be5b)". Falling back to DNSDefault policy.
[root@master ~]#
7.3.15.3、View a deployment
Use describe to view the detailed description of a deployment:
[root@master ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mysql 1 1 1 1 36m
nginx 1 1 1 1 22h
[root@master ~]# kubectl describe deploy mysql
Name: mysql
Namespace: default
CreationTimestamp: Sun, 22 Dec 2019 20:31:34 +0800
Labels: name=mysql
Selector: name=mysql
Replicas: 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-2261771434 (1/1 replicas created)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37m 37m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set mysql-2261771434 to 1
[root@master ~]#
7.3.16、Other common kubectl commands and options
Command Description
logs fetch the logs of a container in a pod
exec run a command inside a pod
cp copy files into or out of a container
attach attach to a running container and stream its output in real time
7.3.16.1、Create a test container resource
Lab environment: first create a mysql resource:
kubectl create -f /root/mysql-deployment.yaml
#This command was already run earlier; there is no need to run it again.
7.3.16.2、kubectl logs
Similar to docker logs, kubectl logs fetches the logs of the container running in a pod, which is key information for troubleshooting:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 44m
nginx-2187705812-4tmkx 1/1 Running 0 21h
[root@master ~]# kubectl logs mysql-2261771434-x62bx
[Entrypoint] MySQL Docker Image 5.7.20-1.1.2
[Entrypoint] Initializing database
[Entrypoint] Database initialized
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
[Entrypoint] ignoring /docker-entrypoint-initdb.d/*
[Entrypoint] Server shut down
[Entrypoint] MySQL init process done. Ready for start up.
[Entrypoint] Starting MySQL 5.7.20-1.1.2
[root@master ~]#
7.3.16.3、kubectl exec
The exec command runs a command inside a pod. Run cat /etc/my.cnf inside the mysql container:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 46m
nginx-2187705812-4tmkx 1/1 Running 0 21h
[root@master ~]# kubectl exec mysql-2261771434-x62bx cat /etc/my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master ~]#
Successfully retrieved the mysql configuration file from inside the docker container.
7.3.16.4、Enter the container
You can also enter the container directly and run commands there:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 47m
nginx-2187705812-4tmkx 1/1 Running 0 21h
[root@master ~]# kubectl exec -it mysql-2261771434-x62bx bash
bash-4.2#
7.3.16.5、kubectl cp
7.3.16.5.1、Copy out of a container
Copy the hosts file from the container to /tmp on the physical host:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-2261771434-x62bx 1/1 Running 0 47m
nginx-2187705812-4tmkx 1/1 Running 0 21h
[root@master ~]# kubectl cp mysql-2261771434-x62bx:/etc/hosts /tmp/hosts
error: unexpected EOF
The copy failed, so check the usage notes for kubectl cp:
[root@master ~]# kubectl cp --help
Copy files and directories to and from containers.
Examples:
# !!!Important Note!!!
# Requires that the 'tar' binary is present in your container
# image. If 'tar' is not present, 'kubectl cp' will fail.
#So using kubectl cp requires the tar binary inside your container instance.
#If the tar command is not present in the image, kubectl cp will fail.
So let's install tar in the mysql pod:
[root@master ~]# kubectl exec -it mysql-2261771434-x62bx bash
bash-4.2# yum install tar -y
Loaded plugins: ovl
ol7_UEKR4 | 2.5 kB 00:00:00
ol7_latest | 2.7 kB 00:00:00
Package 2:tar-1.26-35.el7.x86_64 already installed and latest version
Nothing to do
bash-4.2# exit
exit
[root@master ~]#
Note: installing tar here uses an online yum repository, so the container needs internet access. This step is extremely slow; it took me a long time (over 3 hours) to finish.
Now try the copy again:
[root@master ~]# ll /tmp/hosts
ls: cannot access /tmp/hosts: No such file or directory
[root@master ~]# kubectl cp mysql-2261771434-x62bx:/etc/hosts /tmp/hosts
tar: Removing leading `/' from member names
[root@master ~]# ls /tmp/hosts
/tmp/hosts
7.3.16.5.2、Copy from the host into a container
Copy a file from the VM into the docker container.
Enter the container:
[root@master ~]# kubectl exec -it mysql-2261771434-x62bx bash
Check whether the file exists in the container:
bash-4.2# ls /tmp/hosts
ls: cannot access /tmp/hosts: No such file or directory
bash-4.2# exit
exit
Copy the file into the container:
[root@master ~]# kubectl cp /tmp/hosts mysql-2261771434-x62bx:/tmp/hosts
Enter the container:
[root@master ~]# kubectl exec -it mysql-2261771434-x62bx bash
Check again whether the file exists:
bash-4.2# ls /tmp/hosts
/tmp/hosts
Check the file content:
bash-4.2# cat /tmp/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.255.73.2 mysql-2261771434-x62bx
bash-4.2#
7.3.16.6、kubectl attach
kubectl attach streams messages from a pod's container in real time, continuously, like watching a log with tail -f /var/log/messages.
kubectl logs dumps all messages at once, like cat /etc/passwd.
attach [əˈtætʃ]: to fasten, to join
[root@master ~]# kubectl attach mysql-2261771434-x62bx
If you don't see a command prompt, try pressing enter.
[Entrypoint] MySQL Docker Image 5.7.20-1.1.2
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
[Entrypoint] Initializing database
[Entrypoint] Database initialized
[Entrypoint] ignoring /docker-entrypoint-initdb.d/*
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
[Entrypoint] Server shut down
[Entrypoint] MySQL init process done. Ready for start up.
[Entrypoint] Starting MySQL 5.7.20-1.1.2
^C
[root@master ~]#
#Note: so far, the nginx and mysql we created are only deployments (pod resources); no corresponding service exists yet, so nginx and mysql cannot be accessed directly from outside the cluster.
8、Build the K8s web management UI
Deploy the k8s Dashboard web UI:
Create the dashboard-deployment.yaml deployment config file
Create the dashboard-service.yaml service config file
Prepare the kubernetes-related images
Start the dashboard deployment and service
Troubleshooting experience
View the kubernetes dashboard web UI
Tear down the web UI applications
Build a guestbook example based on redis and docker on the kubernetes cluster:
Create the Redis master deployment config file
Create the redis master service config file
Create the redis slave deployment config file
Create the slave service config file
Create the frontend guestbook deployment config file
Create the frontend guestbook service config file
Access the guestbook from the external network
8.1、Setup steps
Prepare the deployment and service config files -> import the required images -> start the deployment and service
8.1.1、Confirm the cluster environment
[root@master ~]# kubectl get node
NAME STATUS AGE
node1 Ready 2d
node2 Ready 2d
[root@master ~]#
8.1.2、Upload the yaml files to master
Upload dashboard-deployment.yaml and dashboard-service.yaml to the /root/ directory on master:
[root@master ~]# ls
anaconda-ks.cfg bat.yaml dashboard-deployment.yaml dashboard-service.yaml mysql-deployment.yaml
[root@master ~]#
8.1.3、Edit the apiserver address
Change the apiserver address to your own master's address:
[root@master ~]# vi dashboard-deployment.yaml
args:
- --apiserver-host=http://192.168.2.178:8080
[root@master ~]# cat dashboard-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
name: kubernetes-dashboard-latest
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: latest
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubernetes-dashboard
image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
imagePullPolicy: IfNotPresent
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
args:
- --apiserver-host=http://192.168.2.178:8080
# - --apiserver-host=http://192.168.2.178:8080
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
[root@master ~]# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
[root@master ~]#
8.1.4、Configuration file notes
8.1.4.1、Notes on the deployment file
image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
#Change this to an image source you can actually reach. bestwu/kubernetes-dashboard-amd64 is a Chinese-language web UI image.
limits: #limits the CPU and memory hardware resources the pod may use.
args:
- --apiserver-host=http://192.168.2.178:8080
#When using this deployment file, set this to your own apiserver address and port.
8.1.4.2、Notes on the service file
name: kubernetes-dashboard
#must match the definition in the deployment above
namespace: kube-system
#must match the definition in the deployment above
8.1.5、The three service port types
8.1.5.1、port
port is the port the service exposes on its cluster IP; it is the entry point for clients inside the cluster to access the service.
8.1.5.2、nodePort
nodePort is the way k8s gives clients outside the cluster an entry point to access the service.
8.1.5.3、targetPort
targetPort is the port on the container instance inside the pod. Traffic arriving on port and nodePort is finally forwarded by kube-proxy to the targetPort of the backend pod, entering the container.
8.1.5.4、Diagram
8.1.5.5、port and nodePort summary
port and nodePort are both service ports: the former is exposed to clients inside the cluster, the latter to clients outside it. Traffic from either port passes through the kube-proxy reverse proxy to the backend pod's targetPort and thus reaches the container in the pod.
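The relationship among the three ports can be sketched in a minimal NodePort service manifest (the name my-web and all port numbers here are hypothetical, chosen only for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web              # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-web             # matches pods labelled app=my-web
  ports:
  - port: 80                # entry point for clients inside the cluster (clusterIP:80)
    nodePort: 30080         # entry point for clients outside the cluster (any nodeIP:30080)
    targetPort: 9090        # port of the container inside the pod
```

An in-cluster client connects to clusterIP:80, an external client connects to any node's IP on 30080, and kube-proxy forwards both to port 9090 of the backend pod.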
8.1.6、Service overview
A service is a routing-proxy abstraction over pods, used to solve service discovery between pods. Because a pod's runtime state can change dynamically (it may be moved to another machine, or be terminated while scaling in), clients cannot access the pod's service through a hard-coded IP. The service hides the pod's dynamic changes from clients: a client only needs to know the service's address, and the service does the proxying.
8.1.7、replicationController
A replicationController is a replication abstraction over pods, used to solve pod scale-out and scale-in. Distributed applications commonly replicate resources for performance or high availability and scale dynamically with load. With a replicationController you declare how many replicas an application needs; Kubernetes creates one pod per replica and ensures the number of running pods always equals the declared count (for example, when a pod dies, a new pod is automatically created to replace it).
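As a sketch, a replicationController declaring three replicas might look like the following (the name backend-rc is hypothetical; the nginx image is reused from this lab's node images):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend-rc          # hypothetical controller name
spec:
  replicas: 3               # Kubernetes keeps exactly 3 pods running
  selector:
    app: backend
  template:                 # pod template used to create (replacement) pods
    metadata:
      labels:
        app: backend        # must match the selector above
    spec:
      containers:
      - name: backend
        image: docker.io/nginx
        ports:
        - containerPort: 80
```

If one of the three pods dies, the controller sees that the actual count (2) no longer equals spec.replicas (3) and creates a replacement pod from the template.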
8.1.8、service and replicationController summary
service and replicationController are only abstractions built on top of pods and ultimately act on pods, so how are they linked to pods? This is where labels come in. A label is simply a set of key/value tags attached to a pod for searching or association, and service and replicationController are associated with pods precisely through labels. As shown in the figure, three pods all carry the label "app=backend"; creating the service and the replicationController with the same label "app=backend" links them, through the label selector mechanism, to those three pods. For example, when another frontend pod accesses that service, the request is automatically forwarded to one of the backend pods.
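The label association described above can be sketched as a service whose selector carries the same label as the pods, "app=backend" (the service name backend-svc is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc         # hypothetical service name
spec:
  selector:
    app: backend            # associates the service with every pod labelled app=backend
  ports:
  - port: 80
    targetPort: 80
```

Any pod carrying the label app=backend, no matter how it was created, automatically becomes a backend of this service.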
8.1.9、Prepare the k8s images
The official dashboard-deployment.yaml references the dashboard image gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1. Starting a k8s pod also needs an extra image, registry.access.redhat.com/rhel7/pod-infrastructure:latest. Both download slowly from inside China, so you can pull them in advance from docker's reachable registries:
8.1.9.1、Upload the images locally
node1 and node2 both need the following 2 images imported. Upload the images locally to node1 and node2:
[root@node1 ~]# mkdir /root/k8s
[root@node2 ~]# mkdir /root/k8s
Upload the image file to node1 and node2:
[root@node1 k8s]# pwd
/root/k8s
[root@node1 k8s]# ls
docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar
[root@node1 k8s]#
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 231d40e811cd 4 weeks ago 126.3 MB
<none> <none> 9e7424e5dbae 2 years ago 108.5 MB
docker.io/mysql/mysql-server latest a3ee341faefb 2 years ago 245.7 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 1158bd68df6d 2 years ago 208.6 MB
Import the image on node1:
[root@node1 k8s]# docker load -i docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar
8fc4262856aa: Loading layer [==================================================>] 139.3 MB/139.3 MB
Loaded image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
[root@node1 k8s]#
Check the images on node1:
[root@node1 k8s]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 231d40e811cd 4 weeks ago 126.3 MB
<none> <none> 9e7424e5dbae 2 years ago 108.5 MB
docker.io/mysql/mysql-server latest a3ee341faefb 2 years ago 245.7 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 1158bd68df6d 2 years ago 208.6 MB
docker.io/bestwu/kubernetes-dashboard-amd64 v1.6.3 691a82db1ecd 2 years ago 139 MB
[root@node1 k8s]#
The pod-infrastructure image was imported earlier.
Copy docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar to node2:
[root@node1 k8s]# scp docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar node2:/root/k8s/
root@node2's password:
docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar 100% 133MB 66.6MB/s 00:01
[root@node1 k8s]#
Import the image on node2:
[root@node2 k8s]# pwd
/root/k8s
[root@node2 k8s]# ls
docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar
[root@node2 k8s]# docker load -i docker.io-bestwu-kubernetes-dashboard-amd64-zh.tar
8fc4262856aa: Loading layer [==================================================>] 139.3 MB/139.3 MB
Loaded image: docker.io/bestwu/kubernetes-dashboard-amd64:v1.6.3
[root@node2 k8s]#
8.1.9.2、Pull the images online
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
You can pull this image directly on the node; it comes from Red Hat's official docker registry, which is directly reachable.
gcr.io is not reachable from inside China, so I used an image from docker.io instead:
[root@xuegod62 ~]# docker search kubernetes-dashboard-amd
[root@xuegod62 ~]# docker pull docker.io/mritd/kubernetes-dashboard-amd64
This approach failed for me and is not reliable.
8.1.10、Upload the yaml files
On master:
[root@master ~]# mkdir /etc/kubernetes/yaml
[root@master ~]# cd /etc/kubernetes/yaml
[root@master yaml]#
Upload all the yaml config files into this /etc/kubernetes/yaml directory on master:
[root@master yaml]# ls
dashboard-deployment.yaml frontend-deployment.yaml redis-master-deployment.yaml redis-slave-deployment.yaml
dashboard-service.yaml frontend-service.yaml redis-master-service.yaml redis-slave-service.yaml
[root@master yaml]#
The entries shown in red (in the original document) are the yaml files that need to be uploaded.
8.1.11、Create the deployment
[root@master yaml]# kubectl create -f /etc/kubernetes/yaml/dashboard-deployment.yaml
deployment "kubernetes-dashboard-latest" created
[root@master yaml]#
8.1.12、Create the service
[root@master yaml]# kubectl create -f /etc/kubernetes/yaml/dashboard-service.yaml
service "kubernetes-dashboard" created
[root@master yaml]#
At this point, the dashboard setup is complete.
8.1.13、View the deployment
[root@master yaml]# kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default mysql 1 1 1 1 1d
default nginx 1 1 1 1 1d
kube-system kubernetes-dashboard-latest 1 1 1 1 1m
[root@master yaml]#
Column meanings: DESIRED is the target replica count; CURRENT is the current count; UP-TO-DATE is the number of replicas on the latest spec; AVAILABLE is the number of replicas available to serve.
Note: because we defined a namespace, --all-namespaces must be added for it to show up; by default only deployments in the default namespace are listed.
8.1.14、View the service
[root@master yaml]# kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.254.0.1 <none> 443/TCP 4d
kube-system kubernetes-dashboard 10.254.209.198 <none> 80/TCP 1m
[root@master yaml]#
8.1.15、View the pods
[root@master yaml]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default mysql-2261771434-x62bx 1/1 Running 0 1d 10.255.73.2 node2
default nginx-2187705812-4tmkx 1/1 Running 0 1d 10.255.31.2 node1
kube-system kubernetes-dashboard-latest-810449173-zhc37 1/1 Running 0 3m 10.255.73.3 node2
[root@master yaml]#
8.1.16、View pods in a specific namespace
[root@master yaml]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-dashboard-latest-810449173-zhc37 1/1 Running 0 4m 10.255.73.3 node2
[root@master yaml]#
8.1.17、Troubleshooting experience
8.1.17.1、kubelet listen address not changed
[root@master kubernetes]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kubernetes-dashboard-latest-2351462241-slg41 0/1 CrashLoopBackOff 5 5m 10.255.9.5 node1
View the error message:
#kubectl logs -f kubernetes-dashboard-latest-810449173-zhc37 -n kube-system #output:
Error from server:
Get https://node1:10250/containerLogs/kube-system/kubernetes-dashboard-latest-2351462241-slg41/kubernetes-dashboard?follow=true: dial tcp 192.168.2.179:10250: getsockopt: connection refused
Solution:
[root@node1 ~]# grep -v '^#' /etc/kubernetes/kubelet
Change: KUBELET_ADDRESS="--address=127.0.0.1"
To: KUBELET_ADDRESS="--address=0.0.0.0"
[root@node1 ~]#systemctl restart kubelet.service
Run the same commands on node2:
[root@node2 ~]# grep -v '^#' /etc/kubernetes/kubelet
Change: KUBELET_ADDRESS="--address=127.0.0.1"
To: KUBELET_ADDRESS="--address=0.0.0.0"
[root@node2 ~]#systemctl restart kubelet.service
8.1.17.2、Firewall not stopped
[root@master ~]# kubectl logs kubernetes-dashboard-latest-2661119796-64km9 -n kube-system
Error from server: Get
https://node2:10250/containerLogs/kube-system/kubernetes-dashboard-latest-2661119796-64km9/kubernetes-dashboard: dial tcp 192.168.2.179:10250: getsockopt: no route to host
Solution:
[root@node2 ~]# iptables -F #flush the firewall rules
[root@node2 ~]# systemctl stop firewalld
[root@node2 ~]# systemctl disable firewalld
After flushing the firewall, check the logs again:
[root@master ~]# kubectl logs kubernetes-dashboard-latest-810449173-zhc37 -n kube-system
Starting HTTP server on port 9090
Creating API server client for http://192.168.2.178:8080
Successful initial request to the apiserver, version: v1.5.2
Creating in-cluster Heapster client
The log output above shows the configuration succeeded.
8.2、Verify access from a web browser
At this point you can take a VM snapshot to make later labs easier.
8.3、Tear down the web UI applications
8.3.1、Delete the deployment
[root@master yaml]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mysql 1 1 1 1 1d
nginx 1 1 1 1 1d
[root@master yaml]# kubectl get deployment --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard-latest 1 1 1 1 15m
[root@master yaml]#
Note: be sure to include the namespace, otherwise you only get deployments in the default namespace.
Delete the deployment for the web UI:
[root@master yaml]# kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system
8.3.2、Delete the service for the web UI
[root@master yaml]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.254.209.198 <none> 80/TCP 17m
[root@master yaml]#
[root@master yaml]# kubectl delete svc kubernetes-dashboard --namespace=kube-system