1. Pre-installation environment preparation
(1) Environment
IP address     | Hostname             | OS version | Docker version | K8s version
192.168.16.160 | master01.dayi123.com | CentOS 7.5 | 18.09.0 (ce)   | 1.13.1
192.168.16.171 | node01.dayi123.com   | CentOS 7.5 | 18.09.0 (ce)   | 1.13.1
192.168.16.172 | node02.dayi123.com   | CentOS 7.5 | 18.09.0 (ce)   | 1.13.1
(2) System environment preparation
1) Configure the hosts file
# Add hostname resolution entries to /etc/hosts on all three hosts
192.168.16.160 master01 master01.dayi123.com
192.168.16.171 node01 node01.dayi123.com
192.168.16.172 node02 node02.dayi123.com
2) Disable SELinux and firewalld
# Disable SELinux and firewalld on all three hosts
]# systemctl stop firewalld
]# systemctl disable firewalld
]# setenforce 0
]# sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
3) Configure the Docker and Kubernetes repositories
# Configure the Docker repository on all three hosts
~]# cd /etc/yum.repos.d/
~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the Kubernetes repository on all three hosts
~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
2. Installing the Kubernetes master node
When installing Kubernetes with kubeadm, only docker-ce, kubeadm, and the kubelet and kubectl components need to be installed on the master node; the remaining components are built as containers, which kubeadm pulls and runs during initialization.
(1) Install and start the components
# Install the required components
~]# yum install docker-ce kubelet kubeadm kubectl
# Start docker
~]# systemctl daemon-reload
~]# systemctl stop docker
~]# systemctl start docker
~]# systemctl enable kubelet
# Enable iptables forwarding of bridged traffic
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables
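The two echo commands above do not survive a reboot. A persistent variant (a sketch; the drop-in filename below is arbitrary) is to put the keys into a sysctl drop-in file and load it with `sysctl --system`:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

After writing the file, `sysctl --system` applies it immediately and it is re-applied on every boot.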
(2) Initialize the master node with kubeadm
The options for kubeadm init can be listed with "kubeadm init --help"; the main ones are:
--kubernetes-version: the Kubernetes version to install
--pod-network-cidr: the pod network CIDR
--service-cidr: the service network CIDR
--ignore-preflight-errors=Swap: ignore the preflight error raised when swap is enabled
# Allow the kubelet to start even with swap enabled
]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Initialize the master node
~]# kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp: lookup www.ik83.io on 114.114.114.114:53: no such host
, error: exit status 1
……
242 packets transmitted, 0 received, 100% packet loss, time 241080ms
During initialization, the required Docker images are pulled from k8s.gcr.io, which cannot be reached from inside China, hence the error above. However, Docker Hub mirrors these Google images, so we can pull them from the Docker Hub mirror and retag them afterwards.
# Pull the images from Docker Hub and retag them
~]# docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
~]# docker pull mirrorgooglecontainers/pause:3.1
~]# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
~]# docker pull mirrorgooglecontainers/etcd:3.2.24
~]# docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
~]# docker pull coredns/coredns:1.2.6
~]# docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
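The pull-and-tag pairs above follow a fixed pattern, so they can also be generated with a short loop (a sketch; it only prints the commands — pipe the output to `sh` to actually run them, which assumes docker and network access):

```shell
#!/bin/sh
# Images kubeadm v1.13.1 needs, as mirrored under the
# mirrorgooglecontainers namespace on Docker Hub.
images="kube-apiserver:v1.13.1
kube-controller-manager:v1.13.1
kube-scheduler:v1.13.1
kube-proxy:v1.13.1
pause:3.1
etcd:3.2.24"

retag_cmds() {
  for img in $images; do
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
  done
  # coredns is published under its own Docker Hub organization
  echo "docker pull coredns/coredns:1.2.6"
  echo "docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
}

# Print the commands; run them with:  retag_cmds | sh
retag_cmds
```
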
After the images have been pulled manually, re-run the master initialization.
# Re-initialize
~]# kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
……
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.16.160:6443 --token p1oy6h.ynakxpzoco505z1h --discovery-token-ca-cert-hash sha256:f85d1778acf19651b092f4ebba09e1ed0d7d4c853999ab54de00f878d61367ac
After initialization completes, run the following as a regular user, as the output instructs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
(3) Install the flannel network add-on
(flannel's GitHub address: https://github.com/coreos/flannel/)
# Install flannel. After the command below succeeds, the flannel Docker image is pulled in the background, which can take a while.
~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
# After flannel is installed, the master node holds the following Docker images
~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 10 days ago 80.2MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 10 days ago 181MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 10 days ago 146MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 10 days ago 79.6MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 3 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 11 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
3. Installing the Kubernetes worker nodes
(1) Install kubeadm and start the services
After the master node is installed and initialized, each worker node only needs docker-ce, kubelet, and kubeadm installed before it is joined to the cluster.
# Install kubeadm and docker-ce
~]# yum install docker-ce kubelet kubeadm
# Start docker
~]# systemctl daemon-reload
~]# systemctl stop docker
~]# systemctl start docker
# Enable iptables forwarding of bridged traffic
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables
(2) Initialize the node and join it to the cluster
When a node is joined to the cluster, it also has to pull the k8s.gcr.io/kube-proxy and k8s.gcr.io/pause images from k8s.gcr.io, as well as the quay.io/coreos/flannel image. Since k8s.gcr.io is unreachable, the k8s.gcr.io images must likewise be pulled from Docker Hub and retagged, or saved on the master with "docker save" and loaded on the node with "docker load".
# Pull the required images and retag them
~]# docker pull mirrorgooglecontainers/pause:3.1
~]# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
~]# docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
Once the images are ready, the node can be joined to the cluster.
# Join the node to the cluster; the join command below was generated when the master was initialized
~]# kubeadm join 192.168.16.160:6443 --token p1oy6h.ynakxpzoco505z1h --discovery-token-ca-cert-hash sha256:f85d1778acf19651b092f4ebba09e1ed0d7d4c853999ab54de00f878d61367ac --ignore-preflight-errors=Swap
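As an aside, the --discovery-token-ca-cert-hash value used above is the SHA-256 digest of the cluster CA's public key in DER form. On the real master, the input would be /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate instead so the pipeline can be run anywhere (it assumes openssl is installed):

```shell
#!/bin/sh
# Generate a throwaway CA certificate standing in for
# /etc/kubernetes/pki/ca.crt (hypothetical demo CA, 1-day validity).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, convert it to DER, and hash it --
# the same pipeline documented for kubeadm join.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"
```

Run against the real ca.crt on the master, this reproduces the hash shown in the join command, which is useful when the original kubeadm init output has been lost.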
(3) Verify the cluster from the master node
# Check the status of each node
~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01.dayi123.com Ready master 8h v1.13.1
node01.dayi123.com Ready <none> 7h3m v1.13.1
node02.dayi123.com Ready <none> 6h41m v1.13.1
# View the Kubernetes namespaces
~]# kubectl get ns
NAME STATUS AGE
default Active 8h
kube-public Active 8h
kube-system Active 8h
# View the status of the cluster components
~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
# View the running state and details of each service component in the cluster
~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-86c58d9df4-5kh2t 1/1 Running 0 6h43m 10.244.0.3 master01.dayi123.com <none> <none>
coredns-86c58d9df4-cwwjp 1/1 Running 0 8h 10.244.0.2 master01.dayi123.com <none> <none>
etcd-master01.dayi123.com 1/1 Running 0 8h 192.168.16.160 master01.dayi123.com <none> <none>
kube-apiserver-master01.dayi123.com 1/1 Running 0 8h 192.168.16.160 master01.dayi123.com <none> <none>
kube-controller-manager-master01.dayi123.com 1/1 Running 1 8h 192.168.16.160 master01.dayi123.com <none> <none>
kube-flannel-ds-amd64-4grnp 1/1 Running 67 6h48m 192.168.16.171 node01.dayi123.com <none> <none>
kube-flannel-ds-amd64-5nz7b 1/1 Running 0 8h 192.168.16.160 master01.dayi123.com <none> <none>
kube-flannel-ds-amd64-hg2f6 1/1 Running 56 6h26m 192.168.16.172 node02.dayi123.com <none> <none>
kube-proxy-65zh5 1/1 Running 0 6h26m 192.168.16.172 node02.dayi123.com <none> <none>
kube-proxy-9cvkd 1/1 Running 0 6h48m 192.168.16.171 node01.dayi123.com <none> <none>
kube-proxy-pqpjt 1/1 Running 0 8h 192.168.16.160 master01.dayi123.com <none> <none>
kube-scheduler-master01.dayi123.com 1/1 Running 1 8h 192.168.16.160 master01.dayi123.com <none> <none>
# View cluster status information
~]# kubectl cluster-info
Kubernetes master is running at https://192.168.16.160:6443
KubeDNS is running at https://192.168.16.160:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
4. Basic usage of Kubernetes
After Kubernetes is installed, some simple operations can be performed from the master node with the kubectl client.
(1) Creating a pod
The smallest scheduling unit in Kubernetes is the pod. Once the cluster is up, pods can be created to run the required services.
# Create a pod containing the nginx service in dry-run mode
~]# kubectl run nginx-test --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-test created (dry run)
# Create and run a pod containing the nginx service
~]# kubectl run nginx-test --image=nginx:1.14-alpine --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-test created
# The nginx pod is being created
~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-test-5bbfddf46b-bgjtz 0/1 ContainerCreating 0 33s
# View the created deployment
~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-test 1/1 1 1 3m20s
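Under the hood, `kubectl run` with this generator creates a Deployment. For reference, a roughly equivalent declarative manifest (a sketch written by hand, not generated by the tool; the file name is arbitrary) would be:

```yaml
# nginx-test-deploy.yaml -- declarative equivalent of the kubectl run command above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-test        # kubectl run labels pods with run=<name>
  template:
    metadata:
      labels:
        run: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
```

It could be applied with `kubectl apply -f nginx-test-deploy.yaml`, giving the same deployment as the imperative command.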
Once the pod is created and running, the service can be reached from any node inside the cluster, but not from outside the cluster.
# After creation, view the details to get the internal IP address of the nginx pod
~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-test-5bbfddf46b-bgjtz 1/1 Running 0 4m21s 10.244.2.2 node02.dayi123.com <none> <none>
# Access the nginx service from any node inside the cluster
~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
(2) Creating and managing a service
After a pod is created, the service inside it can only be reached through the pod's address, and only from inside the cluster. When that pod fails, its controller creates a replacement pod, and the service then has to be reached through the new pod's address. Instead, we can create a service: whenever a new pod comes up, the service finds it through its label, so we only ever need to talk to the service.
# Delete the current pod
~]# kubectl delete pods nginx-test-5bbfddf46b-bgjtz
pod "nginx-test-5bbfddf46b-bgjtz" deleted
# After the deletion, listing the pods shows that a new pod has been created
~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-test-5bbfddf46b-w56l5 1/1 Running 0 34s 10.244.2.3 node02.dayi123.com <none> <none>
A service is created with "kubectl expose"; its usage can be viewed with "kubectl expose --help". After the service is created, the pod's service is still only reachable through the service's cluster-internal address.
# Create a service that selects the nginx-test label
~]# kubectl expose deployment nginx-test --name=nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
# View the created service
~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13h
nginx ClusterIP 10.110.225.133 <none> 80/TCP 2m49s
# The nginx service is now reachable through the service address; after the pod is deleted and recreated, the service address keeps working.
~]# curl 10.110.225.133
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
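For reference, the `kubectl expose` invocation above corresponds roughly to this Service manifest (a sketch; the cluster IP is assigned automatically on creation):

```yaml
# nginx-svc.yaml -- declarative equivalent of the kubectl expose command above
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    run: nginx-test     # matches the label on the nginx-test pods
  ports:
  - port: 80            # service port
    targetPort: 80      # container port
    protocol: TCP
```

Applying it with `kubectl apply -f nginx-svc.yaml` produces the same ClusterIP service.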
Once a service is created, it can also be reached by name from pods, provided the pod's DNS server is set to the CoreDNS service address; newly created pods all get CoreDNS as their DNS server. We can start a client pod to test this.
# Get the CoreDNS service address
~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 14h
# Start a client pod
~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never
# Check the DNS server address inside the pod's container
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
# Access the service by name
/ # wget -O - -q nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Which pods a service selects is managed through pod labels: a pod's labels are set when the pod is created, and the labels a service selects are set when the service is created. Both can be viewed with kubectl.
# View the labels selected by the nginx service, along with its other details
~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx-test
Annotations: <none>
Selector: run=nginx-test
Type: ClusterIP
IP: 10.110.225.133
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.3:80
Session Affinity: None
Events: <none>
# View the pod labels
~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
client 1/1 Running 0 21m run=client
nginx-test-5bbfddf46b-w56l5 1/1 Running 0 41m pod-template-hash=5bbfddf46b,run=nginx-test
CoreDNS resolves service names in real time: after a service is recreated, or its IP address is changed, the service in its pods can still be reached by the service name.
# Delete and recreate the service named nginx
~]# kubectl delete svc nginx
service "nginx" deleted
~]# kubectl expose deployment nginx-test --name=nginx
service/nginx exposed
# Get the IP address of the newly created service
~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
nginx ClusterIP 10.98.192.150 <none> 80/TCP 9s
(3) Scaling pods out and in
After pods are created, their number can be scaled up so the service can handle more requests under heavy load, and scaled down again to save resources when the load drops. Both operations are done online and do not affect the running service.
# Scale out the pods
~]# kubectl scale --replicas=5 deployment nginx-test
deployment.extensions/nginx-test scaled
# View the pods after scaling out
~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 59m
nginx-test-5bbfddf46b-6kw49 1/1 Running 0 44s
nginx-test-5bbfddf46b-k6jh7 1/1 Running 0 44s
nginx-test-5bbfddf46b-pswmp 1/1 Running 1 9m19s
nginx-test-5bbfddf46b-w56l5 1/1 Running 1 79m
nginx-test-5bbfddf46b-wwtwz 1/1 Running 0 44s
# Scale the pods back down to 2
~]# kubectl scale --replicas=2 deployment nginx-test
deployment.extensions/nginx-test scaled
(4) Online upgrades and rollbacks
A service deployed on Kubernetes can be upgraded online, and rolled back online if the upgrade goes wrong.
# Get the pod names
~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-test-5bbfddf46b-6kw49 1/1 Running 0 32m
……
# View the pod details
~]# kubectl describe pods nginx-test-5bbfddf46b-6kw49
Name: nginx-test-5bbfddf46b-6kw49
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: node02.dayi123.com/192.168.16.172
Start Time: Tue, 25 Dec 2018 15:59:35 +0800
Labels: pod-template-hash=5bbfddf46b
run=nginx-test
Annotations: <none>
Status: Running
IP: 10.244.2.8
Controlled By: ReplicaSet/nginx-test-5bbfddf46b
Containers:
nginx-test:
Container ID: docker://5537c32a16b1dea8104b32379f1174585e287bf0e44f8a1d6c4bd036d5b1dfba
Image: nginx:1.14-alpine
……
# To make the change clearly visible, replace nginx with the httpd service during the update
~]# kubectl set image deployment nginx-test nginx-test=httpd:2.4-alpine
deployment.extensions/nginx-test image updated
# Watch the update progress in real time
~]# kubectl get deployment -w
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-test 4/5 5 4 5h36m
nginx-test 3/5 5 3 5h37m
nginx-test 4/5 5 4 5h38m
nginx-test 5/5 5 5 5h38m
nginx-test 5/5 5 5 5h38m
nginx-test 4/5 5 4 5h38m
nginx-test 5/5 5 5 5h38m
# Verify from the client pod after the update
/ # wget -O - -q nginx
<html><body><h1>It works!</h1></body></html>
# Verify from a Kubernetes node
~]# curl 10.98.192.150
<html><body><h1>It works!</h1></body></html>
# Roll back to the original nginx after the update
~]# kubectl rollout undo deployment nginx-test
deployment.extensions/nginx-test rolled back
# Watch the rollback progress in real time
~]# kubectl get deployment -w
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-test 4/5 5 4 5h48m
nginx-test 5/5 5 5 5h48m
nginx-test 5/5 5 5 5h48m
nginx-test 4/5 5 4 5h48m
. . . . .
# Verify after the rollback completes
~]# curl 10.98.192.150
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
(5) Exposing pod services to clients outside the cluster
After pods and services are created, neither the pod address nor the service address is reachable from outside the cluster. To reach a pod's service from outside, change the service type to NodePort; the corresponding NAT rules are then added automatically, and the service becomes reachable through the address of any node.
# Edit the service configuration
~]# kubectl edit svc nginx
. . . . . .
spec:
clusterIP: 10.98.192.150
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: nginx-test
sessionAffinity: None
type: NodePort
# After the change, check which ports the node is listening on
~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1703/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 100485/kube-proxy
tcp 0 0 127.0.0.1:41101 0.0.0.0:* LISTEN 1703/kubelet
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 849/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 937/master
tcp6 0 0 :::10250 :::* LISTEN 1703/kubelet
tcp6 0 0 :::31438 :::* LISTEN 100485/kube-proxy
After the change, the node is listening on an extra port, 31438; the service in the pod can now be reached from outside the cluster through any node's address and that port.
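For reference, the same NodePort service expressed as a full manifest (a sketch; without an explicit nodePort field, Kubernetes assigns one from the default 30000-32767 range, which is where 31438 came from):

```yaml
# nginx-nodeport.yaml -- sketch of a NodePort service for the nginx-test pods
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx-test
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 80    # container port
    nodePort: 31438   # optional: pin the external port instead of letting it be assigned
```

Pinning nodePort keeps the external port stable across recreations of the service, at the cost of failing if the port is already taken.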