Installing Kubernetes with kubeadm and Basic Usage of Kubernetes

1. Pre-installation Preparation

(1) Environment

IP Address        Hostname               OS Version   Docker Version   K8s Version
192.168.16.160    master01.dayi123.com   CentOS 7.5   CE 18.09.0       1.13.1
192.168.16.171    node01.dayi123.com     CentOS 7.5   CE 18.09.0       1.13.1
192.168.16.172    node02.dayi123.com     CentOS 7.5   CE 18.09.0       1.13.1

(2) System preparation

1) Configure the hosts file

# Add the following hostname resolution entries to /etc/hosts on all three hosts
192.168.16.160 master01 master01.dayi123.com
192.168.16.171 node01 node01.dayi123.com
192.168.16.172 node02 node02.dayi123.com

2) Disable SELinux and the firewall

# Disable firewalld and SELinux on all three hosts
]# systemctl stop firewalld
]# systemctl disable firewalld
]# setenforce 0
]# sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config

3) Configure the Docker and Kubernetes repositories

# Configure the Docker repository on all three hosts
~]# cd /etc/yum.repos.d/
~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the Kubernetes repository on all three hosts
~]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
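gpgcheck is disabled above for simplicity. If you prefer package signature verification, the mirror also publishes signing keys; the key URLs below are an assumption based on the Aliyun mirror layout:

gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg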

2. Installing the Kubernetes Master Node

When installing Kubernetes with kubeadm, the master node only needs docker-ce, kubeadm, and the kubelet and kubectl components; all of the other components are built as containers, which kubeadm pulls and runs during initialization.

(1) Install and start the components

# Install the required components
~]# yum install docker-ce kubelet kubeadm kubectl
# Start docker and enable kubelet
~]# systemctl daemon-reload
~]# systemctl stop docker
~]# systemctl start docker
~]# systemctl enable kubelet
# Allow bridged traffic to pass through iptables
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables 
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables
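The two /proc writes above do not survive a reboot; to make them persistent, the same settings can go into a sysctl drop-in file, for example:

# Persist the bridge settings across reboots
~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
~]# sysctl --system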

(2) Initialize the master node with kubeadm

The parameters available for kubeadm initialization can be viewed with the "kubeadm init --help" command; the main options are:

--kubernetes-version: set the Kubernetes version

--pod-network-cidr: set the pod network

--service-cidr: set the service network

--ignore-preflight-errors=Swap: ignore the preflight error raised when swap is enabled (alternatively, disable swap outright, as sketched below)
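If you prefer to turn swap off entirely rather than ignore the preflight error, a minimal sketch (the sed pattern assumes the /etc/fstab swap entry contains the word "swap"):

# Turn swap off now and comment out the fstab entry so it stays off after reboot
~]# swapoff -a
~]# sed -i '/ swap / s/^/#/' /etc/fstab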

# Configure kubelet to tolerate swap (this does not turn swap off)
]# cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Initialize the master node
~]# kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp: lookup www.ik83.io on 114.114.114.114:53: no such host
, error: exit status 1
……
242 packets transmitted, 0 received, 100% packet loss, time 241080ms

During initialization, the required docker images are pulled from https://k8s.gcr.io/v2/, a site that cannot be reached from mainland China, which causes the error above. However, Docker Hub mirrors these Google images, so we can pull them from the Docker Hub mirror repositories and re-tag them afterwards.

# Pull the corresponding images from Docker Hub and re-tag them
~]# docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
~]# docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
~]# docker pull mirrorgooglecontainers/pause:3.1
~]# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
~]# docker pull mirrorgooglecontainers/etcd:3.2.24
~]# docker tag mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
~]# docker pull coredns/coredns:1.2.6
~]# docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
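The same pull-and-retag steps can be scripted; a minimal sketch covering the image list above (coredns lives in its own repository, so it is handled separately):

# Pull each image from the Docker Hub mirror and re-tag it as k8s.gcr.io
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
           kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
    docker pull mirrorgooglecontainers/${img}
    docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img}
done
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6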

Once the images have been pulled manually, re-run the master initialization.

# Re-initialize
~]# kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
……
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.16.160:6443 --token p1oy6h.ynakxpzoco505z1h --discovery-token-ca-cert-hash sha256:f85d1778acf19651b092f4ebba09e1ed0d7d4c853999ab54de00f878d61367ac

After initialization completes, run the following as a regular user, as the output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(3) Install the flannel network add-on

(The flannel GitHub repository: https://github.com/coreos/flannel/)

# Install the flannel network add-on. Once the command below succeeds, the flannel docker image is pulled in the background, which can take a while.
~]#  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
# After the flannel network is installed, the master node holds the following docker images
~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.13.1             fdb321fd30a0        10 days ago         80.2MB
k8s.gcr.io/kube-apiserver            v1.13.1             40a63db91ef8        10 days ago         181MB
k8s.gcr.io/kube-controller-manager   v1.13.1             26e6f1db2a52        10 days ago         146MB
k8s.gcr.io/kube-scheduler            v1.13.1             ab81d7360408        10 days ago         79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        6 weeks ago         40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 months ago        220MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        11 months ago       44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        12 months ago       742kB

3. Installing the Kubernetes Nodes

(1) Install kubeadm and start the services

Once the master node is installed and initialized, each node only needs docker-ce and kubeadm installed, after which it can be joined to the cluster.

# Install docker-ce and the kubeadm packages
~]# yum install docker-ce kubelet kubeadm kubectl
# Start docker and enable kubelet
~]# systemctl daemon-reload
~]# systemctl stop docker
~]# systemctl start docker
~]# systemctl enable kubelet
# Allow bridged traffic to pass through iptables
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables 
~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables

(2) Prepare the nodes and join them to the cluster

When the join command is run, the node also needs to pull the k8s.gcr.io/kube-proxy and k8s.gcr.io/pause images from https://k8s.gcr.io/v2/, plus the quay.io/coreos/flannel image. Since https://k8s.gcr.io/v2/ is unreachable, those k8s.gcr.io images must likewise be pulled from Docker Hub and re-tagged, or exported from the master node with the "docker save" command and imported on the node with the "docker load" command.

# Pull the required images and re-tag them
~]# docker pull mirrorgooglecontainers/pause:3.1
~]# docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
~]# docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
~]# docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
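Alternatively, the images can be copied from the master instead of pulled from Docker Hub; a sketch using "docker save" and "docker load" (the scp destination is illustrative):

# On the master: bundle the images the node needs and copy the tarball over
~]# docker save k8s.gcr.io/kube-proxy:v1.13.1 k8s.gcr.io/pause:3.1 -o node-images.tar
~]# scp node-images.tar node01:/root/
# On the node: import the images from the tarball
~]# docker load -i node-images.tar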

Once the images are in place, the node can be joined to the cluster.

# Join the node to the cluster; the command below was printed when the master initialization completed
~]# kubeadm join 192.168.16.160:6443 --token p1oy6h.ynakxpzoco505z1h --discovery-token-ca-cert-hash sha256:f85d1778acf19651b092f4ebba09e1ed0d7d4c853999ab54de00f878d61367ac --ignore-preflight-errors=Swap
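The token embedded in the join command expires (after 24 hours by default); if it has lapsed, a fresh join command can be generated on the master:

# Run on the master to print a new, valid join command
~]# kubeadm token create --print-join-command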

(3) Verify the cluster from the master node

# Check the status of each node
~]# kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
master01.dayi123.com   Ready    master   8h      v1.13.1
node01.dayi123.com     Ready    <none>   7h3m    v1.13.1
node02.dayi123.com     Ready    <none>   6h41m   v1.13.1
# List the Kubernetes namespaces
~]# kubectl get ns
NAME          STATUS   AGE
default       Active   8h
kube-public   Active   8h
kube-system   Active   8h
# Check the health status of the cluster components
~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}
# View the running status and details of each cluster system component
~]# kubectl get pods -n kube-system -o wide
NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-5kh2t                       1/1     Running   0          6h43m   10.244.0.3       master01.dayi123.com   <none>           <none>
coredns-86c58d9df4-cwwjp                       1/1     Running   0          8h      10.244.0.2       master01.dayi123.com   <none>           <none>
etcd-master01.dayi123.com                      1/1     Running   0          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
kube-apiserver-master01.dayi123.com            1/1     Running   0          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
kube-controller-manager-master01.dayi123.com   1/1     Running   1          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
kube-flannel-ds-amd64-4grnp                    1/1     Running   67         6h48m   192.168.16.171   node01.dayi123.com     <none>           <none>
kube-flannel-ds-amd64-5nz7b                    1/1     Running   0          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
kube-flannel-ds-amd64-hg2f6                    1/1     Running   56         6h26m   192.168.16.172   node02.dayi123.com     <none>           <none>
kube-proxy-65zh5                               1/1     Running   0          6h26m   192.168.16.172   node02.dayi123.com     <none>           <none>
kube-proxy-9cvkd                               1/1     Running   0          6h48m   192.168.16.171   node01.dayi123.com     <none>           <none>
kube-proxy-pqpjt                               1/1     Running   0          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
kube-scheduler-master01.dayi123.com            1/1     Running   1          8h      192.168.16.160   master01.dayi123.com   <none>           <none>
# View cluster endpoint information
~]# kubectl cluster-info
Kubernetes master is running at https://192.168.16.160:6443
KubeDNS is running at https://192.168.16.160:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

4. Basic Usage of Kubernetes

Once Kubernetes is installed, some simple operations can be performed from the master node through the kubectl client.

(1) Create a pod

The smallest deployable unit in Kubernetes is the pod. Once the cluster is up, pods can be created to run the required services.

# Create a deployment running nginx, in dry-run mode
~]# kubectl run nginx-test --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-test created (dry run)
# Create and run a pod running the nginx service
~]# kubectl run nginx-test --image=nginx:1.14-alpine --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-test created
# The nginx pod is being created
~]# kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
nginx-test-5bbfddf46b-bgjtz     0/1     ContainerCreating   0          33s
# View the deployment that was created
~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test     1/1     1            1           3m20s

Once the pod hosting the service is created and running, the service can be reached from any node inside the cluster, but not from outside the cluster.

# After creation, view the details to get the internal IP of the pod running nginx
~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
nginx-test-5bbfddf46b-bgjtz     1/1     Running   0          4m21s   10.244.2.2   node02.dayi123.com   <none>           <none>
# Access the nginx service from any node inside the Kubernetes cluster
~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

(2) Create and manage a service

After a pod is created, the service inside it can only be reached within the cluster through the pod's own address; when the pod fails, its controller creates a replacement pod, and clients would have to look up the new pod's address to keep using the service. To avoid this, we can create a service: whenever a new pod comes up, the service attaches to it through the pod's labels, so clients only ever need to talk to the service.

# Delete the current pod
~]# kubectl delete pods nginx-test-5bbfddf46b-bgjtz
pod "nginx-test-5bbfddf46b-bgjtz" deleted
# After the deletion, listing the pods shows that a new pod has been created
~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
nginx-test-5bbfddf46b-w56l5     1/1     Running   0          34s     10.244.2.3   node02.dayi123.com   <none>           <none>

A service is created with the "kubectl expose" command; see "kubectl expose --help" for details. After the service is created, the service address is still only reachable from inside the cluster.

# Create a service that selects the nginx-test label
~]# kubectl expose deployment nginx-test --name=nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
# View the created service
~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   13h
nginx        ClusterIP   10.110.225.133   <none>        80/TCP    2m49s
# nginx can now be reached directly through the service address; even after a pod is deleted and recreated, the service in the pods stays reachable through the service.
~]# curl 10.110.225.133 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
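For reference, "kubectl expose" above is roughly equivalent to applying a declarative manifest like the following sketch; the selector matches the run=nginx-test label, which can be seen in the describe output further below:

# Apply an equivalent ClusterIP service definition from stdin
~]# kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    run: nginx-test
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
EOF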

Once a service exists, the pods behind it can be reached by the service name, provided the client's DNS server is set to the CoreDNS service address; newly created pods all get CoreDNS as their DNS server. We can start a client pod to test this.

# Look up the CoreDNS service address
~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   14h
# Start a client pod
~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never
# Check the DNS server configured inside the pod's container
/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
# Access the service by name
/ # wget -O - -q nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Which pods a service selects is driven by pod labels: pod labels are assigned when the pod is created, and the labels a service selects on are set when the service is created. Both can be inspected with kubectl.

# Show the labels the nginx service selects on, along with its other details
~]# kubectl describe svc nginx 
Name:              nginx
Namespace:         default
Labels:            run=nginx-test
Annotations:       <none>
Selector:          run=nginx-test
Type:              ClusterIP
IP:                10.110.225.133
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.2.3:80
Session Affinity:  None
Events:            <none>
# Show the pod labels
~]# kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
client                          1/1     Running   0          21m   run=client
nginx-test-5bbfddf46b-w56l5     1/1     Running   0          41m   pod-template-hash=5bbfddf46b,run=nginx-test
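Because selection is purely label-based, the same selector can be used to list exactly the pods the nginx service will pick up:

# List only the pods carrying the label the service selects on
~]# kubectl get pods -l run=nginx-test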

CoreDNS resolves service names in real time: even after a service is recreated or its IP address changes, the pods behind it remain reachable by the service name.

# Delete and recreate the service named nginx
~]# kubectl delete svc nginx
service "nginx" deleted
~]# kubectl expose deployment nginx-test --name=nginx
service/nginx exposed
# Look up the IP address of the new service
~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   14h
nginx        ClusterIP   10.98.192.150   <none>        80/TCP    9s

(3) Scaling pods out and in

After pods are created, the pod count can be scaled out so the service can handle more requests when traffic grows, and scaled back in to save resources when traffic drops. Both operations happen online and do not interrupt the running service.

# Scale out the number of pods
~]# kubectl scale --replicas=5 deployment nginx-test
deployment.extensions/nginx-test scaled
# View the pods after scaling out
~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
client                          1/1     Running   0          59m
nginx-test-5bbfddf46b-6kw49     1/1     Running   0          44s
nginx-test-5bbfddf46b-k6jh7     1/1     Running   0          44s
nginx-test-5bbfddf46b-pswmp     1/1     Running   1          9m19s
nginx-test-5bbfddf46b-w56l5     1/1     Running   1          79m
nginx-test-5bbfddf46b-wwtwz     1/1     Running   0          44s
# Scale the pod count back down to 2
~]# kubectl scale --replicas=2 deployment nginx-test 
deployment.extensions/nginx-test scaled
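Scaling can also be automated with "kubectl autoscale"; note that this requires a cluster metrics source (e.g. metrics-server), which is not installed in this walkthrough:

# Keep nginx-test between 2 and 5 replicas, targeting 80% CPU utilization
~]# kubectl autoscale deployment nginx-test --min=2 --max=5 --cpu-percent=80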

(4) Online upgrade and rollback

Once a service is deployed on Kubernetes, it can be upgraded online, and if the upgrade goes wrong it can be rolled back online as well.

# List the pod names
~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-test-5bbfddf46b-6kw49     1/1     Running   0          32m
……
# View the pod details
~]# kubectl describe pods nginx-test-5bbfddf46b-6kw49
Name:               nginx-test-5bbfddf46b-6kw49
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node02.dayi123.com/192.168.16.172
Start Time:         Tue, 25 Dec 2018 15:59:35 +0800
Labels:             pod-template-hash=5bbfddf46b
                    run=nginx-test
Annotations:        <none>
Status:             Running
IP:                 10.244.2.8
Controlled By:      ReplicaSet/nginx-test-5bbfddf46b
Containers:
  nginx-test:
    Container ID:   docker://5537c32a16b1dea8104b32379f1174585e287bf0e44f8a1d6c4bd036d5b1dfba
    Image:          nginx:1.14-alpine
……
# To make the change easy to verify, replace nginx with httpd during the update
~]# kubectl set image deployment nginx-test nginx-test=httpd:2.4-alpine
deployment.extensions/nginx-test image updated
# Watch the update progress in real time
~]# kubectl get deployment -w
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test     4/5     5            4           5h36m
nginx-test   3/5   5     3     5h37m
nginx-test   4/5   5     4     5h38m
nginx-test   5/5   5     5     5h38m
nginx-test   5/5   5     5     5h38m
nginx-test   4/5   5     4     5h38m
nginx-test   5/5   5     5     5h38m
# After the update, verify from the client pod
/ # wget  -O - -q nginx
<html><body><h1>It works!</h1></body></html>
# Verify from a Kubernetes node
~]# curl 10.98.192.150
<html><body><h1>It works!</h1></body></html>
# Roll back to the original nginx after the update
~]# kubectl rollout undo deployment nginx-test
deployment.extensions/nginx-test rolled back
# Watch the rollback progress in real time
~]# kubectl get deployment -w                 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test     4/5     5            4           5h48m
nginx-test   5/5   5     5     5h48m
nginx-test   5/5   5     5     5h48m
nginx-test   4/5   5     4     5h48m
. . . . .
# Verify after the rollback completes
~]# curl 10.98.192.150
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
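The rollout subcommands are also useful here: "kubectl rollout status" blocks until the current rollout finishes, and "kubectl rollout history" lists the recorded revisions that undo can return to:

# Wait for the in-flight rollout to complete
~]# kubectl rollout status deployment nginx-test
# List the deployment's recorded revisions
~]# kubectl rollout history deployment nginx-test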

(5) Make the service reachable from outside the cluster

After pods and services are created, neither the pod address nor the service address is reachable from outside the cluster. To expose a pod's service externally, change the service type to NodePort; kube-proxy then adds the corresponding NAT rules automatically, and the service becomes reachable through any node's address on the assigned port.

# Edit the service definition
~]# kubectl edit svc nginx
. . . . . .
spec:
  clusterIP: 10.98.192.150
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-test
  sessionAffinity: None
  type: NodePort
# After the change, check which ports the node is listening on
~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1703/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      100485/kube-proxy   
tcp        0      0 127.0.0.1:41101         0.0.0.0:*               LISTEN      1703/kubelet        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      849/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      937/master          
tcp6       0      0 :::10250                :::*                    LISTEN      1703/kubelet        
tcp6       0      0 :::31438                :::*                    LISTEN      100485/kube-proxy   

After the change, netstat shows the node listening on a new port, 31438; the service inside the pods can now be reached from outside through any node address on this port.
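The same type change can also be made non-interactively with "kubectl patch", after which the service is reachable externally on the assigned node port (31438 in the output above; the node IP below is node01 from the environment table):

# Switch the service to NodePort without opening an editor
~]# kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'
# From a machine outside the cluster, via any node address and the node port
~]# curl http://192.168.16.171:31438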

 
