Theory + Practice: Setting Up the Internal DNS Service in K8S and the Five Types of Controllers

Troubleshooting

Status                Description
pending               The pod has been submitted to Kubernetes, but it cannot be created for some reason, e.g. the image is slow to download or scheduling failed
running               The pod has been bound to a node and all of its containers have been created; at least one container is running, starting, or restarting
succeeded/completed   All containers in the pod have terminated successfully and will not be restarted
failed                All containers in the pod have terminated, and at least one terminated in failure, i.e. it exited with a non-zero status or was killed by the system
unknown               The apiserver cannot obtain the pod's status for some reason, usually because the master failed to communicate with the kubelet on the pod's node

Troubleshooting approach (a combined example follows the list):

  • Check pod events

kubectl describe type name_prefix

  • Check pod logs (the main thing to look at for a Failed pod)

kubectl logs pod_name

  • Enter the pod (when its status is Running but the service is not responding)

kubectl exec -it pod_name bash
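
As a combined example (the pod name mypod here is hypothetical; substitute a real name from kubectl get pods):

kubectl get pods                   # find the pod and its current status
kubectl describe pod mypod         # check the events (scheduling, image pulls, etc.)
kubectl logs mypod                 # read the container logs, especially for a Failed pod
kubectl exec -it mypod bash        # enter a Running pod whose service is not responding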

1: Types of Controllers

Controllers: also called workloads; they include the following types

1. Deployment: stateless deployment

2. StatefulSet: stateful deployment

3. DaemonSet: deployed once, and every node runs a copy

4. Job: one-off task

5. CronJob: periodic task

1.1 The relationship between pods and controllers

Controllers: objects that manage and run containers in the cluster; they are associated with pods through a label selector.

Pods rely on controllers for application operations such as scaling and upgrades.
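
For example (a minimal sketch; the app=nginx label matches the Deployment used later in this article), the same label selector can be queried from the command line to see which pods a controller owns:

kubectl get pods -l app=nginx      # list the pods selected by the label app=nginx
kubectl get pods --show-labels     # show every pod's labels to verify the linkage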


2: Deployment

Deploys stateless applications

Use case: web services, e.g. Tomcat

2.1 Deployment overview

  • A Deployment manages Pods and ReplicaSets
  • Provides rollout, replica configuration, rolling upgrades and rollbacks
  • Provides declarative updates, for example updating only to a new image

2.2 Demo

2.2.1 Write the YAML file

[root@master1 gsy]# vim nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.15.4
        ports:
        - containerPort: 80

2.2.2 Create the pods

[root@master1 gsy]# kubectl create -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created
[root@master1 gsy]# kubectl get pods -w
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running   0          29s
nginx-deployment-78cdb5b557-d6wlp   1/1     Running   0          29s
nginx-deployment-78cdb5b557-lt52n   1/1     Running   0          29s
pod9                                1/1     Running   0          41h
^C[root@master1 gsy]# kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-78cdb5b557-c2cwv   1/1     Running   0          14m
pod/nginx-deployment-78cdb5b557-d6wlp   1/1     Running   0          14m
pod/nginx-deployment-78cdb5b557-lt52n   1/1     Running   0          14m
pod/pod9                                1/1     Running   0          42h

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   14d

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3         3         3            3           14m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       14m

2.2.3 Inspect the controller's parameters

The parameters can be inspected in two ways: with describe, or by opening the object with edit.

Generally speaking, edit shows more detail.

[root@master1 gsy]# kubectl describe deploy nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 15 May 2020 08:42:18 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx1:
    Image:        nginx:1.15.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-78cdb5b557 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled up replica set nginx-deployment-78cdb5b557 to 3
[root@master1 gsy]# kubectl edit deploy nginx-deployment
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2020-05-15T00:42:18Z
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "897701"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: ee238868-9644-11ea-a668-000c29db840b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:	#match labels
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent
        name: nginx1
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2020-05-15T00:42:30Z
    lastUpdateTime: 2020-05-15T00:42:30Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2020-05-15T00:42:18Z
    lastUpdateTime: 2020-05-15T00:42:30Z
    message: ReplicaSet "nginx-deployment-78cdb5b557" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

Rolling update mechanism: new pods are created first, then old ones are shut down, replaced one at a time.

  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate

matchLabels: the labels to match

maxSurge: during an update, at most 25% more pods than the desired replica count may be created

maxUnavailable: during an update, at most 25% of the desired pods may be unavailable

2.2.4 View the controller's revision history

Rollbacks are performed based on this history.

[root@master1 gsy]#  kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>
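
As a hedged sketch (the 1.16.0 tag is only an example target image), a rolling update and a rollback could be driven like this:

kubectl set image deployment/nginx-deployment nginx1=nginx:1.16.0   # change the container image; triggers a rolling update
kubectl rollout status deployment/nginx-deployment                  # watch the update proceed pod by pod
kubectl rollout history deployment/nginx-deployment                 # a new revision should now appear
kubectl rollout undo deployment/nginx-deployment --to-revision=1    # roll back to revision 1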

3: StatefulSet

Deploys stateful applications

Gives each pod an independent lifecycle, preserving pod startup order and uniqueness

Stable, unique network identifiers and persistent storage (for example, etcd keeps node addresses in its configuration file; if those addresses change, the cluster stops working properly)

Ordered operations: pods can be deployed, scaled, deleted and terminated in a stable order (for example, in a MySQL master/slave setup the master is started first, then the slaves)

Ordered rolling updates

Use cases: databases and other services that need individual identity

Official documentation: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

3.1 Differences between Deployment and StatefulSet

  • Stateless

1. The Deployment controller treats all pods as identical

2. There is no requirement on pod startup order

3. It does not matter which node a pod runs on

4. Pods can be scaled up and down freely

  • Stateful

1. Instances differ from one another; each has its own identity and metadata, e.g. etcd, ZooKeeper

(ZooKeeper is widely used with microservices)

2. Instances are not interchangeable, and the application relies on external storage

3.2 Regular Service vs. headless Service

Regular Service: an access policy for a group of pods; it provides a cluster IP for communication inside the cluster, plus load balancing and service discovery.

Headless Service: needs no cluster IP; it binds directly to the IPs of the individual pods.

Headless Services are typically used with StatefulSet stateful deployments.

3.3 First create a regular Service for the pods of the earlier Deployment

[root@master1 gsy]# vim nginx-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

The cluster IP is the address used for communication inside the cluster

[root@master1 gsy]# kubectl create -f nginx-service.yaml 
service/nginx-service created
[root@master1 gsy]# kubectl get service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP        14d
nginx-service   NodePort    10.0.0.31    <none>        80:37384/TCP   11s

3.3.1 Test cluster-internal communication from the nodes

[root@node01 ~]# curl 10.0.0.31
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

One node turned out to be unable to reach the service; restart flanneld and docker on that node.

[root@node02 ~]#  curl 10.0.0.31
^C
[root@node02 ~]# systemctl restart flanneld.service
[root@node02 ~]# systemctl restart docker
[root@node02 system]#  curl 10.0.0.31
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Then the other node, node01, could no longer reach it:

[root@node01 ~]# curl 10.0.0.31
^C
[root@node01 ~]# 

3.4 Create a headless Service

Because pods created by the StatefulSet controller get dynamic IP addresses, and a changing IP would break normal use of the service, access is usually bound to DNS names instead, which requires installing a DNS service.

We will then create the pod resources for the StatefulSet.

Headless service: no cluster IP is needed; the service binds directly to the IPs of the individual pods.

[root@master1 gsy]# vim headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None	#this is the core of a headless service
  selector:
    app: nginx
[root@master1 gsy]# kubectl apply -f headless.yaml 
service/nginx-headless created
[root@master1 gsy]# kubectl get service
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        14d
nginx-headless   ClusterIP   None         <none>        80/TCP         4s
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   14m

3.5 Check the services; one now has no cluster IP

[root@master1 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d
nginx-headless   ClusterIP   None         <none>        80/TCP         45h
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   46h

3.6 Set up the DNS service

Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client pod's DNS search list includes the pod's own namespace and the cluster's default domain.

Suppose a Service named foo is defined in the Kubernetes namespace bar. A pod running in namespace bar can find that Service simply by doing a DNS query for foo. A pod running in namespace quux can find it by querying foo.bar.

Starting with Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. However, some Kubernetes installer tools may still install kube-dns by default; refer to your installer's documentation to see which DNS server it installs.

CoreDNS is deployed as a Kubernetes Service exposed on a static IP. Both the CoreDNS and kube-dns Services are named kube-dns in the metadata.name field. This is done for greater interoperability with workloads that rely on the legacy kube-dns Service name to resolve in-cluster addresses, and it abstracts away which DNS provider actually runs behind that common endpoint.

The kubelet passes the DNS server to each container with the --cluster-dns=<dns-service-ip> flag.

If a pod's dnsPolicy is set to "default", it inherits the name-resolution configuration of the node it runs on; the pod's DNS resolution should then match the node's.

If you do not want that, or you want a different DNS configuration for pods, you can use the kubelet's --resolv-conf flag. Set this flag to "" to prevent pods from inheriting DNS, or set it to a valid file path to use a file other than /etc/resolv.conf for DNS inheritance.
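
As an illustration only (a minimal sketch, separate from the CoreDNS deployment below, with a hypothetical pod name), a single pod can also carry its own DNS settings through dnsPolicy and dnsConfig:

apiVersion: v1
kind: Pod
metadata:
  name: dns-example              # hypothetical name
spec:
  containers:
  - name: test
    image: busybox:1.28.4
    command: ["sleep", "3600"]
  dnsPolicy: "None"              # do not inherit DNS from the cluster or the node
  dnsConfig:
    nameservers:
    - 10.0.0.2                   # the cluster DNS address used later in this article
    searches:
    - default.svc.cluster.local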

3.7 Create the DNS service

[root@master1 ~]# cp /abc/k8s/coredns.yaml .
[root@master1 ~]# cat coredns.yaml

apiVersion: v1
kind: ServiceAccount	#provides an identity (a system account) for processes in pods and for external users
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole	#a ClusterRole grants access to resources cluster-wide
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding	#a binding attaches subjects (here the coredns ServiceAccount)
							#to a role, so those subjects gain the permissions defined by that role
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
#CoreDNS is a modular, pluggable DNS server; each plugin adds new functionality to CoreDNS.
#It is configured through the Corefile, the CoreDNS configuration file.
#Cluster administrators can modify the ConfigMap holding the CoreDNS Corefile to change how service discovery works.
#In Kubernetes, CoreDNS is installed with the default Corefile configuration shown below.
#The Corefile configuration includes the following CoreDNS plugins:
#  errors: errors are logged to stdout.
#  health: CoreDNS health is reported at http://localhost:8080/health.
#  kubernetes: CoreDNS answers DNS queries based on the IPs of Kubernetes Services and Pods.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure	#provided for compatibility with kube-dns
            #the pods verified option returns an A record only if a pod with a matching IP exists in the same namespace;
            #if you do not use pod records, the pods disabled option can be used
            upstream	#used to resolve services that point to external hosts
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #CoreDNS metrics are exposed in Prometheus format at http://localhost:9153/metrics.
        proxy . /etc/resolv.conf
        #any query outside the Kubernetes cluster domain is forwarded to the predefined resolvers (/etc/resolv.conf).
        cache 30
        #enable a frontend cache
        loop
        #detects simple forwarding loops; if a loop is found, the CoreDNS process is aborted.
        reload
        #allows the changed Corefile to be reloaded automatically; after editing the ConfigMap, wait about two minutes for the change to take effect.
        loadbalance
        #a round-robin DNS load balancer that randomizes the order of A, AAAA and MX records in the answer.
    }
#Modify this ConfigMap to change the default CoreDNS behaviour.
#CoreDNS can configure stub domains and upstream nameservers using the proxy plugin.
#See the official documentation for configuring stub domains and upstream nameservers with CoreDNS.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@master1 ~]# ls -l 
total 64
-rw-r--r--. 1 root root  191 May 13 14:45 1nodeselector.yaml
-rw-------. 1 root root 1935 Apr 30 08:53 anaconda-ks.cfg
-rwxr-xr-x. 1 root root 4290 May 17 08:49 coredns.yaml

3.8 Check the status after creation

[root@master1 ~]# kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master1 ~]# kubectl get all -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-56684f94d6-qfkf7                1/1     Running   0          24s
pod/kubernetes-dashboard-7dffbccd68-l4tcd   1/1     Running   2          8d

NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   24s
service/kubernetes-dashboard   NodePort    10.0.0.237   <none>        443:30001/TCP   8d

NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                1         1         1            1           24s
deployment.apps/kubernetes-dashboard   1         1         1            1           8d

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-56684f94d6                1         1         1       24s
replicaset.apps/kubernetes-dashboard-65f974f565   0         0         0       8d
replicaset.apps/kubernetes-dashboard-7dffbccd68   1         1         1       8d

3.9 Create a test pod resource

[root@master1 ~]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master1 ~]# kubectl create -f pod3.yaml 

3.10 Verify DNS resolution

[root@master1 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d
nginx-headless   ClusterIP   None         <none>        80/TCP         46h
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   46h
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
dns-test                            1/1     Running   0          47s

3.11 Resolve the kubernetes and nginx-service names

[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2

nslookup: can't resolve 'kubernetes'
/ # exit

3.12 Restart the flanneld component and docker on the nodes

[root@node01 ~]# systemctl restart flanneld
[root@node01 ~]# systemctl restart docker

Now it works:

[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-service
Address 1: 10.0.0.31 nginx-service.default.svc.cluster.local
/ # nslookup nginx-deployment-78cdb5b557-lt52n.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'nginx-deployment-78cdb5b557-lt52n.nginx'
^C[root@master1 ~]# kubectl get svc -o wide
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d   <none>
nginx-headless   ClusterIP   None         <none>        80/TCP         47h   app=nginx
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   47h   app=nginx

DNS can now resolve resource names.
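
The names resolve because every Service gets a record of the form <service>.<namespace>.svc.cluster.local; the short name works from pods in the same namespace. A hedged check of the full name, run from the test pod:

/ # nslookup nginx-service.default.svc.cluster.local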

3.13 Create the StatefulSet resource

A headless Service has no cluster IP; the pod IPs behind it are dynamic.

[root@master1 ~]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1  
kind: StatefulSet  
metadata:
  name: nginx-statefulset  
  namespace: default
spec:
  serviceName: nginx  
  replicas: 3  
  selector:
    matchLabels:  
       app: nginx
  template:  
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:latest  
        ports:
        - containerPort: 80  

3.14 Clean up the previous environment before creating it

[root@master1 ~]# rm -rf coredns.yaml 
[root@master1 ~]# kubectl delete -f .
[root@master1 ~]# kubectl apply -f statefulset.yaml 
service/nginx created
statefulset.apps/nginx-statefulset created
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running             2          2d2h
nginx-deployment-78cdb5b557-d6wlp   1/1     Running             1          2d2h
nginx-deployment-78cdb5b557-lt52n   1/1     Running             1          2d2h
nginx-statefulset-0                 0/1     ContainerCreating   0          12s
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running             2          2d2h
nginx-deployment-78cdb5b557-d6wlp   1/1     Running             1          2d2h
nginx-deployment-78cdb5b557-lt52n   1/1     Running             1          2d2h
nginx-statefulset-0                 1/1     Running             0          24s
nginx-statefulset-1                 0/1     ContainerCreating   0          11s
nginx-statefulset-1   1/1   Running   0     15s
nginx-statefulset-2   0/1   Pending   0     0s
nginx-statefulset-2   0/1   Pending   0     0s
nginx-statefulset-2   0/1   ContainerCreating   0     0s
nginx-statefulset-2   1/1   Running   0     4s
^C[root@master1 ~]# kubectl get svc,deploy
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        17d
service/nginx            ClusterIP   None         <none>        80/TCP         63s
service/nginx-headless   ClusterIP   None         <none>        80/TCP         2d
service/nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   2d

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deployment   3         3         3            3           2d2h
[root@master1 ~]# kubectl delete svc nginx-headless
service "nginx-headless" deleted
[root@master1 ~]# kubectl delete svc nginx-service
service "nginx-service" deleted

  • Pods are created in order

For a StatefulSet with N replicas, the pods are created in ordinal order {0 .. N-1}. Check the output with kubectl get in the first terminal.

Note: the nginx-statefulset-1 pod is not started until the nginx-statefulset-0 pod is Running and Ready.

Pods in a StatefulSet have a unique ordinal index and a stable network identity.

This identity is based on the unique ordinal index the StatefulSet controller assigns to each pod. Pod names take the form <statefulset name>-<ordinal index>.

Each pod has a stable hostname based on its ordinal index.
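
This can be checked directly (a sketch assuming the three replicas created above); each pod's hostname equals its pod name:

for i in 0 1 2; do kubectl exec nginx-statefulset-$i -- hostname; done
# expected to print nginx-statefulset-0, nginx-statefulset-1, nginx-statefulset-2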

3.15 Create a test DNS container

[root@master1 ~]# kubectl apply -f pod3.yaml 
pod/dns-test created
[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 172.17.88.5 nginx-statefulset-2.nginx.default.svc.cluster.local
Address 2: 172.17.88.4 nginx-statefulset-0.nginx.default.svc.cluster.local
Address 3: 172.17.57.2 172-17-57-2.nginx.default.svc.cluster.local
Address 4: 172.17.57.5 nginx-statefulset-1.nginx.default.svc.cluster.local
Address 5: 172.17.57.4 172-17-57-4.nginx.default.svc.cluster.local
Address 6: 172.17.88.3 172-17-88-3.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-0
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

^C
/ # nslookup nginx-statefulset-0.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-0.nginx
Address 1: 172.17.88.4 nginx-statefulset-0.nginx.default.svc.cluster.local

Note: you can delete a StatefulSet pod manually; K8S recreates it automatically to match the controller's replica count. If you resolve the name again afterwards, you will see that the IP address has changed.
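
A quick way to observe this (a sketch using the resources created above):

kubectl delete pod nginx-statefulset-0     # delete one StatefulSet pod by hand
kubectl get pods -o wide -w                # the controller recreates nginx-statefulset-0 with the same name,
                                           # usually with a different pod IP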

3.16 Check the network

[root@master1 ~]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
pod/dns-test                            1/1     Running   0          5m49s   172.17.57.6   192.168.247.143   <none>
pod/nginx-deployment-78cdb5b557-c2cwv   1/1     Running   2          2d2h    172.17.88.3   192.168.247.144   <none>
pod/nginx-deployment-78cdb5b557-d6wlp   1/1     Running   1          2d2h    172.17.57.2   192.168.247.143   <none>
pod/nginx-deployment-78cdb5b557-lt52n   1/1     Running   1          2d2h    172.17.57.4   192.168.247.143   <none>
pod/nginx-statefulset-0                 1/1     Running   0          8m50s   172.17.88.4   192.168.247.144   <none>
pod/nginx-statefulset-1                 1/1     Running   0          8m37s   172.17.57.5   192.168.247.143   <none>
pod/nginx-statefulset-2                 1/1     Running   0          8m22s   172.17.88.5   192.168.247.144   <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   17d     <none>
service/nginx        ClusterIP   None         <none>        80/TCP    8m50s   app=nginx

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
deployment.apps/nginx-deployment   3         3         3            3           2d2h   nginx1       nginx:1.15.4   app=nginx

NAME                                          DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       2d2h   nginx1       nginx:1.15.4   app=nginx,pod-template-hash=78cdb5b557

NAME                                 DESIRED   CURRENT   AGE     CONTAINERS   IMAGES
statefulset.apps/nginx-statefulset   3         3         8m50s   nginx        nginx:latest

3.17 Summary

One difference between a StatefulSet and a Deployment is that StatefulSet pods also carry an identity.

The three elements of that identity:

Domain name: nginx-statefulset-0.nginx

Hostname: nginx-statefulset-0

Storage (PVC)

The ordinal number distinguishes each unique identity.

StatefulSet is suited to situations where pod IPs keep changing.

Terminating pods in order

The controller deletes one pod at a time, in reverse ordinal order, and waits for each pod to shut down completely before deleting the next.
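
For example (a hedged sketch using the StatefulSet created above), scaling down makes the reverse order visible:

kubectl scale statefulset nginx-statefulset --replicas=1   # nginx-statefulset-2 is terminated first, then nginx-statefulset-1
kubectl get pods -w                                        # each pod is fully shut down before the next one is deleted
kubectl scale statefulset nginx-statefulset --replicas=3   # scale back up; pods 1 and 2 are recreated in order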

Updating a StatefulSet

Pods in a StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each pod and waits for it to become Running and Ready before updating the next one. Note that although the controller will not update the next pod until its ordinal successor is Running and Ready, it will still recreate any pod that fails during the update, using that pod's current version: pods that have already received the update are restored to the updated version, while pods that have not are restored to the previous version. In this way the controller tries to keep the application healthy and the update consistent in the face of intermittent failures.

4: DaemonSet

4.1 Overview

Runs one pod on every node; a newly added node automatically gets a pod as well.

Use cases: agents and monitoring, e.g. Logstash in the ELK stack or Filebeat in the EFK stack.

Official documentation: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

4.2 Demo

volumeMounts define the mount points inside the pod (a log-collector sketch that uses volumeMounts follows after 4.2.1).

[root@master1 ~]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

4.2.1 Create the DaemonSet resource

[root@master1 ~]# kubectl apply -f daemonset.yaml 
daemonset.apps/nginx-daemonset created
^C[root@master1 ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE              NOMINATED NODE
dns-test                            1/1     Running   0          9m9s   172.17.57.6   192.168.247.143   <none>
nginx-daemonset-8mdhj               1/1     Running   0          41s    172.17.88.6   192.168.247.144   <none>
nginx-daemonset-ggdv2               1/1     Running   0          41s    172.17.57.7   192.168.247.143   <none>
nginx-deployment-78cdb5b557-c2cwv   1/1     Running   2          2d2h   172.17.88.3   192.168.247.144   <none>
nginx-deployment-78cdb5b557-d6wlp   1/1     Running   1          2d2h   172.17.57.2   192.168.247.143   <none>
nginx-deployment-78cdb5b557-lt52n   1/1     Running   1          2d2h   172.17.57.4   192.168.247.143   <none>
nginx-statefulset-0                 1/1     Running   0          12m    172.17.88.4   192.168.247.144   <none>
nginx-statefulset-1                 1/1     Running   0          11m    172.17.57.5   192.168.247.143   <none>
nginx-statefulset-2                 1/1     Running   0          11m    172.17.88.5   192.168.247.144   <none>

The DaemonSet creates one pod on every node.
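
The demo above uses a plain nginx image; a real log-collection DaemonSet (the Logstash/Filebeat use case from 4.1) would typically mount the host's log directory. A minimal hedged sketch, with the name and image chosen only for illustration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                       # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: filebeat
        image: elastic/filebeat:6.8.0   # example tag; adjust to the image available in your registry
        volumeMounts:
        - name: varlog
          mountPath: /var/log           # the mount point inside the pod (see the volumeMounts note in 4.2)
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                # the host directory exposed to the pod on every node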

5: Job

Jobs come in two kinds: ordinary tasks (Job) and scheduled tasks (CronJob).

5.1 Job: runs once

Use cases: offline data processing, video transcoding, and similar workloads

Big-data analysis and computation services

Official documentation: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

5.1.1 Demo

The default retry count is 6; here it is changed to 4. With restartPolicy: Never, a failed pod is recreated when an error occurs, so a limit needs to be set.

backoffLimit caps the retries at 4.

[root@master1 ~]#  vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

5.1.2 Create the Job

[root@master1 ~]# kubectl apply -f job.yaml 
job.batch/pi created
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
pi-9jf2w                            0/1     ContainerCreating   0          4s
pi-9jf2w   1/1   Running   0     62s
pi-9jf2w   0/1   Completed   0     76s
^C[root@master1 ~]# 
^C[root@master1 ~]# kubectl logs pi-9jf2w
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901

5.1.3 Check the Job resource

[root@master1 ~]# kubectl get job
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           76s        2m57s
#kubectl get all would also show the Job resource at this point

5.1.4 Delete the Job resource

[root@master1 ~]# kubectl delete -f job.yaml 
job.batch "pi" deleted

5.2 CronJob: periodic tasks

Similar to crontab on Linux

Use cases: scheduled notifications, scheduled backups

https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/

5.2.1 Demo

Create a task that prints hello every minute

[root@master1 ~]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

5.2.2 Check the CronJob resource

[root@master1 ~]# kubectl get cronjob
No resources found.
[root@master1 ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created
[root@master1 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        <none>          2s

5.2.3 Watch pod status with -w

[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          27m
hello-1589686080-qtc4c              0/1     ContainerCreating   0          3s
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          28m
hello-1589686080-qtc4c              0/1     ContainerCreating   0          7s
hello-1589686080-qtc4c   1/1   Running   0     11s
hello-1589686080-qtc4c   0/1   Completed   0     12s
hello-1589686140-tj66f   0/1   Pending   0     0s
hello-1589686140-tj66f   0/1   ContainerCreating   0     0s
hello-1589686140-tj66f   0/1   Completed   0     9s
hello-1589686200-phc6t   0/1   Pending   0     0s
hello-1589686200-phc6t   0/1   ContainerCreating   0     0s
hello-1589686200-phc6t   0/1   Completed   0     8s
hello-1589686260-dg6gp   0/1   Pending   0     0s
hello-1589686260-dg6gp   0/1   ContainerCreating   0     0s
hello-1589686260-dg6gp   0/1   Completed   0     13s
hello-1589686080-qtc4c   0/1   Terminating   0     3m21s
hello-1589686320-n9r2f   0/1   Pending   0     0s
hello-1589686320-n9r2f   0/1   ContainerCreating   0     0s

^C[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          32m
hello-1589686140-tj66f              0/1     Completed           0          3m16s
hello-1589686200-phc6t              0/1     Completed           0          2m15s
hello-1589686260-dg6gp              0/1     Completed           0          75s
hello-1589686320-n9r2f              0/1     ContainerCreating   0          15s

5.2.4 Check the output in the logs

[root@master1 ~]# kubectl logs hello-1589686380-flrtj
Sun May 17 03:33:24 UTC 2020
Hello from the Kubernetes cluster
[root@master1 ~]# kubectl logs hello-1589686440-ngj59 
Sun May 17 03:34:19 UTC 2020
Hello from the Kubernetes cluster
[root@master1 ~]# kubectl delete -f cronjob.yaml 
cronjob.batch "hello" deleted
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
dns-test                            1/1     Running   0          35m

Note: use CronJobs with care; delete them when you are done, otherwise the finished pods keep piling up and consuming resources.
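
One hedged way to keep finished pods from piling up is to cap the CronJob's job history; these fields could be added to the spec in cronjob.yaml above:

spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3   # how many completed Jobs to keep (the Kubernetes default is 3)
  failedJobsHistoryLimit: 1       # how many failed Jobs to keep (the Kubernetes default is 1)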
