Theory + Hands-on: Setting Up the Internal DNS Service in K8s and the Five Types of Controllers

Troubleshooting

Status               Description
pending              The pod creation request has been submitted to Kubernetes, but the pod cannot be created yet, for example because the image is slow to pull or scheduling has not succeeded.
running              The pod has been bound to a node and all of its containers have been created; at least one container is running, or is starting or restarting.
succeeded/completed  All containers in the pod terminated successfully and will not be restarted.
failed               All containers in the pod have terminated, and at least one terminated in failure, i.e. it exited with a non-zero status or was killed by the system.
unknown              For some reason the apiserver cannot obtain the pod's status, usually because the master failed to communicate with the kubelet on the pod's node.

How to approach it:

  • Check the pod's events

kubectl describe pod <pod_name>        # describe works for any resource type

  • Check the pod's logs (most useful when the pod is in the failed state)

kubectl logs pod_name

  • Exec into the pod (when the status is running but the service is not actually responding)

kubectl exec -it pod_name bash

1: Controller types

Controllers, also called workloads, come in the following types (a quick way to list all of them follows after the list):

1. Deployment — stateless deployments

2. StatefulSet — stateful deployments

3. DaemonSet — deploy once and every node runs a copy

4. Job — one-off tasks

5. CronJob — periodic tasks
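
To quickly check which controllers of each type exist in a cluster, all five workload resources can be listed in one command (standard kubectl, nothing specific to this lab):

kubectl get deployments,statefulsets,daemonsets,jobs,cronjobs --all-namespaces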

1.1 The relationship between pods and controllers

Controllers are the objects that manage and run containers on the cluster; they are associated with their pods through a label selector.

Pods rely on controllers for application operations such as scaling and upgrading, as the sketch below shows.
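
A minimal sketch of that association (standard Deployment fields, illustrative values): the controller's selector must match the labels carried by the pods in its template, otherwise the controller cannot find or manage the pods it creates.

spec:
  selector:
    matchLabels:
      app: nginx        # the controller looks for pods carrying this label
  template:
    metadata:
      labels:
        app: nginx      # must match the selector above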


2: Deployment

Deploys stateless applications.

Typical use case: web services such as Tomcat.

2.1 Deployment overview

  • A Deployment manages pods and ReplicaSets
  • It supports rollout, replica management, rolling upgrades, and rollbacks
  • It provides declarative updates, for example updating only the image

2.2 Demo

2.2.1 Write the YAML file

[root@master1 gsy]# vim nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.15.4
        ports:
        - containerPort: 80

2.2.2 Create the pods

[root@master1 gsy]# kubectl create -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created
[root@master1 gsy]# kubectl get pods -w
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running   0          29s
nginx-deployment-78cdb5b557-d6wlp   1/1     Running   0          29s
nginx-deployment-78cdb5b557-lt52n   1/1     Running   0          29s
pod9                                1/1     Running   0          41h
^C[root@master1 gsy]# kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-78cdb5b557-c2cwv   1/1     Running   0          14m
pod/nginx-deployment-78cdb5b557-d6wlp   1/1     Running   0          14m
pod/nginx-deployment-78cdb5b557-lt52n   1/1     Running   0          14m
pod/pod9                                1/1     Running   0          42h

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   14d

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3         3         3            3           14m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       14m

2.2.3 Inspect the controller's parameters

There are two ways to view the parameters: kubectl describe and kubectl edit.

Generally, edit shows a bit more detail.

[root@master1 gsy]# kubectl describe deploy nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 15 May 2020 08:42:18 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx1:
    Image:        nginx:1.15.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-78cdb5b557 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled up replica set nginx-deployment-78cdb5b557 to 3
[root@master1 gsy]# kubectl edit deploy nginx-deployment
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2020-05-15T00:42:18Z
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "897701"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: ee238868-9644-11ea-a668-000c29db840b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:	# match labels
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent
        name: nginx1
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2020-05-15T00:42:30Z
    lastUpdateTime: 2020-05-15T00:42:30Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2020-05-15T00:42:18Z
    lastUpdateTime: 2020-05-15T00:42:30Z
    message: ReplicaSet "nginx-deployment-78cdb5b557" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

Rolling update mechanism: new pods are created first, then old ones are shut down, replacing them one by one.

  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate

matchLabels: the labels the controller matches pods against.

maxSurge: during an update, at most 25% extra pods may be created above the desired replica count.

maxUnavailable: at most 25% of the desired pods may be unavailable at any time during the update.
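
As a hedged example of this strategy in action (the nginx:1.16.0 tag is only an assumed newer image for illustration), changing the image of the nginx1 container triggers a rolling update, and rollout status follows it until the new ReplicaSet is fully rolled out:

kubectl set image deployment/nginx-deployment nginx1=nginx:1.16.0
kubectl rollout status deployment/nginx-deployment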

2.2.4 View the controller's revision history

Rollbacks are performed against this history.

[root@master1 gsy]#  kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>
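
The revisions listed here are what a rollback falls back to. For example (assuming a newer revision exists to roll back from):

kubectl rollout undo deployment/nginx-deployment                    # back to the previous revision
kubectl rollout undo deployment/nginx-deployment --to-revision=1    # or back to a specific revision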

3: StatefulSet

Deploys stateful applications.

Gives each pod an independent lifecycle while preserving pod startup order and uniqueness.

Stable, unique network identifiers and persistent storage (for example etcd: if the node addresses in its configuration files change, the cluster stops working properly).

Ordered operation: pods can be deployed, scaled, deleted, and terminated in a stable order (for example a MySQL master/slave setup: start the master first, then the slaves).

Ordered, rolling updates.

Typical use case: databases and other services that need individual identity.

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

3.1 Differences between Deployment and StatefulSet

  • Stateless

1. The Deployment controller treats all pods as identical.

2. There are no requirements on pod startup order.

3. It does not matter which node a pod runs on.

4. Pods can be scaled up and down freely.

  • Stateful

1. Instances differ from each other; each has its own identity and metadata, for example etcd and ZooKeeper.

(ZooKeeper is the one most often used with microservices.)

2. Instances are not interchangeable with one another, and the application may depend on external storage.

3.2 Regular Services vs. headless Services

A regular Service is an access policy for a group of pods: it provides a cluster IP for communication inside the cluster, plus load balancing and service discovery.

A headless Service needs no cluster IP and binds directly to the IPs of the individual pods.

Headless Services are usually used with StatefulSet for stateful deployments.

3.3 First create a regular Service for the Deployment pods created earlier

[root@master1 gsy]# vim nginx-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

The cluster IP provides the address for communication inside the cluster.

[root@master1 gsy]# kubectl create -f nginx-service.yaml 
service/nginx-service created
[root@master1 gsy]# kubectl get service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP        14d
nginx-service   NodePort    10.0.0.31    <none>        80:37384/TCP   11s

3.3.1 Test intra-cluster communication from the nodes

[root@node01 ~]# curl 10.0.0.31
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

One node could not reach the Service, so restart flanneld and docker on it:

[root@node02 ~]#  curl 10.0.0.31
^C
[root@node02 ~]# systemctl restart flanneld.service
[root@node02 ~]# systemctl restart docker
[root@node02 system]#  curl 10.0.0.31
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Then the other node, node01, could not reach it anymore either:

[root@node01 ~]# curl 10.0.0.31
^C
[root@node01 ~]# 

3.4 Create a headless Service

Pods created by a StatefulSet controller get dynamic IP addresses; because those addresses keep changing, relying on them directly would break the service.

So access is usually bound to DNS names instead, which is why a DNS service has to be installed.

Then the pod resources are created through the StatefulSet.

A headless Service needs no cluster IP; it serves traffic by binding directly to the IPs of the specific pods.

[root@master1 gsy]# vim headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None	# this is the key to a headless Service
  selector:
    app: nginx
[root@master1 gsy]# kubectl apply -f headless.yaml 
service/nginx-headless created
[root@master1 gsy]# kubectl get service
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        14d
nginx-headless   ClusterIP   None         <none>        80/TCP         4s
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   14m

3.5 Check the Services: one now appears without a cluster IP

[root@master1 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d
nginx-headless   ClusterIP   None         <none>        80/TCP         45h
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   46h

3.6 Set up the DNS service

Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client pod's DNS search list includes the pod's own Namespace and the cluster's default domain.

Suppose a Service named foo is defined in the Namespace bar of a Kubernetes cluster. A pod running in Namespace bar can find the Service simply with a DNS query for foo. A pod running in Namespace quux can find it with a DNS query for foo.bar.

Since Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. However, some Kubernetes installer tools may still install kube-dns by default; check your installer's documentation to see which DNS server it installs.

CoreDNS is deployed as a Kubernetes Service and exposed through a static IP. Both the CoreDNS and kube-dns Services are named kube-dns in the metadata.name field. This keeps interoperability with workloads that rely on the legacy kube-dns service name to resolve in-cluster addresses, and it abstracts away which DNS provider actually runs behind that common endpoint.

The kubelet passes the DNS server to each container with the --cluster-dns=<dns-service-ip> flag.

If a pod's dnsPolicy is set to "default", it inherits the name-resolution configuration of the node it runs on; the pod's DNS resolution should then behave the same as on that node.

If you do not want that, or you want a different DNS configuration for pods, you can use the kubelet's --resolv-conf flag. Set it to "" to stop pods from inheriting DNS, or set it to a valid file path to use that file instead of /etc/resolv.conf for DNS inheritance.
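
For reference, DNS can also be customized per pod directly in the pod spec. The snippet below is only a sketch (the nameserver value reuses the cluster DNS IP configured later in this article, the search domain and options are illustrative, and the dnsConfig field requires a Kubernetes version that supports it):

apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: busybox:1.28.4
    command: ["sleep", "3600"]
  dnsPolicy: "None"              # ignore both the cluster DNS and the node's resolv.conf
  dnsConfig:
    nameservers:
    - 10.0.0.2                   # assumed: the kube-dns Service IP used below
    searches:
    - default.svc.cluster.local
    options:
    - name: ndots
      value: "5"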

3.7 Create the DNS service

[root@master1 ~]# cp /abc/k8s/coredns.yaml .
[root@master1 ~]# cat coredns.yaml

apiVersion: v1
kind: ServiceAccount	# a system account that provides identity for processes in pods and for external users
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole	# a ClusterRole grants access to resources across the whole cluster
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding	# a binding ties subjects (here the coredns ServiceAccount)
				# to a role with specific permissions, so the subjects get the permissions that role defines
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
#CoreDNS is a modular, pluggable DNS server; each plugin adds new functionality to CoreDNS.
#It is configured by maintaining the Corefile, the CoreDNS configuration file.
#Cluster administrators can modify the ConfigMap holding the CoreDNS Corefile to change how service discovery works.
#In Kubernetes, CoreDNS is installed with the default Corefile configuration shown below.
#The Corefile configuration includes the following CoreDNS plugins:
#errors: errors are logged to stdout.
#health: CoreDNS health is reported at http://localhost:8080/health.
#kubernetes: CoreDNS answers DNS queries based on the IPs of Kubernetes Services and pods.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure	# backward compatible with kube-dns
            # the "pods verified" option returns an A record only when a pod with a matching IP exists in the same namespace
            # if you do not use pod records, the "pods disabled" option can be used
            upstream	# resolves services that point to external hosts
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #CoreDNS metrics are exposed in Prometheus format at http://localhost:9153/metrics
        proxy . /etc/resolv.conf
        #any query outside the Kubernetes cluster domain is forwarded to the predefined resolver (/etc/resolv.conf)
        cache 30
        #enable frontend caching
        loop
        #detect simple forwarding loops and abort the CoreDNS process if one is found
        reload
        #allow the changed Corefile to be reloaded automatically; after editing the ConfigMap, wait about two minutes for the change to take effect
        loadbalance
        #round-robin DNS load balancing: randomizes the order of A, AAAA, and MX records in the answer
    }
#Modify this ConfigMap to change the default CoreDNS behavior.
#CoreDNS can configure stub domains and upstream nameservers using the proxy plugin.
#See the official documentation for configuring stub domains and upstream nameservers with CoreDNS.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@master1 ~]# ls -l 
total 64
-rw-r--r--. 1 root root  191 May 13 14:45 1nodeselector.yaml
-rw-------. 1 root root 1935 Apr 30 08:53 anaconda-ks.cfg
-rwxr-xr-x. 1 root root 4290 May 17 08:49 coredns.yaml

3.8 Check the state after creation

[root@master1 ~]# kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master1 ~]# kubectl get all -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-56684f94d6-qfkf7                1/1     Running   0          24s
pod/kubernetes-dashboard-7dffbccd68-l4tcd   1/1     Running   2          8d

NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   24s
service/kubernetes-dashboard   NodePort    10.0.0.237   <none>        443:30001/TCP   8d

NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                1         1         1            1           24s
deployment.apps/kubernetes-dashboard   1         1         1            1           8d

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-56684f94d6                1         1         1       24s
replicaset.apps/kubernetes-dashboard-65f974f565   0         0         0       8d
replicaset.apps/kubernetes-dashboard-7dffbccd68   1         1         1       8d

3.9 Create a test pod

[root@master1 ~]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master1 ~]# kubectl create -f pod3.yaml 

3.10 Verify DNS resolution

[root@master1 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d
nginx-headless   ClusterIP   None         <none>        80/TCP         46h
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   46h
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
dns-test                            1/1     Running   0          47s

3.11 Resolve the kubernetes and nginx-service names

[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2

nslookup: can't resolve 'kubernetes'
/ # exit

3.12 Restart flanneld and docker on the nodes

[root@node01 ~]# systemctl restart flanneld
[root@node01 ~]# systemctl restart docker

Now it works:

[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-service
Address 1: 10.0.0.31 nginx-service.default.svc.cluster.local
/ # nslookup nginx-deployment-78cdb5b557-lt52n.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'nginx-deployment-78cdb5b557-lt52n.nginx'
^C[root@master1 ~]# kubectl get svc -o wide
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        16d   <none>
nginx-headless   ClusterIP   None         <none>        80/TCP         47h   app=nginx
nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   47h   app=nginx

DNS can now resolve the resource names. (The last lookup above failed because pods behind a regular Deployment do not get per-pod DNS records of the form <pod>.<service>; only StatefulSet pods behind a headless Service do, as shown in 3.15.)
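
The short names resolve because the pod's DNS search list includes its own namespace; the fully qualified form works as well and should return the same 10.0.0.31 ClusterIP shown above:

/ # nslookup nginx-service.default.svc.cluster.local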

3.13 Create the StatefulSet resources

The headless Service has no clusterIP; the pod IPs behind it are dynamic.

[root@master1 ~]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1  
kind: StatefulSet  
metadata:
  name: nginx-statefulset  
  namespace: default
spec:
  serviceName: nginx  
  replicas: 3  
  selector:
    matchLabels:  
       app: nginx
  template:  
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:latest  
        ports:
        - containerPort: 80  

3.14 Clean up the previous environment before creating it

[root@master1 ~]# rm -rf coredns.yaml 
[root@master1 ~]# kubectl delete -f .
[root@master1 ~]# kubectl apply -f statefulset.yaml 
service/nginx created
statefulset.apps/nginx-statefulset created
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running             2          2d2h
nginx-deployment-78cdb5b557-d6wlp   1/1     Running             1          2d2h
nginx-deployment-78cdb5b557-lt52n   1/1     Running             1          2d2h
nginx-statefulset-0                 0/1     ContainerCreating   0          12s
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-78cdb5b557-c2cwv   1/1     Running             2          2d2h
nginx-deployment-78cdb5b557-d6wlp   1/1     Running             1          2d2h
nginx-deployment-78cdb5b557-lt52n   1/1     Running             1          2d2h
nginx-statefulset-0                 1/1     Running             0          24s
nginx-statefulset-1                 0/1     ContainerCreating   0          11s
nginx-statefulset-1   1/1   Running   0     15s
nginx-statefulset-2   0/1   Pending   0     0s
nginx-statefulset-2   0/1   Pending   0     0s
nginx-statefulset-2   0/1   ContainerCreating   0     0s
nginx-statefulset-2   1/1   Running   0     4s
^C[root@master1 ~]# kubectl get svc,deploy
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        17d
service/nginx            ClusterIP   None         <none>        80/TCP         63s
service/nginx-headless   ClusterIP   None         <none>        80/TCP         2d
service/nginx-service    NodePort    10.0.0.31    <none>        80:37384/TCP   2d

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deployment   3         3         3            3           2d2h
[root@master1 ~]# kubectl delete svc nginx-headless
service "nginx-headless" deleted
[root@master1 ~]# kubectl delete svc nginx-service
service "nginx-service" deleted

  • Pods are created in order

For a StatefulSet with N replicas, pods are deployed in ordinal order {0 … N-1}. Check the output with kubectl get in the first terminal.

Note: the nginx-statefulset-1 pod is not started until the nginx-statefulset-0 pod is Running and Ready.

Pods in a StatefulSet have a unique ordinal index and a stable network identity.

That identity is based on the unique ordinal index the StatefulSet controller assigns to each pod. Pod names take the form <statefulset name>-<ordinal index>.

Each pod also gets a stable hostname based on its ordinal index.
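
A quick way to confirm those stable hostnames (adapted from the official StatefulSet tutorial; assumes the three replicas created above are Running):

for i in 0 1 2; do kubectl exec nginx-statefulset-$i -- sh -c 'hostname'; done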

3.15 Create the DNS test container

[root@master1 ~]# kubectl apply -f pod3.yaml 
pod/dns-test created
[root@master1 ~]# kubectl exec -it dns-test sh
/ # nslookup nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 172.17.88.5 nginx-statefulset-2.nginx.default.svc.cluster.local
Address 2: 172.17.88.4 nginx-statefulset-0.nginx.default.svc.cluster.local
Address 3: 172.17.57.2 172-17-57-2.nginx.default.svc.cluster.local
Address 4: 172.17.57.5 nginx-statefulset-1.nginx.default.svc.cluster.local
Address 5: 172.17.57.4 172-17-57-4.nginx.default.svc.cluster.local
Address 6: 172.17.88.3 172-17-88-3.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-0
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

^C
/ # nslookup nginx-statefulset-0.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-0.nginx
Address 1: 172.17.88.4 nginx-statefulset-0.nginx.default.svc.cluster.local

Note: if you delete a StatefulSet pod by hand, Kubernetes recreates it to match the controller's replica count; resolving the name again then shows that the pod's IP address has changed.

3.16 Check the network

[root@master1 ~]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
pod/dns-test                            1/1     Running   0          5m49s   172.17.57.6   192.168.247.143   <none>
pod/nginx-deployment-78cdb5b557-c2cwv   1/1     Running   2          2d2h    172.17.88.3   192.168.247.144   <none>
pod/nginx-deployment-78cdb5b557-d6wlp   1/1     Running   1          2d2h    172.17.57.2   192.168.247.143   <none>
pod/nginx-deployment-78cdb5b557-lt52n   1/1     Running   1          2d2h    172.17.57.4   192.168.247.143   <none>
pod/nginx-statefulset-0                 1/1     Running   0          8m50s   172.17.88.4   192.168.247.144   <none>
pod/nginx-statefulset-1                 1/1     Running   0          8m37s   172.17.57.5   192.168.247.143   <none>
pod/nginx-statefulset-2                 1/1     Running   0          8m22s   172.17.88.5   192.168.247.144   <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   17d     <none>
service/nginx        ClusterIP   None         <none>        80/TCP    8m50s   app=nginx

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
deployment.apps/nginx-deployment   3         3         3            3           2d2h   nginx1       nginx:1.15.4   app=nginx

NAME                                          DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       2d2h   nginx1       nginx:1.15.4   app=nginx,pod-template-hash=78cdb5b557

NAME                                 DESIRED   CURRENT   AGE     CONTAINERS   IMAGES
statefulset.apps/nginx-statefulset   3         3         8m50s   nginx        nginx:latest

3.17 Summary

One difference between StatefulSet and Deployment: a StatefulSet also gives each pod an identity.

The three elements of that identity:

DNS name: nginx-statefulset-0.nginx

Hostname: nginx-statefulset-0

Storage (PVC)

The ordinal index is what makes each identity unique.

StatefulSet suits applications where pod IPs keep changing.

Ordered pod termination

The controller deletes one pod at a time, in reverse ordinal order, waiting for each pod to shut down completely before deleting the next one.

Updating a StatefulSet

Pods in a StatefulSet are updated in reverse ordinal order. The StatefulSet controller terminates each pod and waits for it to become Running and Ready before updating the next. Although the controller will not update the next pod until its ordinal successor is Running and Ready, it will still recreate any pod that fails during the update, using that pod's current version: pods that have already received the update are restored to the updated version, while pods that have not yet received it are restored to the previous version. In this way the controller tries to keep the application healthy and the update consistent in the presence of intermittent failures.
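
To watch the reverse-order termination described above, scale the StatefulSet down and watch the highest-ordinal pods disappear first (a sketch; scale back up afterwards if the replicas are still needed):

kubectl scale statefulset nginx-statefulset --replicas=1
kubectl get pods -w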

4: DaemonSet

4.1 Overview

Runs one pod on every node; any newly joined node automatically gets a pod as well.

Typical use cases: agents and monitoring, for example Logstash in an ELK stack or Filebeat in an EFK stack.

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

4.2 Demo

volumeMounts define where volumes are mounted inside a pod's containers (a sketch adding one to this DaemonSet follows after the YAML below).

[root@master1 ~]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-daemonset
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
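
The volumeMounts mentioned above do not appear in this minimal example. As a hedged sketch only (the paths and volume name are illustrative, not part of the original lab), a typical log-collecting DaemonSet would mount a directory from the node with hostPath, roughly like this:

      containers:
      - name: nginx
        image: nginx:1.15.4
        volumeMounts:
        - name: varlog
          mountPath: /host/var/log      # the node's logs become visible inside the container
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                # directory on the node to expose to the pod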

4.2.1 Create the DaemonSet resource

[root@master1 ~]# kubectl apply -f daemonset.yaml 
daemonset.apps/nginx-daemonset created
^C[root@master1 ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE              NOMINATED NODE
dns-test                            1/1     Running   0          9m9s   172.17.57.6   192.168.247.143   <none>
nginx-daemonset-8mdhj               1/1     Running   0          41s    172.17.88.6   192.168.247.144   <none>
nginx-daemonset-ggdv2               1/1     Running   0          41s    172.17.57.7   192.168.247.143   <none>
nginx-deployment-78cdb5b557-c2cwv   1/1     Running   2          2d2h   172.17.88.3   192.168.247.144   <none>
nginx-deployment-78cdb5b557-d6wlp   1/1     Running   1          2d2h   172.17.57.2   192.168.247.143   <none>
nginx-deployment-78cdb5b557-lt52n   1/1     Running   1          2d2h   172.17.57.4   192.168.247.143   <none>
nginx-statefulset-0                 1/1     Running   0          12m    172.17.88.4   192.168.247.144   <none>
nginx-statefulset-1                 1/1     Running   0          11m    172.17.57.5   192.168.247.143   <none>
nginx-statefulset-2                 1/1     Running   0          11m    172.17.88.5   192.168.247.144   <none>

The DaemonSet creates one pod on every node.

5: Job

Jobs come in two kinds: ordinary one-off tasks (Job) and scheduled tasks (CronJob).

5.1 Job: run to completion once

Typical use cases: offline data processing, video transcoding, and similar batch workloads.

Also suited to big-data analysis and computation services.

Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

5.1.1 Demo

The retry limit defaults to 6 and is lowered to 4 here. With restartPolicy: Never, a failed run makes the Job create a replacement pod, so the number of retries has to be bounded.

backoffLimit: 4 caps the retries at four.

[root@master1 ~]#  vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

5.1.2 Create the Job

[root@master1 ~]# kubectl apply -f job.yaml 
job.batch/pi created
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
pi-9jf2w                            0/1     ContainerCreating   0          4s
pi-9jf2w   1/1   Running   0     62s
pi-9jf2w   0/1   Completed   0     76s
^C[root@master1 ~]# 
^C[root@master1 ~]# kubectl logs pi-9jf2w
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901

5.1.3 View the Job resource

[root@master1 ~]# kubectl get job
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           76s        2m57s
#kubectl get all would also show the Job resource at this point

5.1.4 Clean up the Job resource

[root@master1 ~]# kubectl delete -f job.yaml 
job.batch "pi" deleted

5.2 CronJob: periodic tasks

Similar to crontab on Linux.

Typical use cases: scheduled notifications and scheduled backups.

https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/

5.2.1 Demo

Create a task that prints hello every minute.

[root@master1 ~]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

5.2.2 Create and check the CronJob resource

[root@master1 ~]# kubectl get cronjob
No resources found.
[root@master1 ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created
[root@master1 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        <none>          2s

5.2.3 Watch the pod status with -w

[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          27m
hello-1589686080-qtc4c              0/1     ContainerCreating   0          3s
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          28m
hello-1589686080-qtc4c              0/1     ContainerCreating   0          7s
hello-1589686080-qtc4c   1/1   Running   0     11s
hello-1589686080-qtc4c   0/1   Completed   0     12s
hello-1589686140-tj66f   0/1   Pending   0     0s
hello-1589686140-tj66f   0/1   ContainerCreating   0     0s
hello-1589686140-tj66f   0/1   Completed   0     9s
hello-1589686200-phc6t   0/1   Pending   0     0s
hello-1589686200-phc6t   0/1   ContainerCreating   0     0s
hello-1589686200-phc6t   0/1   Completed   0     8s
hello-1589686260-dg6gp   0/1   Pending   0     0s
hello-1589686260-dg6gp   0/1   ContainerCreating   0     0s
hello-1589686260-dg6gp   0/1   Completed   0     13s
hello-1589686080-qtc4c   0/1   Terminating   0     3m21s
hello-1589686320-n9r2f   0/1   Pending   0     0s
hello-1589686320-n9r2f   0/1   ContainerCreating   0     0s

^C[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
dns-test                            1/1     Running             0          32m
hello-1589686140-tj66f              0/1     Completed           0          3m16s
hello-1589686200-phc6t              0/1     Completed           0          2m15s
hello-1589686260-dg6gp              0/1     Completed           0          75s
hello-1589686320-n9r2f              0/1     ContainerCreating   0          15s

5.2.4 Check the output in the pod logs

[root@master1 ~]# kubectl logs hello-1589686380-flrtj
Sun May 17 03:33:24 UTC 2020
Hello from the Kubernetes cluster
[root@master1 ~]# kubectl logs hello-1589686440-ngj59 
Sun May 17 03:34:19 UTC 2020
Hello from the Kubernetes cluster
[root@master1 ~]# kubectl delete -f cronjob.yaml 
cronjob.batch "hello" deleted
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
dns-test                            1/1     Running   0          35m

Note: be careful with CronJobs; delete them once you are done with them, otherwise they keep consuming resources. If one has to stay, at least cap its history as sketched below.
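
These fields go directly under the CronJob spec and limit how many finished pods it keeps around (a sketch; the values are illustrative):

spec:
  successfulJobsHistoryLimit: 3    # keep at most 3 completed jobs
  failedJobsHistoryLimit: 1        # keep at most 1 failed job
  suspend: false                   # set to true to pause scheduling without deleting the CronJob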
