k8s, Let's Dig In! Parsing the Five Kubernetes Controller Types

1: The Five Controllers in k8s

1.1: The k8s controller types

  • Kubernetes ships with many built-in controllers. Each one works like a state machine that drives Pods toward a desired state and governs their behavior

1. Deployment: suited to deploying stateless services

2. StatefulSet: suited to deploying stateful services

3. DaemonSet: deployed once, a Pod runs on every node. Typical use cases include:

  • running a cluster storage daemon on every Node, e.g. glusterd or ceph
  • running a log collection daemon on every Node, e.g. fluentd or logstash
  • running a monitoring daemon on every Node, e.g. Prometheus Node Exporter

4. Job: runs a task once, to completion

5. CronJob: runs tasks periodically, on a schedule

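  • As a quick cross-check, these workload kinds can be listed straight from the API. A minimal sketch (illustrative commands only; the exact output depends on your cluster version):

    [root@master test]# kubectl api-resources --api-group=apps	'//Deployment, StatefulSet, DaemonSet and ReplicaSet live in the apps API group'
    [root@master test]# kubectl api-resources --api-group=batch	'//Job and CronJob live in the batch API group'
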
  • Controllers are also called workloads. Routine operations on an application, such as scaling and upgrading, are carried out on the Pods through their controller, as sketched below


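  • For example, scaling is a single declarative change made on the controller, and the controller then creates or removes Pods to match. An illustrative sketch (commands only, not captured output; it assumes the nginx-deployment defined in section 1.2):

    [root@master test]# kubectl scale deployment nginx-deployment --replicas=5	'//raise the desired replica count'
    [root@master test]# kubectl get pod	'//the Deployment and its ReplicaSet add Pods until 5 are Running'
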
1.2: The Deployment Controller

  • Suited to deploying stateless application services. A Deployment manages Pods and ReplicaSets and provides rollout, replica management, rolling updates, and rollback. It also supports declarative updates, for example updating only the container image
1.2.1: Testing the Deployment controller
  • 1. Write the yaml file and create the nginx Pod resources

    [root@master test]# vim nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3	'//set the replica count to 3'
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx1
            image: nginx:1.15.4
            ports:
            - containerPort: 80
    [root@master test]# kubectl create -f nginx-deployment.yaml 	'//create the pods'
    deployment.apps/nginx-deployment created
    [root@master test]# kubectl get pod -w
    NAME                                READY   STATUS    RESTARTS   AGE
    nginx-deployment-78cdb5b557-7tr9h   1/1     Running   0          13s
    nginx-deployment-78cdb5b557-kbt7m   1/1     Running   0          13s
    nginx-deployment-78cdb5b557-knd7n   1/1     Running   0          13s
    ^C[root@master test]# kubectl get all	'//view every type of resource'
    NAME                                    READY   STATUS    RESTARTS   AGE
    pod/nginx-deployment-78cdb5b557-7tr9h   1/1     Running   0          44s
    pod/nginx-deployment-78cdb5b557-kbt7m   1/1     Running   0          44s
    pod/nginx-deployment-78cdb5b557-knd7n   1/1     Running   0          44s
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   11d
    
    NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx-deployment   3         3         3            3           44s
    
    NAME                                          DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       44s
    
    
  • 2. Inspect the controller's parameters, using either describe or edit

    [root@master test]# kubectl describe deploy nginx-deployment
    '//or use edit'
    [root@master test]# kubectl edit deploy nginx-deployment
    '//both commands show more detailed information about the resource, including names, spec, resources, and events'
        ...output omitted
      strategy:
        rollingUpdate:	'//this block defines the rolling update strategy'
          maxSurge: 25%	'//a percentage of the desired pod count: during an update the total number of pods may rise to at most 125% of the desired count'
          maxUnavailable: 25%	'//a percentage of the desired pod count: at most 25% of pods may be unavailable, so available pods never drop below 75%'
        type: RollingUpdate
        ...output omitted
    
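  • To make those percentages concrete: with replicas: 3, maxSurge rounds up and maxUnavailable rounds down, so the strategy above resolves to the absolute values sketched here:

      strategy:
        rollingUpdate:
          maxSurge: 25%	'//3 x 25% rounded up = 1, so at most 3 + 1 = 4 Pods exist during an update'
          maxUnavailable: 25%	'//3 x 25% rounded down = 0, so all 3 desired Pods must remain available'
        type: RollingUpdate
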
  • 3. View the controller's revision history, which rolling updates and rollbacks rely on (a sketch of triggering an update follows the output below)

    [root@master test]# kubectl rollout history deploy/nginx-deployment
    deployment.extensions/nginx-deployment 
    REVISION  CHANGE-CAUSE
    1         <none>	'//only one revision, so no rolling update has happened yet; after an update there would be two'
    
    
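  • To watch the history grow, trigger a rolling update and then roll it back. An illustrative sketch (commands only, not output captured from this lab; nginx1 is the container name from nginx-deployment.yaml):

    [root@master test]# kubectl set image deployment/nginx-deployment nginx1=nginx:1.16.0	'//declaratively change only the image; a rolling update starts'
    [root@master test]# kubectl rollout status deploy/nginx-deployment	'//wait for the rollout to finish'
    [root@master test]# kubectl rollout history deploy/nginx-deployment	'//now shows revisions 1 and 2'
    [root@master test]# kubectl rollout undo deploy/nginx-deployment	'//roll back to the previous revision'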

1.3: The StatefulSet Controller

1. Suited to deploying stateful applications

2. Gives each Pod an independent lifecycle while preserving startup order and uniqueness

3. Stable, unique network identifiers and persistent storage (for example, an etcd configuration becomes unusable if node addresses change)

4. Ordered, graceful deployment, scaling, deletion, and termination (for example, in a MySQL primary/replica setup the primary must start before the replicas)

5. Ordered rolling updates

6. Typical use case: databases

  • Characteristics of a stateless service:

    1) A Deployment treats all of its Pods as identical

    2) There are no ordering requirements

    3) It does not matter which node a Pod runs on

    4) It can be scaled up and down freely

  • Characteristics of a stateful service:

    1) Instances differ from one another; each has its own identity and metadata, e.g. etcd, ZooKeeper

    2) Instances are not interchangeable, and such applications often rely on external storage.

  • The difference between a regular Service and a headless Service

    Service: an access policy for a group of Pods. It provides a cluster IP for communication inside the cluster, along with load balancing and service discovery.

    Headless Service: no cluster IP is allocated; DNS resolves directly to the IPs of the individual Pods. Headless Services are commonly used with StatefulSets for stateful deployments

1.3.1: Creating the headless Service and DNS resources
  • Because the Pod IPs behind a stateful deployment are dynamic, a headless Service must be paired with a DNS service

  • 1. Write the yaml file and create the Service resource

    [root@master test]# vim nginx-headless.yaml
    apiVersion: v1
    kind: Service	'//create a Service resource'
    metadata:
      name: nginx-headless
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None	'//do not allocate a clusterIP'
      selector:
        app: nginx
    [root@master test]# kubectl create -f nginx-headless.yaml 
    service/nginx-headless created
    [root@master test]# kubectl get svc		'//check the service resources'
    NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   11d
    nginx-headless   ClusterIP   None         <none>        80/TCP    19s	'//the headless service just created has no clusterIP'
    
  • 2. Configure the DNS service, created from a yaml file

    [root@master test]# vim coredns.yaml 
    # Warning: This is a file generated from the base underscore template file: coredns.yaml.base
    
    apiVersion: v1
    kind: ServiceAccount	'//a service account provides an identity for the processes running in the pod'
    metadata:
      name: coredns
      namespace: kube-system	'//target namespace'
      labels:
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole	'//a cluster role granting the access CoreDNS needs'
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding	'//bind the cluster role to the coredns service account'
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap	'//holds the configuration that controls how service discovery behaves'
    metadata:
      name: coredns
      namespace: kube-system
      labels:
          addonmanager.kubernetes.io/mode: EnsureExists
    data:
      Corefile: |	'//the CoreDNS configuration file'
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure	
                upstream
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            proxy . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      # 1. In order to make Addon Manager do not reconcile this replicas parameter.
      # 2. Default is 1.
      # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
        spec:
          serviceAccountName: coredns
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          containers:
          - name: coredns
            image: coredns/coredns:1.2.2
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.0.0.2 
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
    [root@master test]# kubectl create -f coredns.yaml 	
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.extensions/coredns created
    service/kube-dns created
    [root@master test]# kubectl get pod,svc -n kube-system	'//check the pod and svc resources in the kube-system namespace'
    NAME                                        READY   STATUS    RESTARTS   AGE
    pod/coredns-56684f94d6-8p44x                1/1     Running   0          30s
    pod/kubernetes-dashboard-7dffbccd68-58qms   1/1     Running   2          11d
    
    NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
    service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   31s
    service/kubernetes-dashboard   NodePort    10.0.0.139   <none>        443:30005/TCP   11d
    
    
  • 3. Create a test Pod and verify DNS resolution

    [root@master test]#  kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh
    If you don't see a command prompt, try pressing enter.
    / # nslookup kubernetes
    Server:    10.0.0.2
    Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes
    Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local	'//resolution succeeded'
    / # exit
    pod "sh" deleted
    
    
1.3.2: Creating the StatefulSet resources
  • 1. Write the yaml files (the resources themselves are created in the next step)

    [root@master test]# vim statefulset-test.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None	'//make this a headless service'
      selector:
        app: nginx
    ---
    apiVersion: apps/v1beta1  
    kind: StatefulSet  
    metadata:
      name: nginx-statefulset  
      namespace: default
    spec:
      serviceName: nginx  
      replicas: 3  '//set the replica count'
      selector:
        matchLabels:  
           app: nginx
      template:  
        metadata:
          labels:
            app: nginx  
        spec:
          containers:
          - name: nginx
            image: nginx:latest  
            ports:
            - containerPort: 80  
    [root@master test]# vim pod-dns-test.yaml 	'//create a pod resource used to test DNS'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-test
    spec:
      containers:
      - name: busybox
        image: busybox:1.28.4
        args:
        - /bin/sh
        - -c
        - sleep 36000
      restartPolicy: Never
    
    [root@master test]# kubectl delete -f .	'//first delete all of the previously created resources'
    
    
  • 2. Create the resources and test

    [root@master test]# kubectl create -f coredns.yaml 
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.extensions/coredns created
    service/kube-dns created
    [root@master test]# kubectl create -f statefulset-test.yaml 
    service/nginx created
    statefulset.apps/nginx-statefulset created
    [root@master test]# kubectl create -f pod-dns-test.yaml 
    pod/dns-test created
    [root@master test]# kubectl get pod,svc
    NAME                      READY   STATUS    RESTARTS   AGE
    pod/dns-test              1/1     Running   0          37s
    pod/nginx-statefulset-0   1/1     Running   0          56s
    pod/nginx-statefulset-1   1/1     Running   0          39s
    pod/nginx-statefulset-2   1/1     Running   0          21s
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   12d
    service/nginx        ClusterIP   None         <none>        80/TCP    56s
    [root@master test]# kubectl exec -it dns-test sh	'//exec into the pod to run the tests'
    / # nslookup pod/nginx-statefulset-0
    / # nslookup kubernetes
    Server:    10.0.0.2
    Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes
    Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
    / # nslookup nginx-statefulset-0.nginx
    Server:    10.0.0.2
    Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
    
    Name:      nginx-statefulset-0.nginx
    Address 1: 172.17.78.2 nginx-statefulset-0.nginx.default.svc.cluster.local
    / # exit
    
    
  • Compared with a Deployment, a StatefulSet's Pods have identity! (an ordinal index distinguishes each one uniquely)

    The three elements of that identity:

    1. A DNS name: nginx-statefulset-0.nginx

    2. A hostname: nginx-statefulset-0

    3. Storage (a PVC per Pod, sketched below)

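    The statefulset-test.yaml above does not declare any storage; to give each Pod its own PVC, a StatefulSet adds a volumeClaimTemplates section. A hedged sketch only (the storageClassName "managed-nfs-storage" is a placeholder and must match a StorageClass that actually exists in your cluster):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nginx-statefulset
    spec:
      serviceName: nginx
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - name: www                              # each Pod mounts its own claim
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:                          # one PVC per Pod: www-nginx-statefulset-0, -1, -2
      - metadata:
          name: www
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: managed-nfs-storage      # placeholder; replace with a real StorageClass
          resources:
            requests:
              storage: 1Gi
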
  • Ordered deployment and ordered scaling in a StatefulSet

    Ordered deployment (from 0 up to N-1)

    Ordered scale-down and deletion (from N-1 down to 0)

    Whether rolling out or scaling down, the StatefulSet controller handles one Pod at a time, waiting for the current Pod to become Running and Ready (or to terminate completely) before moving on to the next, as the sketch below shows.

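  • Ordered scaling can be observed directly with kubectl. An illustrative sketch (commands only, not captured output):

    [root@master test]# kubectl scale statefulset nginx-statefulset --replicas=5	'//nginx-statefulset-3 is created first, then -4; each waits for the previous Pod to be Ready'
    [root@master test]# kubectl scale statefulset nginx-statefulset --replicas=3	'//scaling back down removes -4 first, then -3'
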
1.4: The DaemonSet Controller

  • Runs one Pod on every Node

    A newly added Node automatically gets a Pod as well

  • Use cases: monitoring, distributed storage, log collection, and so on

1.4.1: Testing
  • 1. Write the yaml file and create the resource

    [root@master test]# vim daemonset-test.yaml
    apiVersion: apps/v1
    kind: DaemonSet 
    metadata:
      name: nginx-daemonset
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.15.4
            ports:
            - containerPort: 80
    [root@master test]# kubectl create -f daemonset-test.yaml 
    daemonset.apps/nginx-daemonset created
    
    
  • 2. Check where the Pods were scheduled

    [root@master test]# kubectl get pod -o wide
    NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
    dns-test                1/1     Running   0          18m   172.17.14.4   192.168.233.133   <none>
    nginx-daemonset-m8lm5   1/1     Running   0          41s   172.17.78.5   192.168.233.132   <none>
    nginx-daemonset-sswfq   1/1     Running   0          41s   172.17.14.5   192.168.233.133   <none>
    nginx-statefulset-0     1/1     Running   0          18m   172.17.78.2   192.168.233.132   <none>
    nginx-statefulset-1     1/1     Running   0          18m   172.17.14.3   192.168.233.133   <none>
    nginx-statefulset-2     1/1     Running   0          18m   172.17.78.4   192.168.233.132   <none>
    '//the daemonset pods have been scheduled onto both nodes'
    
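  • Note that the DaemonSet above only landed on the worker nodes; a master that runs a kubelet is normally tainted and gets skipped. For agents that must run everywhere (monitoring, log collection), a toleration can be added to the Pod template. A hedged sketch of that template spec (it reuses the taint key node-role.kubernetes.io/master already tolerated in coredns.yaml above; your cluster's taints may differ, and it only has an effect if the master actually runs a kubelet):

        spec:
          tolerations:                         # also allow scheduling onto tainted master nodes
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: nginx
            image: nginx:1.15.4
            ports:
            - containerPort: 80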

1.5: The Job Controller

  • Runs a task once to completion, similar to a one-off job in Linux
  • Use cases: offline data processing, video transcoding, and similar workloads
1.5.1: Testing
  • 1. Write the yaml file and create the resource

    [root@master test]# vim job-test.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]	'//命令是計算π的值'
          restartPolicy: Never
      backoffLimit: 4	'//the retry limit defaults to 6, reduced here to 4; with restartPolicy: Never a failed pod is replaced by a new one, so the number of retries should be bounded'
    [root@master test]# kubectl create -f job-test.yaml 
    job.batch/pi created
    
    
  • 2. Watch the job's pod resource

    [root@master test]# kubectl get pod -w
    NAME                    READY   STATUS              RESTARTS   AGE
    dns-test                1/1     Running             0          23m
    nginx-daemonset-m8lm5   1/1     Running             0          5m33s
    nginx-daemonset-sswfq   1/1     Running             0          5m33s
    nginx-statefulset-0     1/1     Running             0          23m
    nginx-statefulset-1     1/1     Running             0          23m
    nginx-statefulset-2     1/1     Running             0          23m
    pi-dhzrg                0/1     ContainerCreating   0          50s
    pi-dhzrg   1/1   Running   0     61s
    pi-dhzrg   0/1   Completed   0     65s	'//once the run succeeds the pod completes and stops'
    ^C[root@master test]# kubectl logs pi-dhzrg	'//the result can be read from the pod logs'
    3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
    
    
  • 3. Check and delete the job resource

    [root@master test]# kubectl get job
    NAME   COMPLETIONS   DURATION   AGE
    pi     1/1           65s        2m49s
    [root@master test]# kubectl delete -f job-test.yaml 
    job.batch "pi" deleted
    [root@master test]# kubectl get job
    No resources found.
    
    
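  • For batch work such as the offline data processing mentioned above, a Job can also run several Pods, sequentially or in parallel, by setting completions and parallelism. A hedged sketch extending the pi example:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi-parallel
    spec:
      completions: 6      # the Job is complete after 6 successful Pod runs
      parallelism: 2      # run at most 2 Pods at the same time
      backoffLimit: 4
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
          restartPolicy: Never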

1.6: The CronJob Controller

  • Periodic tasks, like crontab in Linux.
  • Use cases: notifications, backups, and so on
1.6.1: Testing
  • 1. Write the yaml file and create the resource (a task that prints hello every minute)

    [root@master test]# vim cronjob-test.yaml
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
              restartPolicy: OnFailure
    [root@master test]# kubectl create -f cronjob-test.yaml 	
    cronjob.batch/hello created
    
    
  • 2. Watch the pod resources

    [root@master test]# kubectl get pod -w
    NAME                    READY   STATUS    RESTARTS   AGE
    dns-test                1/1     Running   0          32m
    nginx-daemonset-m8lm5   1/1     Running   0          14m
    nginx-daemonset-sswfq   1/1     Running   0          14m
    nginx-statefulset-0     1/1     Running   0          32m
    nginx-statefulset-1     1/1     Running   0          32m
    nginx-statefulset-2     1/1     Running   0          32m
    hello-1589946540-6wn5h   0/1   Pending   0     0s
    hello-1589946540-6wn5h   0/1   Pending   0     0s
    hello-1589946540-6wn5h   0/1   ContainerCreating   0     0s
    hello-1589946540-6wn5h   0/1   Completed   0     14s
    hello-1589946600-dlt4c   0/1   Pending   0     0s
    hello-1589946600-dlt4c   0/1   Pending   0     0s
    hello-1589946600-dlt4c   0/1   ContainerCreating   0     0s
    hello-1589946600-dlt4c   0/1   Completed   0     16s
    
    
  • 3. Check the log output

    ^C[root@master test]# kubectl logs hello-1589946540-6wn5h
    Wed May 20 03:49:18 UTC 2020
    Hello from the Kubernetes cluster
    [root@master test]# kubectl logs hello-1589946600-dlt4c
    Wed May 20 03:50:22 UTC 2020
    Hello from the Kubernetes cluster
    [root@master test]# kubectl delete -f cronjob-test.yaml 
    cronjob.batch "hello" deleted	'//使用cronjob要慎重,用完之後要刪掉,不然會佔用很多資源'
    
    
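  • Instead of deleting the CronJob by hand, the build-up of finished Jobs can also be capped in the spec. A hedged sketch of the relevant fields (successfulJobsHistoryLimit and failedJobsHistoryLimit limit how many finished Jobs are kept, and concurrencyPolicy: Forbid skips a run while the previous one is still active):

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"
      concurrencyPolicy: Forbid           # do not start a new run while the previous one is still running
      successfulJobsHistoryLimit: 3       # keep only the last 3 successful Jobs
      failedJobsHistoryLimit: 1           # keep only the last failed Job
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
              restartPolicy: OnFailure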

If you have any questions, feel free to discuss them in the comments!
