K8s Resources, Part 1: Pods

Resource manifest format (a minimal sketch follows this list):
  • apiVersion: the API group and version of the resource.
  • kind: the resource type.
  • metadata: metadata about the resource, including its name, labels, the namespace it belongs to, and so on.
  • spec: the desired state of the resource.
  • status: the actual state of the resource; users cannot set it, k8s maintains it on its own.
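A minimal sketch tying these fields together (the pod name and image here are placeholders, not taken from the examples below):

apiVersion: v1                  # API group/version; core-group resources use "v1"
kind: Pod                       # resource type
metadata:
  name: example-pod             # hypothetical name
  namespace: default
spec:                           # desired state
  containers:
  - name: main
    image: nginx
# status is filled in by k8s itself and is never written by the user.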
List all resource types supported by the cluster:
[root@k8smaster data]# kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
events                            ev           events.k8s.io                  true         Event
daemonsets                        ds           extensions                     true         DaemonSet
deployments                       deploy       extensions                     true         Deployment
ingresses                         ing          extensions                     true         Ingress
networkpolicies                   netpol       extensions                     true         NetworkPolicy
podsecuritypolicies               psp          extensions                     false        PodSecurityPolicy
replicasets                       rs           extensions                     true         ReplicaSet
ingresses                         ing          networking.k8s.io              true         Ingress
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment
kubectl operation types:
  • Imperative operations: configure resources through commands (create, delete, ...).
  • Declarative operations: write the configuration manifest to a file and let k8s apply the configuration in that file (kubectl apply -f xxx.yaml).
Official API reference: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/
Creating resources:
apply vs. create:
[root@k8smaster ~]# mkdir /data
[root@k8smaster ~]# cd /data/
[root@k8smaster data]# vim develop-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: develop
[root@k8smaster data]# kubectl create -f develop-ns.yaml 
namespace/develop created
[root@k8smaster data]# kubectl get ns
NAME              STATUS   AGE
default           Active   36d
develop           Active   31s
kube-node-lease   Active   36d
kube-public       Active   36d
kube-system       Active   36d
[root@k8smaster data]# cp develop-ns.yaml sample-ns.yaml
[root@k8smaster data]# vim sample-ns.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: sample
[root@k8smaster data]# kubectl apply -f sample-ns.yaml 
namespace/sample created
[root@k8smaster data]# kubectl get ns
NAME              STATUS   AGE
default           Active   36d
develop           Active   2m7s
kube-node-lease   Active   36d
kube-public       Active   36d
kube-system       Active   36d
sample            Active   4s
[root@k8smaster data]# kubectl create -f develop-ns.yaml 
Error from server (AlreadyExists): error when creating "develop-ns.yaml": namespaces "develop" already exists
[root@k8smaster data]# kubectl apply -f sample-ns.yaml 
namespace/sample unchanged
  • create: cannot re-create a resource that already exists.
  • apply: applies a configuration. If the resource does not exist it is created; wherever the live resource differs from the manifest, the manifest's settings are applied. It can be run repeatedly, and it can also apply all the manifest files in a directory (see the sketch below).
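For example, a single apply can process every manifest in a directory; a sketch, assuming the two manifests above still sit in /data:

[root@k8smaster data]# kubectl apply -f /data/    # applies develop-ns.yaml and sample-ns.yaml in one step; safe to repeat.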
Output:
Dump a pod's configuration as a YAML template:
[root@k8smaster data]# kubectl get pods/nginx-deployment-6f77f65499-8g24d -o yaml --export
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  generateName: nginx-deployment-6f77f65499-
  labels:
    app: nginx-deployment
    pod-template-hash: 6f77f65499
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-deployment-6f77f65499
    uid: c22cc3e8-8fbe-420f-b517-5a472ba1ddef
  selfLink: /api/v1/namespaces/default/pods/nginx-deployment-6f77f65499-8g24d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kk2fq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8snode1
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-kk2fq
    secret:
      defaultMode: 420
      secretName: default-token-kk2fq
status:
  phase: Pending
  qosClass: BestEffort
Running multiple containers in one Pod:
[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml 
pod/pod-demo created
[root@k8smaster data]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d
pod-demo                            2/2     Running   0          83s
Entering a container:
kubectl exec
  • pod-demo: the pod's name.
  • -c: if a pod has multiple containers, use -c with a container name to enter a specific one.
  • -n: the namespace.
  • -it: an interactive terminal.
  • -- /bin/sh: the shell to run.

[root@k8smaster data]# kubectl exec pod-demo -c bbox -n default -it  -- /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr DE:5A:59:84:21:8D  
        inet addr:10.244.1.105  Bcast:0.0.0.0  Mask:255.255.255.0    
        UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
        RX packets:13 errors:0 dropped:0 overruns:0 frame:0
        TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0 
        RX bytes:978 (978.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback  
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:65536  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000 
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 The IP inside the bbox container is the Pod's IP: all containers in a Pod share this single IP.

/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      
/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 sleep 86400
    6 root      0:00 /bin/sh
    13 root      0:00 ps aux

 netstat -tnl shows the container listening on port 80. Port 80 is clearly not opened by anything in bbox; it is the myapp container's port 80. This confirms that all the containers in a Pod share one IP.

/ # wget -O -  -q 127.0.0.1   # requesting the local port returns myapp's page.
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # exit
Viewing a specific container's logs:
kubectl logs:
  • pod-demo: the pod's name.
  • -c: if a pod has multiple containers, use -c with a container name to pick whose logs to show.
  • -n: the namespace.

[root@k8smaster data]# kubectl logs pod-demo -n default -c myapp 
127.0.0.1 - - [04/Dec/2019:06:36:04 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
127.0.0.1 - - [04/Dec/2019:06:36:11 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
127.0.0.1 - - [04/Dec/2019:06:36:17 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
Sharing the node host's network with a Pod's containers:

 Not recommended in production: with many containers, port conflicts become likely.

[root@k8smaster data]# vim host-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  hostNetwork: true
[root@k8smaster data]# kubectl apply -f host-pod.yaml 
pod/mypod created
[root@k8smaster data]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d   10.244.2.94      k8snode2   <none>           <none>
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d   10.244.2.95      k8snode2   <none>           <none>
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d   10.244.1.104     k8snode1   <none>           <none>
mypod                               1/1     Running   0          9s    192.168.43.176   k8snode2   <none>           <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d   10.244.1.103     k8snode1   <none>           <none>
pod-demo                            2/2     Running   0          26m   10.244.1.105     k8snode1   <none>           <none>
[root@k8smaster data]# curl 192.168.43.176:80
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
hostPort:
[root@k8smaster data]# kubectl delete -f host-pod.yaml 
pod "mypod" deleted
[root@k8smaster data]# vim host-pod.yaml    # map the container's port 80 to port 8080 on the node that runs it.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - protocol: TCP
      containerPort: 80
      name: http
      hostPort: 8080
[root@k8smaster data]# kubectl apply -f host-pod.yaml 
pod/mypod created
[root@k8smaster data]# kubectl get pods -o wide   # mypod landed on node2, so we access node2's port 8080 directly.
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d   10.244.2.94    k8snode2   <none>           <none>
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d   10.244.2.95    k8snode2   <none>           <none>
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d   10.244.1.104   k8snode1   <none>           <none>
mypod                               1/1     Running   0          23s   10.244.2.96    k8snode2   <none>           <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d   10.244.1.103   k8snode1   <none>           <none>
pod-demo                            2/2     Running   0          37m   10.244.1.105   k8snode1   <none>           <none>
[root@k8smaster data]# curl 192.168.43.176:8080   
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Ways to make a Pod reachable from outside:
  • containers/hostPort: port mapping; maps a container port onto the node that runs the container.
  • hostNetwork: the Pod shares the node's own network.
  • NodePort: opens a specific port on every node; any traffic sent to that port is forwarded to the corresponding Service. The port range is 30000~32767 (see the sketch below).
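A NodePort is declared on a Service object rather than on the Pod itself; a minimal sketch (the Service name and nodePort value are hypothetical; the selector reuses the app=myapp-deployment label seen in the transcripts above):

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp-deployment        # must match the labels of the target Pods
  ports:
  - port: 80                     # the Service's own port inside the cluster
    targetPort: 80               # the container port traffic is forwarded to
    nodePort: 30080              # opened on every node; must fall within 30000~32767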
Labels:

 Labels are "key/value" data. They can be specified directly when a resource is created, and they can also be added to live objects at any time; label selectors can then be matched against them to pick out resources.

  • An object can carry more than one label, and the same label can be attached to many resources.
  • In practice, a resource can be given labels along several different dimensions (version labels, environment labels, architecture-tier labels, and so on) for flexible grouping, cross-identifying the version, environment, and architecture tier the resource belongs to.
  • A label key normally consists of a key prefix and a key name, in the form "KEY_PREFIX/KEY_NAME"; the prefix is optional.
    • The key name may use at most 63 characters: letters, digits, hyphens, underscores, and dots, and it must begin and end with a letter or digit.
    • The key prefix must be in DNS subdomain format and may not exceed 253 characters. When the prefix is omitted, the key is treated as private to the user; keys added to user resources automatically by k8s system components or third-party components must carry a prefix, and the "kubernetes.io/" prefix is reserved for Kubernetes core components.
    • A label value may not exceed 63 characters; it is either empty, or begins and ends with a letter or digit with only letters, digits, hyphens, underscores, and dots in between.
  • rel: the release channel, for example:
  • stable: stable release.
  • beta: beta release.
  • canary: canary release.
Label selectors:

 Label selectors express query conditions or selection criteria over labels. The Kubernetes API currently supports two kinds of selectors.

  • Equality-based:
    • The operators are =, == and !=. The first two are equivalent and express an "equals" relation; the last expresses "not equal".
  • Set-based:
    • KEY in (VALUE1,VALUE2…)
    • KEY notin (VALUE1,VALUE2…)
    • KEY: every resource that has a label with this key.
    • !KEY: every resource that does not have a label with this key.

 Logic followed when using label selectors (see the command-line sketch below):

  • Multiple selectors specified together are combined with a logical "AND".
  • A label selector with an empty value means every resource object is selected.
  • An empty label selector selects no resources.
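On the command line, the "AND" relationship corresponds to comma-separated expressions passed to -l; a sketch with hypothetical label values:

kubectl get pods -l app=myapp,tier=frontend         # both conditions must hold
kubectl get pods -l 'app in (myapp),tier!=backend'  # set-based and equality-based mixed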

Ways to define a label selector:

Many Kubernetes resource objects must be associated with Pod objects through label selectors, for example the Service, Deployment, and ReplicaSet types. They nest a "selector" field inside their spec field and specify the selector through "matchLabels"; some also support "matchExpressions" for building more complex label-selection rules (see the sketch below).

  • matchLabels: the selector is given directly as key/value pairs.
  • matchExpressions: a list of expression-based selectors, each of the form "{key: KEY_NAME, operator: OPERATOR, values: [VALUE1,VALUE2…]}"; the selectors in the list are combined with a logical "AND".
    • With the In and NotIn operators, values must be a non-empty list of strings; with Exists and DoesNotExist, values must be empty.
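A sketch of both forms inside a Deployment (the names and labels are hypothetical; the matchLabels and matchExpressions requirements are ANDed together):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
    matchExpressions:
    - {key: rel, operator: In, values: [stable, beta]}
  template:
    metadata:
      labels:                    # must satisfy the selector above
        app: demo
        rel: stable
    spec:
      containers:
      - name: demo
        image: ikubernetes/myapp:v1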
Managing labels (create, modify, delete):
[root@k8smaster ~]# kubectl get pods --show-labels    # show the labels on pod resources.
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          45h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          46h   <none>
[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: pod-demo        # define an app label with the value pod-demo.
    rel: stable          # define a rel label with the value stable.
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml  # apply the labels.
pod/pod-demo configured
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          46h   app=pod-demo,rel=stable
[root@k8smaster data]# kubectl label pods pod-demo -n default tier=frontend  # add a label imperatively.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=pod-demo,rel=stable,tier=frontend
[root@k8smaster data]# kubectl label pods pod-demo -n default --overwrite app=myapp   # overwrite (modify) an existing label.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=myapp,rel=stable,tier=frontend
[root@k8smaster data]# kubectl label pods pod-demo -n default rel-  # delete a label by appending a minus sign to its name.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=myapp,tier=frontend
Filtering resources with labels:
# list resources whose app label equals myapp.
[root@k8smaster data]# kubectl get pods -n default -l app=myapp  
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   2          47h

# list resources whose app label does not equal myapp.
[root@k8smaster data]# kubectl get pods -n default -l app!=myapp --show-labels 
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499

# list resources whose app label is nginx-deployment or myapp-deployment.
[root@k8smaster data]# kubectl get pods -n default -l "app in (nginx-deployment,myapp-deployment)" --show-labels  
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499

# -L: print the app label as an extra output column.
[root@k8smaster data]# kubectl get pods -n default -l "app in (nginx-deployment,myapp-deployment)"  -L app
NAME                                READY   STATUS    RESTARTS   AGE   APP
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   myapp-deployment
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   nginx-deployment

# list resources whose app label is neither nginx-deployment nor myapp-deployment.
[root@k8smaster data]# kubectl get pods -n default -l "app notin (nginx-deployment,myapp-deployment)"  -L app
NAME       READY   STATUS    RESTARTS   AGE   APP
mypod      1/1     Running   1          47h   
pod-demo   2/2     Running   2          47h   myapp

# list resources that have an app label.
[root@k8smaster data]# kubectl get pods -n default -l "app"  -L app
NAME                                READY   STATUS    RESTARTS   AGE   APP
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   myapp-deployment
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   nginx-deployment
pod-demo                            2/2     Running   2          47h   myapp

# list resources that do not have an app label.
[root@k8smaster data]# kubectl get pods -n default -l '!app'  -L app
NAME    READY   STATUS    RESTARTS   AGE   APP
mypod   1/1     Running   1          47h   
Resource annotations (annotation):

 Annotations are also "key/value" data, but they cannot be used to tag and select Kubernetes objects; they only attach extra "metadata" to a resource.

 Annotation values are not limited in character count: they can be small or large, structured or unstructured, and they may use characters that are forbidden in labels.

 When a new field is introduced for a resource in an Alpha- or Beta-stage Kubernetes release, it is often provided as an annotation first, to spare users the churn of fields being added and removed; once the field is confirmed, it is promoted into the resource proper and the corresponding annotation is retired.

[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:           
    app: pod-demo        
    rel: stable
  annotations:
    ik8s.io/project: hello
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml 
pod/pod-demo configured
[root@k8smaster data]# kubectl describe pods pod-demo -n default
Name:         pod-demo
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.43.136
Start Time:   Wed, 04 Dec 2019 14:20:01 +0800
Labels:       app=pod-demo
                rel=stable
                tier=frontend
Annotations:  ik8s.io/project: hello
                kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"ik8s.io/project":"hello"},"labels":{"app":"pod-demo","rel":"stable"},"name":"p...
Status:       Running
IP:           10.244.1.106
IPs:          <none>
Pod lifecycle:
Phases (queried as sketched after this list):
  • Pending: the Pod has been accepted by the Kubernetes system, but one or more container images have not been created yet; the Pod can only run after time is spent scheduling it and downloading images.
  • Running: the Pod has been bound to a node and all of its containers have been created; at least one container is running, or is starting or restarting.
  • Succeeded: all containers in the Pod terminated successfully and will not be restarted.
  • Failed: all containers in the Pod have terminated, and at least one container terminated in failure.
  • Unknown: the Pod's state cannot be obtained for some reason, usually because communication with the Pod's host failed.
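The current phase is recorded in the read-only status.phase field and can be read directly; for example, with the pod-demo created earlier:

kubectl get pod pod-demo -o jsonpath='{.status.phase}'    # prints e.g. Running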
Pod startup flow:
  • A user creates a Pod through a command or a YAML file, submitting the request to the API Server.
  • The API Server stores the Pod's configuration and related information in etcd.
  • The API Server then delivers the watch event to the Scheduler.
  • The Scheduler selects a target node through its scheduling algorithm.
  • The API Server stores the chosen node's information in etcd.
  • The API Server notifies the kubelet on the chosen node; the kubelet reads the Pod's configuration and attributes from etcd and hands them to the docker engine.
  • The docker engine starts a container and, once it starts successfully, reports the container's information to the API Server.
  • The API Server saves the Pod's actual state in etcd.
  • From then on, whenever the actual state diverges from the desired state, the components run through this coordination again.
  • Container initialization completes.
  • After the main container starts, the post start hook runs.
  • Once the main container is running, probes are performed (LivenessProbe and ReadinessProbe).
Container probes:
  • LivenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container and the container is subjected to its restart policy; if a restart fails, restarting continues with an interval that grows each time, until a restart succeeds. If the container does not provide a liveness probe, the default state is Success.
  • ReadinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of every Service that matches the Pod. The readiness state before the initial delay defaults to Failure. If the container does not provide a readiness probe, the default state is Success.
The three probe handlers:
  • ExecAction: executes a given command inside the container; the diagnostic is considered successful if the command exits with return code 0.
  • HTTPGetAction: performs an HTTP GET request against the container's IP address on a given port and path; the diagnostic is considered successful if the response status code is at least 200 and below 400.
  • TCPSocketAction: performs a TCP check against the container's IP address on a given port; the diagnostic is considered successful if the port is open (see the sketch below).
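The sections below demonstrate ExecAction and HTTPGetAction; for completeness, a TCPSocketAction probe is declared with tcpSocket. A sketch (the pod name and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
spec:
  containers:
  - name: liveness-tcp-demo
    image: nginx:1.12-alpine
    livenessProbe:
      tcpSocket:
        port: 80               # success = a TCP connection to port 80 can be opened
      initialDelaySeconds: 5
      periodSeconds: 10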
Probe results:
  • Success: the container passed the diagnostic.
  • Failure: the container failed the diagnostic.
  • Unknown: the diagnostic itself failed, and no action is taken.
Liveness checking with LivenessProbe:
ExecAction example:

 We create a check file with touch and verify that it exists with the test command; if it exists, the container is considered alive.
After creating the file, the container sleeps for 30 seconds and then deletes it. At that point test returns 1, the container is judged not alive, and its restart policy is applied. After the restart, the container recreates the original file, test returns 0 again, and the restart is confirmed successful.

[root@k8smaster data]# git clone https://github.com/iKubernetes/Kubernetes_Advanced_Practical.git
Cloning into 'Kubernetes_Advanced_Practical'...
remote: Enumerating objects: 489, done.
remote: Total 489 (delta 0), reused 0 (delta 0), pack-reused 489
Receiving objects: 100% (489/489), 148.75 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (122/122), done.
[root@k8smaster chapter4]# cat liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-exec
  name: liveness-exec
spec:
  containers:
  - name: liveness-demo
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - test
        - -e
        - /tmp/healthy
[root@k8smaster chapter4]# kubectl describe pods liveness-exec
Name:         liveness-exec
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 15:27:55 +0800
Labels:       test=liveness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness-exec"},"name":"liveness-exec","namespace":"default...
Status:       Running
IP:           10.244.2.100
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:  docker://f91cec7c45f5a025e049b2d2e0b1dc15593e9c35f183a9a9aa8e09d40f22df4f
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:24fd20af232ca4ab5efbf1aeae7510252e2b60b15e9a78947467340607cd2ea2
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    State:          Running
      Started:      Fri, 06 Dec 2019 15:28:02 +0800
    Ready:          True
    Restart Count:  0   # the restart count is 0 for now; wait 30 seconds and check again.
    Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  43s   default-scheduler  Successfully assigned default/liveness-exec to k8snode2
  Normal   Pulling    42s   kubelet, k8snode2  Pulling image "busybox"
  Normal   Pulled     36s   kubelet, k8snode2  Successfully pulled image "busybox"
  Normal   Created    36s   kubelet, k8snode2  Created container liveness-demo
  Normal   Started    36s   kubelet, k8snode2  Started container liveness-demo
  Warning  Unhealthy  4s    kubelet, k8snode2  Liveness probe failed:
[root@k8smaster chapter4]# kubectl describe pods liveness-exec
Name:         liveness-exec
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 15:27:55 +0800
Labels:       test=liveness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness-exec"},"name":"liveness-exec","namespace":"default...
Status:       Running
IP:           10.244.2.100
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:  docker://96f7dfd4ef1df503152542fdd1336fd0153773fb7dde3ed32f4388566888d6f0
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:24fd20af232ca4ab5efbf1aeae7510252e2b60b15e9a78947467340607cd2ea2
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    State:          Running
      Started:      Fri, 06 Dec 2019 15:29:25 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 06 Dec 2019 15:28:02 +0800
      Finished:     Fri, 06 Dec 2019 15:29:24 +0800
    Ready:          True
    Restart Count:  1    # the restart count has become 1.
    Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m25s                default-scheduler  Successfully assigned default/liveness-exec to k8snode2
  Normal   Killing    86s                  kubelet, k8snode2  Container liveness-demo failed liveness probe, will be restarted
  Normal   Pulling    56s (x2 over 2m24s)  kubelet, k8snode2  Pulling image "busybox"
  Normal   Pulled     56s (x2 over 2m18s)  kubelet, k8snode2  Successfully pulled image "busybox"
  Normal   Created    56s (x2 over 2m18s)  kubelet, k8snode2  Created container liveness-demo
  Normal   Started    55s (x2 over 2m18s)  kubelet, k8snode2  Started container liveness-demo
  Warning  Unhealthy  6s (x5 over 106s)    kubelet, k8snode2  Liveness probe failed:
[root@k8smaster chapter4]# kubectl delete -f liveness-exec.yaml 
pod "liveness-exec" deleted
HTTPGetAction example:

 With postStart, after the nginx container starts successfully we create a health-check file under the web root, then request that file with an HTTP GET. If the request succeeds, the service is considered healthy; otherwise it is not.

[root@k8smaster chapter4]# cat liveness-http.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness-demo
    image: nginx:1.12-alpine
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - 'echo Healthy > /usr/share/nginx/html/healthz'
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
        scheme: HTTP
      periodSeconds: 2          # probe every 2 seconds.
      failureThreshold: 2       # consecutive failures needed before the probe counts as failed and the container is restarted.
      initialDelaySeconds: 3     # initial delay: wait 3 seconds after the container starts before the first probe.
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
liveness-http                       1/1     Running   0          23s
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
[root@k8smaster chapter4]# kubectl describe pods liveness-http
Name:         liveness-http
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 16:28:26 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness-http","namespace":"default"},"s...
Status:       Running
IP:           10.244.2.101
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:   docker://cc2d4ad2e37ec04b0d629c15d3033ecf9d4ab7453349ab40def9eb8cfca28936
    Image:          nginx:1.12-alpine
    Image ID:       docker-pullable://nginx@sha256:3a7edf11b0448f171df8f4acac8850a55eff30d1d78c46cd65e7bc8260b0be5d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 06 Dec 2019 16:28:27 +0800
    Ready:          True
    Restart Count:  0     # the restart count is 0; let's exec into the container and delete the check file by hand to test.
    Liveness:       http-get http://:http/healthz delay=3s timeout=1s period=2s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  70s   default-scheduler  Successfully assigned default/liveness-http to k8snode2
  Normal  Pulling    70s   kubelet, k8snode2  Pulling image "nginx:1.12-alpine"
  Normal  Pulled     69s   kubelet, k8snode2  Successfully pulled image "nginx:1.12-alpine"
  Normal  Created    69s   kubelet, k8snode2  Created container liveness-demo
  Normal  Started    69s   kubelet, k8snode2  Started container liveness-demo
[root@k8smaster chapter4]# kubectl exec -it liveness-http -- /bin/sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    healthz     index.html
/usr/share/nginx/html # rm -rf healthz 
/usr/share/nginx/html # exit
[root@k8smaster chapter4]# kubectl describe pods liveness-http
Name:         liveness-http
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 16:28:26 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness-http","namespace":"default"},"s...
Status:       Running
IP:           10.244.2.101
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:   docker://90f3016f707bcfc0e22c42dac54f4e4691e74db7fcc5d5d395f22c482c9ea704
    Image:          nginx:1.12-alpine
    Image ID:       docker-pullable://nginx@sha256:3a7edf11b0448f171df8f4acac8850a55eff30d1d78c46cd65e7bc8260b0be5d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 06 Dec 2019 16:34:34 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 06 Dec 2019 16:28:27 +0800
      Finished:     Fri, 06 Dec 2019 16:34:33 +0800
    Ready:          True
    Restart Count:  1  # the restart count has become 1.
    Liveness:       http-get http://:http/healthz delay=3s timeout=1s period=2s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m24s                default-scheduler  Successfully assigned default/liveness-http to k8snode2
  Normal   Pulling    6m24s                kubelet, k8snode2  Pulling image "nginx:1.12-alpine"
  Normal   Pulled     6m23s                kubelet, k8snode2  Successfully pulled image "nginx:1.12-alpine"
  Warning  Unhealthy  17s (x2 over 19s)    kubelet, k8snode2  Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    17s                  kubelet, k8snode2  Container liveness-demo failed liveness probe, will be restarted
  Normal   Pulled     17s                  kubelet, k8snode2  Container image "nginx:1.12-alpine" already present on machine
  Normal   Created    16s (x2 over 6m23s)  kubelet, k8snode2  Created container liveness-demo
  Normal   Started    16s (x2 over 6m23s)  kubelet, k8snode2  Started container liveness-demo
[root@k8smaster chapter4]# kubectl exec -it liveness-http -- /bin/sh  # check the health file again: it has been recreated.
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    healthz     index.html
/usr/share/nginx/html # exit
[root@k8smaster chapter4]# kubectl delete -f liveness-http.yaml 
pod "liveness-http" deleted
Readiness checking with ReadinessProbe:
ExecAction example:
[root@k8smaster chapter4]# cat readiness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness-exec
  name: readiness-exec
spec:
  containers:
  - name: readiness-demo
    image: busybox
    args: ["/bin/sh", "-c", "while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done"] 
    readinessProbe:
      exec:
        command: ["test", "-e", "/tmp/ready"]
      initialDelaySeconds: 5
      periodSeconds: 5
[root@k8smaster chapter4]# kubectl apply -f readiness-exec.yaml 
pod/readiness-exec created
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      1/1     Running   0          57s
[root@k8smaster chapter4]# kubectl exec readiness-exec -- rm -f /tmp/ready
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      0/1     Running   0          2m46s
[root@k8smaster chapter4]# kubectl exec readiness-exec -- touch /tmp/ready
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      1/1     Running   0          3m54s
Container restart policy:

 A Pod object may be terminated because a container program crashes, because a container requests more resources than its limits allow, and for other reasons. Whether it is then recreated depends on its restart policy (restartPolicy) attribute:

  • Always: restart the Pod object whenever it terminates; this is the default.
  • OnFailure: restart the Pod object only when it terminates in error (see the sketch below).
  • Never: never restart.
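restartPolicy sits at the Pod level, alongside containers, and applies to every container in the Pod; a sketch for a one-shot task that should only be retried on error (the name and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: onetime-task
spec:
  restartPolicy: OnFailure     # restart only on a non-zero exit; the default is Always
  containers:
  - name: task
    image: busybox
    command: ["/bin/sh", "-c", "echo done"]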
Pod termination flow:
  • A user requests the deletion of a Pod, submitting the request to the API Server.
  • The API Server records the deletion in etcd, but because of the grace period it does not delete the Pod immediately.
  • The API Server then marks the Pod as terminating (Terminating) and notifies the kubelet on the Pod's node.
  • The kubelet talks to the docker engine, which terminates the Pod's containers; before the main container is terminated, the pre stop hook runs (a sketch follows this list). When it finishes, the docker engine stops the container and reports back to the API Server.
  • The API Server also notifies the Endpoint Controller, which removes the Pod from the endpoint lists of the matching Services. If the containers are still running when the grace period expires, a KILL signal is sent to the container processes; once they are killed, the result is reported back to the API Server.
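The pre stop hook is declared symmetrically to the postStart hook used in the HTTPGetAction example above; a sketch (the sleep is just a stand-in for real cleanup work):

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30      # the grace period discussed above
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # runs before the container is stopped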
Restricting Pod privileges (security context):

 The purpose of a Security Context is to constrain the behavior of untrusted containers, protecting the host system and the other containers on it from their influence.

K8s provides three ways to configure a Security Context (a sketch of the first two follows this list):

  • Container-level Security Context: applies only to the specified container.
  • Pod-level Security Context: applies to all containers in the Pod, and to its volumes.
  • Pod Security Policies: apply to all Pods and volumes in the cluster.
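A sketch of the first two levels (all values are illustrative; where the two overlap, the container-level setting wins):

apiVersion: v1
kind: Pod
metadata:
  name: secctx-demo
spec:
  securityContext:             # Pod-level: applies to all containers and volumes
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    securityContext:           # container-level: overrides the Pod-level runAsUser
      runAsUser: 2000
      allowPrivilegeEscalation: false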
Pod resource limits:
Quotas for container compute resources:
  • CPU is a compressible resource, meaning its allocation can be shrunk on demand; memory is (currently) an incompressible resource.
  • How CPU is measured:
    • One core equals 1000 millicores, i.e. 1 = 1000m and 0.5 = 500m.
  • How memory is measured:
    • The default unit is bytes; the suffixes E, P, T, G, M, and K, or the forms Ei, Pi, Ti, Gi, Mi, and Ki, can also be used.
Ways to limit resources:
  • priorityClassName: controls the Pod's priority when it competes for the operating system's resources.
  • resources: bounds the range (upper and lower) of CPU, memory, disk, and other resources the Pod may use:
    • limits: the upper bound.
    • requests: the lower bound.

[root@k8smaster chapter4]# cat stress-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: stress-pod
spec:
  containers:
  - name: stress
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "-m 1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "400m"
[root@k8smaster chapter4]# cat memleak-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memleak-pod 
spec:
  containers:
  - name: simmemleak
    image: saadali/simmemleak
    resources:
      requests:
        memory: "64Mi"
        cpu: "1"
      limits:
        memory: "64Mi"
        cpu: "1"
[root@k8smaster chapter4]# kubectl apply -f memleak-pod.yaml 
pod/memleak-pod created
[root@k8smaster chapter4]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
memleak-pod                         0/1     ContainerCreating   0          6s
myapp-deployment-558f94fb55-plk4v   1/1     Running             3          39d
myapp-deployment-558f94fb55-rd8f5   1/1     Running             3          39d
myapp-deployment-558f94fb55-zzmpg   1/1     Running             3          39d
mypod                               1/1     Running             2          6d
nginx-deployment-6f77f65499-8g24d   1/1     Running             3          39d
pod-demo                            2/2     Running             4          6d1h
readiness-exec                      1/1     Running             1          3d22h
[root@k8smaster chapter4]# kubectl describe pod memleak-pod
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  76s                default-scheduler  Successfully assigned default/memleak-pod to k8snode1
Normal   Pulling    21s (x4 over 75s)  kubelet, k8snode1  Pulling image "saadali/simmemleak"
Normal   Pulled     20s (x4 over 64s)  kubelet, k8snode1  Successfully pulled image "saadali/simmemleak"
Normal   Created    20s (x4 over 64s)  kubelet, k8snode1  Created container simmemleak
Normal   Started    20s (x4 over 64s)  kubelet, k8snode1  Started container simmemleak
Warning  BackOff    9s (x7 over 62s)   kubelet, k8snode1  Back-off restarting failed container     # the memory limit we set is too small, so the container keeps failing to start.
[root@k8smaster chapter4]# kubectl describe pod memleak-pod  | grep Reason
  Reason:       CrashLoopBackOff 
  Reason:       OOMKilled     # the start-up failure is caused by running out of memory.
Type     Reason     Age                    From               Message
Pod quality-of-service classes:

 Based on a Pod object's requests and limits attributes, Kubernetes sorts Pod objects into three Quality of Service (QoS) classes: BestEffort, Burstable, and Guaranteed:

  • Guaranteed: every container sets requests and limits with equal values for CPU, and every container sets requests and limits with equal values for memory; such Pod resources are automatically placed in this class and have the highest priority.
  • Burstable: at least one container sets a requests attribute for CPU or memory, but the Pod does not meet the Guaranteed criteria; such Pod resources are automatically placed in this class and have medium priority.
  • BestEffort: Pod resources that set no requests or limits attribute on any container are automatically placed in this class; their priority is the lowest.

[root@k8smaster chapter4]# kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
memleak-pod                         0/1     CrashLoopBackOff   7          13m
myapp-deployment-558f94fb55-plk4v   1/1     Running            3          39d
myapp-deployment-558f94fb55-rd8f5   1/1     Running            3          39d
myapp-deployment-558f94fb55-zzmpg   1/1     Running            3          39d
mypod                               1/1     Running            2          6d
nginx-deployment-6f77f65499-8g24d   1/1     Running            3          39d
pod-demo                            2/2     Running            4          6d1h
readiness-exec                      1/1     Running            1          3d23h
[root@k8smaster chapter4]# kubectl describe pod memleak-pod  | grep QoS
QoS Class:       Guaranteed
[root@k8smaster chapter4]# kubectl describe pod mypod  | grep QoS
QoS Class:       BestEffort
Summary:
apiVersion, kind, metadata, spec, status (read-only)   # required fields.

spec:                                                  # fields nested inside spec.
    containers
    nodeSelector
    nodeName
    restartPolicy                                       # restart policy.
        Always, Never, OnFailure
    containers:
        name
        image
        imagePullPolicy: Always, Never, IfNotPresent    # image pull policy.
        ports:
            name
            containerPort
        livenessProbe
        readinessProbe
        lifecycle
    ExecAction: exec
    TCPSocketAction: tcpSocket
    HTTPGetAction: httpGet