Kubernetes Study Notes (4): Controllers

Pods fall into two categories: standalone (self-managed) pods and pods managed by a controller. All pods created in Study Notes (3) were standalone; this post covers pods managed by controllers. Unless otherwise noted, "controller" in this post means a pod controller.
Pod controller
A pod controller is an intermediate layer for managing pods: it keeps the actual state of pod resources in line with the state the user desires.
There are several kinds of pod controllers:
1) ReplicaSet: creates and manages stateless pod replicas on the user's behalf, keeps the number of replicas in line with the desired count, and supports scale-out and scale-in. It is the next-generation ReplicationController. Kubernetes recommends not using ReplicaSet directly but using Deployment instead, which works on top of ReplicaSet: it does not control pods directly, it controls a ReplicaSet, which in turn controls the pods. Deployment also supports rolling updates and rollbacks, and provides declarative configuration.
2) DaemonSet: ensures that every node in the cluster runs exactly one replica of a specific pod, typically for OS-level daemon tasks. It can also be restricted to a subset of the cluster's nodes, still running one replica per node.
3) Deployment and DaemonSet both manage stateless, daemon-like applications, i.e. long-running, memory-resident processes.
4) Job: runs a task to completion and ensures it is executed only once (a minimal sketch of a Job and a CronJob follows this list).
5) CronJob: runs a task periodically on a schedule.
6) StatefulSet: manages stateful applications.
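Job and CronJob are not demonstrated later in this post, so here is a minimal sketch of each (the names pi-job and hello-cron, the images and the schedule are illustrative only; on newer clusters the CronJob apiVersion is batch/v1 instead of batch/v1beta1):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  backoffLimit: 4            # retry a failed pod at most 4 times
  template:
    spec:
      restartPolicy: Never   # Job pods must use Never or OnFailure
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
---
apiVersion: batch/v1beta1    # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"    # run every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello from the cluster"]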
There is no one-to-one correspondence between nodes and pods.
The core parts of a controller manifest are: 1) the desired number of replicas; 2) a label selector; 3) a pod template.
Example 1 - ReplicaSet:

[root@docker79 manifests]# mkdir controller
[root@docker79 manifests]# cd controller/
[root@docker79 controller]# vim rs-demo.yaml
[root@docker79 controller]# cat rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: canary
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
        release: canary
        environment: qa
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@docker79 controller]# kubectl apply -f rs-demo.yaml
replicaset.apps/nginx-rs created
[root@docker79 controller]# kubectl get pods -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
nginx-rs-hncc9   1/1       Running   0          1m        10.244.1.17   docker78   <none>
nginx-rs-wnxnx   1/1       Running   0          1m        10.244.2.11   docker77   <none>
[root@docker79 controller]# curl -I http://10.244.1.17
HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Thu, 27 Sep 2018 02:16:14 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 12 Sep 2018 00:04:31 GMT
Connection: keep-alive
ETag: "5b98580f-264"
Accept-Ranges: bytes
[root@docker79 controller]#

[root@docker79 controller]# kubectl get rs
NAME       DESIRED   CURRENT   READY     AGE
nginx-rs   2         2         2         2m
[root@docker79 controller]# kubectl edit rs nginx-rs
replicaset.extensions/nginx-rs edited  
[root@docker79 controller]#

Note:
Change the image version in the ReplicaSet to nginx:1.15-alpine and save. The running containers are not upgraded to 1.15-alpine automatically; only when an old pod is deleted by hand does the controller create a replacement pod running 1.15-alpine. Deleting one pod at a time and letting Kubernetes recreate it at the new version is effectively a manual, canary-style rolling upgrade.

Continued:
[root@docker79 controller]# kubectl delete pod nginx-rs-hncc9
pod "nginx-rs-hncc9" deleted
[root@docker79 controller]# kubectl get pods -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
nginx-rs-6tnz5   1/1       Running   0          22s       10.244.1.18   docker78   <none>
nginx-rs-wnxnx   1/1       Running   0          11m       10.244.2.11   docker77   <none>
[root@docker79 controller]# curl -I 10.244.1.18
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 02:27:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes

[root@docker79 controller]#
[root@docker79 controller]# kubectl delete -f rs-demo.yaml
replicaset.apps "nginx-rs" deleted


Example 2 - Deployment:

[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: canary
  template:
    metadata:
      labels:
        app: nginx
        release: canary
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy created
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
nginx-deploy-688454bc5d-ft8wm   1/1       Running   0          17s       10.244.1.19   docker78   <none>
nginx-deploy-688454bc5d-pnmpn   1/1       Running   0          17s       10.244.2.12   docker77   <none>
[root@docker79 controller]# kubectl get rs
NAME                      DESIRED   CURRENT   READY     AGE
nginx-deploy-688454bc5d   2         2         2         35s
[root@docker79 controller]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   2         2         2            2           46s
[root@docker79 controller]#
Note: a ReplicaSet's name is the Deployment name plus a hash of the pod template (the pod-template-hash); pod names are the ReplicaSet name plus a random suffix (a quick check follows).
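To see how pods are tied to their ReplicaSet, the pod-template-hash label can be inspected (the hash values will differ from cluster to cluster):

kubectl get rs --show-labels     # the ReplicaSet carries a pod-template-hash=<hash> label
kubectl get pods --show-labels   # its pods carry the same pod-template-hash label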
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# grep replicas deploy-demo.yaml
  replicas: 3
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy configured
[root@docker79 controller]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
nginx-deploy-688454bc5d-ft8wm   1/1       Running   0          5m
nginx-deploy-688454bc5d-pnmpn   1/1       Running   0          5m
nginx-deploy-688454bc5d-q24pr   1/1       Running   0          7s
[root@docker79 controller]#
Note: change replicas to 3 in deploy-demo.yaml, then run kubectl apply -f deploy-demo.yaml again.
[root@docker79 controller]# kubectl describe deploy nginx-deploy
......
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
......
Note: when StrategyType is RollingUpdate, the RollingUpdateStrategy field can be set. It has two sub-fields: maxSurge (how many pods may exist above the desired count, as an absolute number or a percentage) and maxUnavailable (how many pods may be unavailable, as an absolute number or a percentage). A sketch follows.
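For reference, this is roughly how the strategy would be set explicitly under spec in deploy-demo.yaml (the values 1 and 0 are only an example, not what the demo above used):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during an update
      maxUnavailable: 0    # never drop below the desired count
  # selector and template unchanged from deploy-demo.yaml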

[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# grep image deploy-demo.yaml
        image: nginx:1.15-alpine
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy configured
[root@docker79 controller]#
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
nginx-deploy-7488bbd64f-7vnps   1/1       Running   0          3m        10.244.1.22   docker78   <none>
nginx-deploy-7488bbd64f-nxlc2   1/1       Running   0          4m        10.244.1.21   docker78   <none>
nginx-deploy-7488bbd64f-v5qws   1/1       Running   0          4m        10.244.2.13   docker77   <none>
[root@docker79 controller]# curl -I http://10.244.1.22
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 02:53:28 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes
[root@docker79 controller]#
Note: change the image version in deploy-demo.yaml to 1.15-alpine, then run kubectl apply -f deploy-demo.yaml again; the update can be watched in progress with kubectl get pods -l app=nginx -w.
[root@docker79 controller]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY     AGE       CONTAINERS        IMAGES              SELECTOR
nginx-deploy-688454bc5d   0         0         0         15m       nginx-container   nginx:1.14-alpine   app=nginx,pod-template-hash=2440106718,release=canary
nginx-deploy-7488bbd64f   3         3         3         5m        nginx-container   nginx:1.15-alpine   app=nginx,pod-template-hash=3044668209,release=canary
[root@docker79 controller]#
[root@docker79 controller]# kubectl rollout history deployment/nginx-deploy
deployments "nginx-deploy"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@docker79 controller]# kubectl rollout status deployment/nginx-deploy
deployment "nginx-deploy" successfully rolled out
[root@docker79 controller]# kubectl rollout undo deployment/nginx-deploy
deployment.extensions/nginx-deploy
[root@docker79 controller]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY     AGE       CONTAINERS        IMAGES              SELECTOR
nginx-deploy-688454bc5d   3         3         3         18m       nginx-container   nginx:1.14-alpine   app=nginx,pod-template-hash=2440106718,release=canary
nginx-deploy-7488bbd64f   0         0         0         8m        nginx-container   nginx:1.15-alpine   app=nginx,pod-template-hash=3044668209,release=canary
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
nginx-deploy-688454bc5d-7nvvm   1/1       Running   0          23s       10.244.2.14   docker77   <none>
nginx-deploy-688454bc5d-jts2x   1/1       Running   0          22s       10.244.1.24   docker78   <none>
nginx-deploy-688454bc5d-slkn7   1/1       Running   0          24s       10.244.1.23   docker78   <none>
[root@docker79 controller]# curl -I 10.244.1.24
HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Thu, 27 Sep 2018 02:58:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 12 Sep 2018 00:04:31 GMT
Connection: keep-alive
ETag: "5b98580f-264"
Accept-Ranges: bytes
[root@docker79 controller]#
Notes:
kubectl get rs -o wide shows the current state of each revision's ReplicaSet.
kubectl rollout history shows the rollout history.
kubectl rollout status shows the rollout status.
kubectl rollout undo rolls back to the previous revision; add --to-revision=1 to roll back to the first revision.
kubectl rollout pause pauses a rollout.
kubectl rollout resume resumes a paused rollout.
You can also set revisionHistoryLimit under deploy.spec to limit how many old revisions are kept for rollback (a sketch follows).
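A sketch of limiting the retained history and rolling back to a specific revision (the limit of 5 is just an example value):

# in deploy-demo.yaml, under spec:
spec:
  revisionHistoryLimit: 5   # keep at most 5 old ReplicaSets for rollback

# roll back to revision 1 explicitly:
kubectl rollout undo deployment/nginx-deploy --to-revision=1
# pause and later resume a rollout:
kubectl rollout pause deployment/nginx-deploy
kubectl rollout resume deployment/nginx-deploy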
[root@docker79 controller]# kubectl patch deployment nginx-deploy -p '{"spec":{"replicas":4}}'
deployment.extensions/nginx-deploy patched
[root@docker79 controller]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
nginx-deploy-688454bc5d-7nvvm   1/1       Running   0          7m
nginx-deploy-688454bc5d-bc94s   1/1       Running   0          11s
nginx-deploy-688454bc5d-jts2x   1/1       Running   0          7m
nginx-deploy-688454bc5d-slkn7   1/1       Running   0          8m
[root@docker79 controller]# kubectl delete -f deploy-demo.yaml
deployment.apps "nginx-deploy" deleted
[root@docker79 controller]#

Example 3 - DaemonSet:

[root@docker79 controller]# vim ds-demo.yaml
[root@docker79 controller]# cat ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@docker79 controller]# kubectl apply -f ds-demo.yaml
daemonset.apps/filebeat-ds created
[root@docker79 controller]# kubectl get pods -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
filebeat-ds-mzff4   1/1       Running   0          29s       10.244.1.25   docker78   <none>
filebeat-ds-zzl8q   1/1       Running   0          29s       10.244.2.16   docker77   <none>
[root@docker79 controller]#

Note: by default a DaemonSet runs one pod replica on every non-master node (masters are excluded by their taint). See the sketch below for restricting it to a subset of nodes and for updating its image.
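To run the DaemonSet only on a subset of nodes, or to also tolerate the master taint, the pod template can carry a nodeSelector and/or a toleration; the image can also be updated in place. A sketch (the node label disktype=ssd and the 5.6.6-alpine tag are made-up examples):

# in ds-demo.yaml, inside spec.template.spec:
      nodeSelector:
        disktype: ssd                         # only nodes labeled disktype=ssd run the pod
      tolerations:
      - key: node-role.kubernetes.io/master   # allow scheduling onto master nodes too
        effect: NoSchedule

# rolling-update the DaemonSet image:
kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine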

Example 4 - Service:
Because pods have a lifecycle (their IPs change as they are recreated), a Service layer sits between clients and pods. Services rely on the cluster DNS (CoreDNS, or kube-dns in older clusters).
kube-proxy continuously monitors the API server for changes to Service objects; this process is called a watch.
A Service can be implemented in three ways: userspace, iptables, and ipvs. (The original post illustrates the three proxy modes with figures, not reproduced here; a sketch of how to check the mode follows.)
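On a kubeadm-deployed cluster (an assumption; other installers store the setting elsewhere), the proxy mode kube-proxy uses can be inspected and changed through its ConfigMap; an empty mode falls back to iptables on these versions:

kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
# change the "mode" field (e.g. to "ipvs"), then restart the kube-proxy pods for it to take effect:
kubectl edit configmap kube-proxy -n kube-system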

[root@docker79 controller]# vim svc-demo.yaml
[root@docker79 controller]# cat svc-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis-container
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - name: redis-port
    port: 6379
    targetPort: 6379
[root@docker79 controller]# kubectl apply -f svc-demo.yaml
deployment.apps/redis-deploy created
service/redis-svc created
[root@docker79 controller]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
filebeat-ds-mzff4               1/1       Running   0          16m
filebeat-ds-zzl8q               1/1       Running   0          16m
redis-deploy-7587b96c74-ddbv8   1/1       Running   0          10s
[root@docker79 controller]#
[root@docker79 controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    3d
redis-svc    ClusterIP   10.97.97.97   <none>        6379/TCP   38s
[root@docker79 controller]# kubectl describe svc redis-svc
Name:              redis-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"redis-svc","namespace":"default"},"spec":{"clusterIP":"10.97.97.97","ports":[{...
Selector:          app=redis,role=logstor
Type:              ClusterIP
IP:                10.97.97.97
Port:              redis-port  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.244.1.27:6379
Session Affinity:  None
Events:            <none>
[root@docker79 controller]#

Service Type notes:
ClusterIP: reachable only from inside the cluster. The internal DNS record format is SVC_NAME.NS_NAME.DOMAIN, and the default domain suffix is svc.cluster.local.
NodePort: allows access from outside the cluster. Traffic path: client request --> NodeIP:nodePort --> ClusterIP:ServicePort --> PodIP:containerPort.
ExternalName: maps an external service into the cluster. It is usually a CNAME pointing to an external FQDN, which lets pods reach the external service by an in-cluster Service name (a minimal sketch follows these notes).
No ClusterIP (clusterIP: None): called a headless Service. The Service name resolves directly to the pod IPs, so no ClusterIP is needed.
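ExternalName is not demonstrated in this post; here is a minimal sketch (the Service name external-db and the FQDN db.example.com are made up):

apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: default
spec:
  type: ExternalName
  externalName: db.example.com   # pods resolve external-db.default.svc.cluster.local to this CNAME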

[root@docker79 controller]# vim svc-demo2.yaml
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy created
[root@docker79 controller]# cat svc-demo2.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - name: httpport
    port: 80
    targetPort: 80
    nodePort: 30080
[root@docker79 controller]# kubectl apply -f svc-demo2.yaml
service/nginx-svc created
[root@docker79 controller]#
[root@docker79 controller]# kubectl get pod -l release=canary
NAME                            READY     STATUS    RESTARTS   AGE
nginx-deploy-7488bbd64f-7bp5k   1/1       Running   0          1m
nginx-deploy-7488bbd64f-ndqx9   1/1       Running   0          1m
[root@docker79 controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3d
nginx-svc    NodePort    10.99.99.99   <none>        80:30080/TCP   26s
redis-svc    ClusterIP   10.97.97.97   <none>        6379/TCP       12m
[root@docker79 controller]#
The Service can now be reached from any host outside the cluster:
yuandeMacBook-Pro:~ yuanjicai$ curl -I http://192.168.20.79:30080
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 04:01:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes

[root@docker79 controller]# kubectl delete -f svc-demo2.yaml
service "nginx-svc" deleted

Note: the sessionAffinity option can be set with the following commands:
kubectl patch svc nginx-svc -p '{"spec":{"sessionAffinity":"ClientIP"}}'
kubectl describe svc nginx-svc
kubectl patch svc nginx-svc -p '{"spec":{"sessionAffinity":"None"}}'

[root@docker79 controller]# vim svc-headless-demo.yaml
[root@docker79 controller]# cat svc-headless-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx
    release: canary
  clusterIP: None
  ports:
  - name: httpport
    port: 80
    targetPort: 80
[root@docker79 controller]# kubectl apply -f svc-headless-demo.yaml
service/nginx-svc created
[root@docker79 controller]# kubectl describe svc nginx-svc
Name:              nginx-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-svc","namespace":"default"},"spec":{"clusterIP":"None","ports":[{"name":...
Selector:          app=nginx,release=canary
Type:              ClusterIP
IP:                None
Port:              httpport  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.28:80,10.244.2.17:80
Session Affinity:  None
Events:            <none>
[root@docker79 controller]#

[root@docker79 controller]# dig -t  A  nginx-svc.default.svc.cluster.local.  @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55045
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-svc.default.svc.cluster.local. IN    A

;; ANSWER SECTION:
nginx-svc.default.svc.cluster.local. 5 IN A 10.244.1.28
nginx-svc.default.svc.cluster.local. 5 IN A 10.244.2.17

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: 四 9月 27 12:09:25 CST 2018
;; MSG SIZE  rcvd: 166

[root@docker79 controller]#

Note: with clusterIP: None (a headless Service), the Service name resolves directly to the pod IPs, as the dig output above shows, so no ClusterIP is needed.
