Kubernetes version: kubeadm v1.13.4
metrics-server
Starting with Kubernetes 1.8, resource usage metrics such as container CPU and memory usage are available through the Metrics API. These metrics can be accessed directly by users, for example with the kubectl top command, or consumed by controllers in the cluster (such as the Horizontal Pod Autoscaler) to make scaling decisions.
Deployment files:
auth-delegator.yaml
auth-reader.yaml
metrics-apiservice.yaml
metrics-server-deployment.yaml
metrics-server-service.yaml
resource-reader.yaml
for i in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml ;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/metrics-server/$i;done
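The same download loop written out more readably. Note that the master branch drifts over time, so pinning to a versioned branch is safer; release-1.13 below is an assumption chosen to match this cluster's version:

```shell
# Assumption: release-1.13 is the branch matching this cluster; master drifts.
RELEASE_BRANCH="release-1.13"
BASE_URL="https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE_BRANCH}/cluster/addons/metrics-server"
FILES="auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml \
metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml"

# Build the full URL list; swap `echo` for `wget` to actually download.
URLS=""
for f in ${FILES}; do
  URLS="${URLS}${BASE_URL}/${f} "
done
echo "${URLS}"
```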
The files downloaded this way need some modifications before use.
auth-delegator.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
auth-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
metrics-apiservice.yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
metrics-server-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-v0.3.1
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        #- --cpu={{ base_metrics_server_cpu }}
        - --cpu=20m
        - --extra-cpu=0.5m
        #- --memory={{ base_metrics_server_memory }}
        #- --extra-memory={{ metrics_server_memory_per_node }}Mi
        - --memory=50Mi
        - --extra-memory=5Mi
        - --threshold=5
        - --deployment=metrics-server-v0.3.1
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
        # Specifies the smallest cluster (defined in number of nodes)
        # resources will be scaled to.
        #- --minClusterSize={{ metrics_server_min_cluster_size }}
        - --minClusterSize=1
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
Port 10250 is the kubelet's HTTPS port, and connecting to it normally requires certificate verification, so we add --kubelet-insecure-tls to tell metrics-server not to verify the kubelet's TLS certificate. (Earlier versions used the --source= parameter to specify this.)
The metrics-server container cannot resolve the nodes' hostnames through CoreDNS (10.96.0.10:53). Since metrics-server connects to nodes by hostname by default, we add a parameter to make it connect to the node IPs instead: "--kubelet-preferred-address-types=InternalIP".
metrics-server-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
After the deployment completes, run:
kubectl get pod -n kube-system    # check whether the deployment succeeded
kubectl top node
kubectl top pod
If you see the following error, try restarting kubelet on the node where the pod is running:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
HPA
The Horizontal Pod Autoscaler automatically adjusts the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with custom-metrics support, based on metrics provided by some other application). Note that Horizontal Pod Autoscaling does not apply to objects that cannot be scaled, such as DaemonSets.
If some of a pod's containers do not set the relevant resource requests, the pod's CPU utilization is undefined and the autoscaler takes no action on that metric.
Algorithm: desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
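The formula can be checked numerically; a minimal sketch:

```python
import math

def desired_replicas(current_replicas, current_metric, desired_metric):
    """HPA rule: ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]."""
    return math.ceil(current_replicas * current_metric / desired_metric)

# Two pods averaging 60% CPU against a 30% target are scaled to four.
print(desired_replicas(2, 60, 30))  # 4
```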
cat nginx-deploy-hpa.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deploy-hpa
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-hpa
    spec:
      containers:
      - name: nginx-hpa
        image: nginx:1.8
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 30m
            memory: 30Mi
          limits:
            cpu: 30m
            memory: 30Mi
Create the HPA
From the command line:
kubectl autoscale deploy nginx-deploy-hpa --min=2 --max=10 --cpu-percent=30
From a file:
vi nginx-hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deploy-hpa
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deploy-hpa
  targetCPUUtilizationPercentage: 30
Put the pods under load and you can watch the pod count grow automatically.
Stop the load, and after a while the pod count automatically shrinks back down.
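This scale-up/scale-down behavior can be sketched with the algorithm above. A toy simulation using the min=2 / max=10 / target=30% settings from this HPA; the load profile is invented, and it ignores the real controller's sync period, tolerance band, and scale-down stabilization delay:

```python
import math

MIN_REPLICAS, MAX_REPLICAS, TARGET_CPU = 2, 10, 30  # settings from the HPA

def step(replicas, avg_cpu_percent):
    """One control-loop tick: apply the scaling formula, clamped to min/max."""
    desired = math.ceil(replicas * avg_cpu_percent / TARGET_CPU)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, desired))

# Invented load profile: average CPU utilization (%) observed each tick,
# rising under load and falling back once the load test ends.
replicas = 2
history = []
for cpu in [25, 90, 90, 60, 10, 10]:
    replicas = step(replicas, cpu)
    history.append(replicas)
print(history)  # [2, 6, 10, 10, 4, 2]
```

The replica count climbs to the maximum while the load lasts, then decays back to the minimum once utilization drops below target.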
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/