Helm
Why Helm?
Before Helm, deploying an application to Kubernetes meant creating a Deployment, a Service, a ConfigMap, and so on, one object at a time, which is tedious. And as more projects adopt microservices, deploying and managing complex containerized applications only gets harder.
By packaging these resources together and supporting versioned, controllable releases, Helm greatly simplifies the deployment and management of Kubernetes applications.
Key Helm concepts
Helm is the official package manager for Kubernetes, analogous to YUM on CentOS; it encapsulates the deployment workflow. Helm has three important concepts: chart, release, and repository.
- chart: the collection of information needed to create an application, including configuration templates for the various Kubernetes objects, parameter definitions, dependencies, and documentation. A chart is a self-contained logical unit of deployment; think of it as a software package in apt or yum.
- release: a running instance of a chart, representing a deployed application. Installing a chart into a Kubernetes cluster creates a release; the same chart can be installed into the same cluster many times, and each installation is a separate release.
- repository: a place to publish and store charts.
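Concretely, a chart is just a directory tree with a conventional layout; a minimal sketch (the chart name and file contents are illustrative):

```
hello-world/
├── Chart.yaml      # chart name, version, description
├── values.yaml     # default configuration values
└── templates/      # templated Kubernetes manifests
    ├── deployment.yaml
    └── service.yaml
```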
Helm consists of two components, the Helm client and the Tiller server, as shown in the figure below:
The Helm client creates and manages charts and releases, and communicates with Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
What Helm does
As a package manager for Kubernetes, Helm provides the following functions:
- Create new charts
- Package charts into tgz archives
- Upload charts to a chart repository, or download charts from one; the official chart repository is https://hub.helm.sh
- Install and uninstall charts in a Kubernetes cluster
- Manage the release cycle of charts installed with Helm
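These functions map onto a handful of client commands. A quick sketch of the Helm v2 workflow (chart and release names are placeholders):

```shell
# scaffold a new chart
$ helm create mychart
# check the chart for problems
$ helm lint mychart
# package it into mychart-0.1.0.tgz
$ helm package mychart
# install it as a release, inspect it, then remove it
$ helm install ./mychart --name myrelease
$ helm status myrelease
$ helm delete --purge myrelease
```

The first three commands only need the `helm` binary; install/status/delete additionally need a reachable cluster with Tiller running.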
Installing Helm
We install version 2.16.12 here:
# download
$ wget https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz
# unpack
$ tar zxvf helm-v2.16.12-linux-amd64.tar.gz
# put helm on the PATH
$ cd linux-amd64/
$ cp helm /usr/local/bin/
To install the server-side component, Tiller, this machine also needs kubectl and a kubeconfig file configured, so that kubectl can reach the API Server and work normally. Here the node1 host already has kubectl set up.
Because the Kubernetes API Server has RBAC enabled, we need to create a service account for Tiller, named tiller, and bind a suitable role to it; see "Role-based Access Control" in the Helm documentation for details. For simplicity we bind the built-in cluster-admin ClusterRole to it directly. Create rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Apply it:
$ kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Deploy Tiller into the Kubernetes cluster:
$ helm init --service-account tiller --skip-refresh
$ kubectl get pod -n kube-system
tiller-deploy-565984b594-vtr9h 1/1 Running 0 17m
$ helm version
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
The official Helm chart hub is at https://artifacthub.io/
Custom Helm templates
# create the chart directory
$ mkdir hello-world
$ cd hello-world
# create the self-describing file Chart.yaml; it must define name and version
$ cat << 'EOF' > Chart.yaml
name: hello-world
version: 1.0.0
EOF
# create the template files used to generate Kubernetes resource manifests
$ mkdir templates
$ cat << 'EOF' > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hub.adaixuezhang.cn/library/myapp:v1
        ports:
        - containerPort: 80
          protocol: TCP
EOF
$ cat << 'EOF' > templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
EOF
Create a release:
# the command helm install RELATIVE_PATH_TO_CHART creates a release
$ helm install .
Helm command help
$ helm --help
Configuring the Pod with Helm
# the configuration lives in values.yaml, at the chart root (not under templates/)
$ cat << 'EOF' > values.yaml
image:
  repository: hub.adaixuezhang.cn/library/myapp
  tag: 'v1'
EOF
# values defined in this file are available in the template files via the .Values object
$ cat << 'EOF' > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        ports:
        - containerPort: 80
          protocol: TCP
EOF
# values in values.yaml can be overridden at deploy time with --values YAML_FILE_PATH or --set key1=value1,key2=value2
$ helm install . --set image.tag='v2'
$ helm upgrade -f values.yaml test .
Debug
# since templates generate the manifests dynamically, it is very useful to preview the result beforehand
# the --dry-run --debug options print the rendered manifests without actually deploying anything
$ helm install . --dry-run --debug --set image.tag='v2'
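Helm v2 can also render the templates purely client-side, without contacting Tiller at all, which is handy in CI pipelines:

```shell
# render the chart locally, applying the same override
$ helm template . --set image.tag='v2'
```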
Functional component: dashboard
Deploy the dashboard with Helm.
Preparation:
# fetch the chart
$ helm fetch stable/kubernetes-dashboard
$ tar zxvf kubernetes-dashboard-1.11.1.tgz
$ cd kubernetes-dashboard/
$ ls
Chart.yaml README.md templates values.yaml
# prepare the image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1; pull it on a machine with external network access
Create kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com
rbac:
  clusterAdminRole: true
Deploy:
$ helm install . -n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml
Check:
$ kubectl get svc -n kube-system
kubernetes-dashboard ClusterIP 10.98.209.153 <none> 443/TCP 7m46s
# change the Service type to NodePort for easier access
$ kubectl -n kube-system edit svc kubernetes-dashboard
spec:
  ...
  type: NodePort
$ kubectl get svc -n kube-system
kubernetes-dashboard NodePort 10.98.209.153 <none> 443:31128/TCP 10m
Access the dashboard: https://host1:31128
Now configure the dashboard.
Choose token-based login:
Get the kubernetes-dashboard-token used to log in:
$ kubectl -n kube-system get secret |grep kubernetes-dashboard-token
kubernetes-dashboard-token-zk5h5 kubernetes.io/service-account-token 3 14m
$ kubectl -n kube-system describe secret kubernetes-dashboard-token-zk5h5
Name: kubernetes-dashboard-token-zk5h5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 67e98daf-5ad2-48bb-99cb-3645bfe47782
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Imtmck13OU5WbUhwSXJhX3RrNkZHWk1sTjI4T0pfeWVEOGJLN0tqa1p1U2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi16azVoNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY3ZTk4ZGFmLTVhZDItNDhiYi05OWNiLTM2NDViZmU0Nzc4MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Va3Cv5Px0M1CJvcj6u2ssr--2vNNMqKlHD3o0KZx7ZQs4QOt8HOC0_fn93s7N7qAMUGil0Oi9oOXG08EWH6sHX4V2w0HsYNTseUgjXmcxxPpoVzBZCTMeWd7GNGBSaH3DlVV_pVSnuWSpyIqGwiOC1CUJuufVNp1GLaUuk5J4CqniR-1Jtu2_Qab0wWVexJoK6hJQ-c1cvGPXIaeLNp09PMMgi-CdOrgqCdWAQhD3O-VHaGZzMRKDfwMal-IZ0ZE7xTGhmwHTjWI67tcoJxAQWwLY3vmH52QaBv_kPSsBBq73wAf0-T8Y1PA1x4MsgFUGBs8pgzvhr7giC8eKh4tsA
Paste the token into the token field and log in.
Troubleshooting
Logging in with the token returned a 404. The logs contain the following error:
2020/10/01 12:16:28 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Analysis: Heapster was the container-cluster monitoring and performance-analysis tool; HPA, the Dashboard, and kubectl top all relied on the data it collected. However, Heapster has been deprecated since Kubernetes 1.8 and replaced by metrics-server…
Fix: install heapster.
- heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        # default image is k8s.gcr.io/heapster-amd64:v1.5.4; replaced with the Aliyun mirror, since the Google registry is unreachable from China
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.4
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an add-on, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
- heapster-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:heapster
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - nodes
  - pods
  - nodes/stats
  verbs:
  - create
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
Functional components: metrics-server, Prometheus, resource limits
Component overview
Installation
# clone the project
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
Edit grafana-service.yaml to expose Grafana via a NodePort:
$ cd kube-prometheus/manifests
$ vim grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort  # added
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30100  # added
  selector:
    app: grafana
Edit prometheus-service.yaml to expose Prometheus via a NodePort:
$ vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30200
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
Edit alertmanager-service.yaml to expose Alertmanager via a NodePort:
$ vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30300
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
Deploy:
# deploy
$ kubectl apply -f manifests/setup/
$ kubectl apply -f manifests/
# check the deployment
$ kubectl -n monitoring get pods
# note: some pods fail to start because their images cannot be pulled; download and import those images manually (use kubectl -n monitoring describe pods POD_NAME to find out which)
# verify the deployment succeeded
$ kubectl top pods -n kube-system
Access Prometheus
URL: http://host1:30200
Check the targets page:
Prometheus's web UI supports basic queries, for example the CPU usage of every Pod in the cluster:
sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )
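A similar query, using the same (pre-Kubernetes-1.16 cAdvisor) label scheme as above, gives per-Pod memory usage:

```
sum by (pod_name)( container_memory_working_set_bytes{image!="", pod_name!=""} )
```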
Access Grafana
URL: http://host1:30100
Default credentials: admin/admin. Change the password to admin12345.
First add the data source and import a dashboard template: choose Prometheus as the data source and keep the default parameters.
HPA
Horizontal Pod Autoscaling automatically scales the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization.
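A minimal sketch of an HPA manifest targeting the hello-world Deployment created earlier (the 50% CPU target and the replica bounds are arbitrary illustrations):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```

The same object can be created imperatively with `kubectl autoscale deployment hello-world --cpu-percent=50 --min=1 --max=5`. Either way, the HPA needs metrics-server (or heapster on older clusters) to read CPU metrics, and the container must declare a CPU resource request for the utilization percentage to be meaningful.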
Functional component: EFK
Deploy EFK (Elasticsearch, Fluentd, Kibana) with Helm.
Add the Google incubator repository:
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
Deploy Elasticsearch:
$ kubectl create namespace efk
$ helm fetch incubator/elasticsearch
$ helm install --name els1 --namespace efk -f values.yaml incubator/elasticsearch
# verify from inside the cluster with a throwaway pod
$ kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
$ curl Elasticsearch:Port/_cat/nodes
Deploy Fluentd:
$ helm fetch stable/fluentd-elasticsearch
$ vim values.yaml
# update the Elasticsearch endpoint in it
$ helm install --name flu1 --namespace efk -f values.yaml stable/fluentd-elasticsearch
Deploy Kibana:
$ helm fetch stable/kibana --version 0.14.8 # keep the Kibana version in line with Elasticsearch
$ vim values.yaml
# update the Elasticsearch endpoint in it
$ helm install --name kib1 --namespace efk -f values.yaml stable/kibana