Helm Deployment and Basic Usage

Microservices and containerization make deploying and managing complex applications significantly harder. Helm, an open-source project in the Kubernetes service-orchestration space, is a package manager for Kubernetes applications, roughly the equivalent of apt-get or yum. It was started by Deis, a company that has since been acquired by Microsoft.

By packaging applications, Helm supports versioned, controlled releases and greatly reduces the complexity of deploying and managing applications on Kubernetes.

As businesses containerize and move toward microservice architectures, huge monoliths are decomposed into many services, which tames the monolith's complexity: each microservice can be deployed and scaled independently, enabling agile development and rapid iteration. But everything has two sides. While microservices bring many benefits, splitting an application into components sharply increases the number of services. For Kubernetes orchestration, each component has its own resource files and is deployed and scaled independently, which creates several challenges:

  1. Managing, editing, and updating large numbers of Kubernetes resource files
  2. Deploying a complex application made up of many resource files
  3. Sharing and reusing Kubernetes configurations and applications
  4. Parameterizing configuration templates to support multiple environments
  5. Managing application releases: rollback, diff, and release history
  6. Controlling individual phases within a deployment cycle
  7. Verifying a deployment after release

Helm is designed to solve exactly these problems.

Helm packages Kubernetes resources (deployments, services, ingresses, and so on) into a chart, and charts are stored in and shared through chart repositories. Helm makes releases configurable, supports versioning of release configurations, and simplifies version control, packaging, publishing, deleting, and upgrading applications on Kubernetes.

This article gives a brief introduction to Helm's purpose, architecture, installation, and usage.

Purpose

As a package manager for Kubernetes, Helm provides the following functions:

  • Create new charts
  • Package charts into tgz archives
  • Upload charts to, and download them from, chart repositories
  • Install and uninstall charts in a Kubernetes cluster
  • Manage the release lifecycle of charts installed with Helm

Helm has three core concepts:

  1. chart: everything needed to create an instance of a Kubernetes application
  2. config: the configuration applied when publishing a release
  3. release: a running instance of a chart combined with a specific config
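A chart is simply a directory tree in a fixed layout. As an illustration, the sketch below builds a minimal skeleton by hand (the real `helm create` command scaffolds a much fuller version; the file contents here are reduced to the bare essentials):

```shell
# Build a minimal chart layout by hand, just to show the structure.
mkdir -p hello-chart/templates

# Chart.yaml identifies the chart; name and version are required.
cat > hello-chart/Chart.yaml <<'EOF'
apiVersion: v1
name: hello-chart
version: 0.1.0
EOF

# values.yaml holds the default config values that templates reference.
cat > hello-chart/values.yaml <<'EOF'
replicaCount: 1
EOF

ls hello-chart   # → Chart.yaml  templates  values.yaml
```

The templates directory holds the Kubernetes manifests, with values from values.yaml (the config) substituted in at install time to produce a release.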

Architecture

(figure: Helm architecture)

Components

Helm consists of two components:

  1. The Helm client is the user-facing command-line tool, responsible for:
    • local chart development
    • repository management
    • interacting with the Tiller server
    • sending charts to be installed
    • querying release information
    • requesting upgrades or uninstalls of existing releases
  2. The Tiller server is a server deployed inside the Kubernetes cluster; it interacts with the Helm client and the Kubernetes API server, and is responsible for:
    • listening for requests from the Helm client
    • combining a chart and its configuration to build a release
    • installing charts into the Kubernetes cluster and tracking the resulting releases
    • upgrading and uninstalling charts by interacting with Kubernetes

Put simply, the client manages charts, and the server manages releases.

Implementation

  1. Helm client
    • The Helm client is written in Go and talks to the Tiller server over gRPC.
  2. Tiller server
    • The Tiller server is also written in Go. It exposes a gRPC server for the Helm client and uses the Kubernetes client library, currently REST+JSON, to communicate with Kubernetes.
    • Tiller has no database of its own; it currently stores its state in Kubernetes ConfigMaps.

Note: use the YAML format for configuration files wherever possible.

Installation

If your situation differs from mine, read the official quick start guide, which covers the core installation flow and the various scenarios.

Helm release downloads

Prerequisites

  1. A Kubernetes cluster
  2. Familiarity with the Kubernetes context security mechanism
  3. The Helm tarball, downloaded with wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz

Configuring a ServiceAccount and RBAC rules

My cluster uses RBAC (Role-Based Access Control) authorization, so a ServiceAccount and the matching rules must be configured before installing Helm. See the official Role-based Access Control documentation for reference.

Granting Helm cluster-wide permissions

The RBAC manifest (rbac-config.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

cluster-admin is a ClusterRole created by Kubernetes by default, so it does not need to be defined.

Install Helm:

$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller

Output:

$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

This cluster-wide approach is recommended for experimental environments, for example before installing system components such as ingress-nginx.

Installing Helm in one namespace to manage another

Here Helm is installed into the helm-system namespace, and Tiller is allowed to deploy applications into the kube-public namespace.

Create the Tiller namespace and ServiceAccount

Create the helm-system namespace with the command kubectl create namespace helm-system.

Define the ServiceAccount:

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: helm-system

Role and permissions for the namespace Tiller manages

Create a Role with full permissions on the kube-public namespace, then bind Tiller's ServiceAccount to it so that Tiller can manage all resources in kube-public:

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: kube-public
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: kube-public
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io

Managing release information inside Tiller

Helm stores release information in ConfigMaps in the namespace where Tiller is installed, here helm-system, so Tiller must be allowed to operate on ConfigMaps in helm-system. Create a Role tiller-manager in helm-system and bind it to the tiller ServiceAccount:
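For context, each release revision that Tiller records is itself a ConfigMap in Tiller's own namespace, shaped roughly like the sketch below. This is a hedged illustration based on a typical Helm 2.x cluster; the exact name pattern, labels, and payload encoding may vary between versions.

```yaml
# Hypothetical example of a release-tracking ConfigMap written by Tiller.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-dev.v1        # pattern: <release-name>.v<revision> (illustrative)
  namespace: helm-system    # the namespace Tiller itself runs in
  labels:
    NAME: mysql-dev
    OWNER: TILLER
    STATUS: DEPLOYED
data:
  release: "<encoded release record>"
```

This is why the Role granting Tiller access to its own namespace only needs the configmaps resource.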

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: helm-system
  name: tiller-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: helm-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Initializing Helm

Install Helm with the command `helm init --service-account tiller --tiller-namespace helm-system`.

helm init flags used here:

  • --service-account: the ServiceAccount Tiller runs as; needed on clusters with Kubernetes RBAC enabled
  • --tiller-namespace: install Tiller into the given namespace
  • --tiller-image: use a specific Tiller image
  • --kube-context: install Tiller into a specific Kubernetes cluster

The first run failed:

[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: EOF

This is caused by Google endpoints being blocked. Edit the hosts file to map storage.googleapis.com to a reachable IP. An up-to-date hosts list for Google services is maintained in the googlehosts/hosts project on GitHub, in the file hosts-files/hosts.

Running the init command again succeeds:

[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Checking the Tiller Pod afterwards shows it stuck in ImagePullBackOff:

[root@kuber24 resources]# kubectl get pods --all-namespaces|grep tiller
helm-system   tiller-deploy-cdcd5dcb5-fqm57          0/1     ImagePullBackOff   0          13m

kubectl describe pod tiller-deploy-cdcd5dcb5-fqm57 -n helm-system shows that the Pod depends on the image gcr.io/kubernetes-helm/tiller:v2.11.0.

Search Docker Hub for a mirrored copy of that image:

[root@kuber24 ~]# docker search tiller:v2.11.0
INDEX       NAME                          DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/jay1991115/tiller   gcr.io/kubernetes-helm/tiller:v2.11.0           1                    [OK]
docker.io   docker.io/luyx30/tiller       tiller:v2.11.0                                  1                    [OK]
docker.io   docker.io/1017746640/tiller   FROM gcr.io/kubernetes-helm/tiller:v2.11.0      0                    [OK]
docker.io   docker.io/724399396/tiller    gcr.io/kubernetes-helm/tiller:v2.11.0-rc.2...   0                    [OK]
docker.io   docker.io/fengzos/tiller      gcr.io/kubernetes-helm/tiller:v2.11.0           0                    [OK]
docker.io   docker.io/imwower/tiller      tiller from gcr.io/kubernetes-helm/tiller:...   0                    [OK]
docker.io   docker.io/xiaotech/tiller     FROM gcr.io/kubernetes-helm/tiller:v2.11.0      0                    [OK]
docker.io   docker.io/yumingc/tiller      tiller:v2.11.0                                  0                    [OK]
docker.io   docker.io/zhangc476/tiller    gcr.io/kubernetes-helm/tiller/kubernetes-h...   0                    [OK]

Pull one of these mirrors and re-tag it as gcr.io/kubernetes-helm/tiller:v2.11.0 with docker tag (for other Google images, the mirrorgooglecontainers account on hub.docker.com works the same way). The image must be present on every node.

Installation problems

Image problems

If an image cannot be pulled, use a copy that someone has mirrored to Docker Hub; find one with docker search $NAME:$VERSION.

helm init cannot reach the chart repo

Work around the block with hosts-file entries, as described above.

Chart download problems

Symptom:

[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)

Running helm repo update did not solve the problem, as shown below:

[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
[root@kuber24 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)

Possible causes:

  1. There is no chart named nginx: check with helm search nginx.
  2. A network problem prevents the download; in that case helm reports it after a timeout.

Usage

Adding common repos

Add the gitlab, aliyun, and official incubator chart repositories:

helm repo add gitlab https://charts.gitlab.io/
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/

Everyday commands

In this section, $NAME stands for the Helm repo/chart_name.

  1. Search for charts: helm search $NAME
  2. List releases: helm ls [--tiller-namespace $TILLER_NAMESPACE]
  3. Inspect a chart: helm inspect $NAME
  4. List a chart's configurable values: helm inspect values $NAME
  5. Deploy a chart: helm install $NAME [--tiller-namespace $TILLER_NAMESPACE] [--namespace $CHART_DEPLOY_NAMESPACE]
  6. Delete a release: helm delete $RELEASE_NAME [--purge] [--tiller-namespace $TILLER_NAMESPACE]
  7. Upgrade: helm upgrade --set $PARAM_NAME=$PARAM_VALUE $RELEASE_NAME $NAME [--tiller-namespace $TILLER_NAMESPACE]
  8. Roll back: helm rollback $RELEASE_NAME $REVISION [--tiller-namespace $TILLER_NAMESPACE]

If --purge is omitted when deleting a release, only the deployed resources are removed; the release metadata is kept, and a new release cannot reuse the same name.

Deploying a release

To deploy MySQL, first look up the chart's parameters and set the relevant ones.

List the configurable values:

[root@kuber24 charts]# helm inspect values aliyun/mysql
## mysql image version
## ref: https://hub.docker.com/r/library/mysql/tags/
##
image: "mysql"
imageTag: "5.7.14"

## Specify password for root user
##
## Default: random 10 character string
# mysqlRootPassword: testing

## Create a database user
##
# mysqlUser:
# mysqlPassword:

## Allow unauthenticated access, uncomment to enable
##
# mysqlAllowEmptyPassword: true

## Create a database
##
# mysqlDatabase:

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
##
imagePullPolicy: IfNotPresent

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m

# Custom mysql configuration files used to override default mysql settings
configurationFiles:
#  mysql.cnf: |-
#    [mysqld]
#    skip-name-resolve


## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  ## Specify a service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: ClusterIP
  port: 3306
  # nodePort: 32000

For example, to set the MySQL root password, pass the option directly with --set: --set mysqlRootPassword=hgfgood.
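Equivalently, several overrides can be collected in a values file and passed with -f instead of repeated --set flags. This is a sketch: the filename my-values.yaml is arbitrary, and the keys come from the helm inspect values output above.

```yaml
# my-values.yaml (hypothetical filename)
# Pass with: helm install -f my-values.yaml aliyun/mysql
mysqlRootPassword: hgfgood
persistence:
  enabled: true
  size: 8Gi
```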

The persistence section of the values above shows that MySQL requires persistent storage, so the cluster needs a PersistentVolume (PV).

Create the PV:

[root@kuber24 resources]# cat local-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: kube-public
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/k8s

The complete release command is then: helm install --name mysql-dev --set mysqlRootPassword=hgfgood aliyun/mysql --tiller-namespace helm-system --namespace kube-public

List the deployed releases:

[root@kuber24 charts]# helm ls --tiller-namespace=helm-system
NAME     	REVISION	UPDATED                 	STATUS  	CHART      	APP VERSION	NAMESPACE
mysql-dev	1       	Fri Oct 26 10:35:55 2018	DEPLOYED	mysql-0.3.5	           	kube-public

When everything is healthy, the dashboard looks like this:

(figure: a healthy Helm deployment)

This MySQL chart needs a busybox image. Occasionally the failure shown below occurs because Docker pulls from the blocked upstream Docker Hub by default; pre-pull the busybox image on the nodes first.

(figure: deploying the mysql chart fails because the busybox image cannot be pulled)

Upgrade and rollback

The release above was installed with root password hgfgood. Here it is updated to hgf and then rolled back to the original hgfgood.

Read the password set at install time:

[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood
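The base64 --decode step in the pipeline above is what turns the stored value back into plaintext; Kubernetes keeps all Secret values base64-encoded. The round trip can be reproduced locally:

```shell
# Encode the password the way Kubernetes stores it in the Secret...
echo -n 'hgfgood' | base64              # → aGdmZ29vZA==

# ...and decode it the way the kubectl pipeline above does.
echo -n 'aGdmZ29vZA==' | base64 --decode && echo   # → hgfgood
```

The -n flag matters: without it, echo appends a newline that ends up inside the encoded value.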

Update the root password: helm upgrade --set mysqlRootPassword=hgf mysql-dev mysql --tiller-namespace helm-system (mysql here is the chart directory fetched locally earlier).

Query the root password again after the upgrade:

[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgf

Check the release information:

[root@kuber24 charts]# helm ls --tiller-namespace helm-system
NAME     	REVISION	UPDATED                 	STATUS  	CHART      	APP VERSION	NAMESPACE
mysql-dev	2       	Fri Oct 26 11:26:48 2018	DEPLOYED	mysql-0.3.5	           	kube-public

The REVISION column shows that mysql-dev now has two revisions; helm history mysql-dev --tiller-namespace helm-system would list them all.

Roll back to revision 1:

[root@kuber24 charts]# helm rollback mysql-dev 1 --tiller-namespace helm-system
Rollback was a success! Happy Helming!
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood

The output above confirms that the release has been rolled back.

Common problems

  1. Error: could not find tiller: whenever the Helm client needs to talk to Tiller, the Tiller namespace must be given with --tiller-namespace helm-system; the flag defaults to kube-system.
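To avoid passing the flag on every command, the Helm 2 client also reads the Tiller namespace from the TILLER_NAMESPACE environment variable:

```shell
# Export once per shell session; helm then behaves as if
# --tiller-namespace helm-system had been passed on each command.
export TILLER_NAMESPACE=helm-system
echo "$TILLER_NAMESPACE"   # → helm-system
```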

Chart downloads failing inside mainland China

Chart downloads fail because of network blocking, for example:

[root@kuber24 ~]# helm install stable/mysql --tiller-namespace helm-system --namespace kube-public  --debug
[debug] Created tunnel using local port: '32774'

[debug] SERVER: "127.0.0.1:32774"

[debug] Original chart version: ""
Error: Get https://kubernetes-charts.storage.googleapis.com/mysql-0.10.2.tgz: read tcp 10.20.13.24:56594->216.58.221.240:443: read: connection reset by peer
The workaround:

  1. Change into the directory where local charts are kept
  2. Fetch the chart from the aliyun repo instead

For example, installing MySQL:

helm fetch aliyun/mysql --untar
[root@kuber24 charts]# ls
mysql
[root@kuber24 charts]# ls mysql/
Chart.yaml  README.md  templates  values.yaml

Then run helm install again against the local chart directory:

helm install mysql --tiller-namespace helm-system --namespace kube-public

The --debug flag turns on debug output:

[root@kuber24 charts]# helm install mysql --tiller-namespace helm-system --namespace kube-public --debug
[debug] Created tunnel using local port: '41905'

[debug] SERVER: "127.0.0.1:41905"

[debug] Original chart version: ""
[debug] CHART PATH: /root/Downloads/charts/mysql

NAME:   kissable-bunny
REVISION: 1
RELEASED: Thu Oct 25 20:20:23 2018
CHART: mysql-0.3.5
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
configurationFiles: null
image: mysql
imagePullPolicy: IfNotPresent
imageTag: 5.7.14
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
persistence:
  accessMode: ReadWriteOnce
  enabled: true
  size: 8Gi
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
resources:
  requests:
    cpu: 100m
    memory: 256Mi
service:
  port: 3306
  type: ClusterIP

HOOKS:
MANIFEST:

---
# Source: mysql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
type: Opaque
data:

  mysql-root-password: "TzU5U2tScHR0Sg=="


  mysql-password: "RGRXU3Ztb3hQNw=="
---
# Source: mysql/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: mysql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
    targetPort: mysql
  selector:
    app: kissable-bunny-mysql
---
# Source: mysql/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  template:
    metadata:
      labels:
        app: kissable-bunny-mysql
    spec:
      initContainers:
      - name: "remove-lost-found"
        image: "busybox:1.25.0"
        imagePullPolicy: "IfNotPresent"
        command:  ["rm", "-fr", "/var/lib/mysql/lost+found"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      containers:
      - name: kissable-bunny-mysql
        image: "mysql:5.7.14"
        imagePullPolicy: "IfNotPresent"
        resources:
          requests:
            cpu: 100m
            memory: 256Mi

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-root-password
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-password
        - name: MYSQL_USER
          value: ""
        - name: MYSQL_DATABASE
          value: ""
        ports:
        - name: mysql
          containerPort: 3306
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: kissable-bunny-mysql
LAST DEPLOYED: Thu Oct 25 20:20:23 2018
NAMESPACE: kube-public
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
kissable-bunny-mysql-c7df69d65-lmjzn  0/1    Pending  0         0s

==> v1/Secret

NAME                  AGE
kissable-bunny-mysql  1s

==> v1/PersistentVolumeClaim
kissable-bunny-mysql  1s

==> v1/Service
kissable-bunny-mysql  1s

==> v1beta1/Deployment
kissable-bunny-mysql  1s


NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
kissable-bunny-mysql.kube-public.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace kube-public kissable-bunny-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h kissable-bunny-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace kube-public -l "app=kissable-bunny-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

Packaging a chart

  • [ ] TODO: a detailed packaging walkthrough.

# create a new chart
helm create hello-chart

# validate the chart
helm lint hello-chart

# package the chart into a tgz archive
helm package hello-chart

References

  1. Helm User Guide

Finally

Thanks for reading. If you have any questions, please leave a comment.

You are welcome to visit my GitHub for more of my notes on Kubernetes.

Reposting is welcome; please credit the original source. Thanks!
