Simplifying Deep Learning Training and GPU Monitoring with GPU-Operator and KubeSphere

This article walks through GPU-Operator from principle to practice: an introduction to the concept, installation and deployment, running a deep-learning training workload, and wiring GPU metrics into a custom monitoring dashboard in KubeSphere.

Introduction to GPU-Operator

As is well known, Kubernetes provides access to special hardware resources such as NVIDIA GPUs, NICs, InfiniBand adapters, and other devices through its device plugin framework. However, configuring and managing nodes with these resources requires setting up several software components, such as drivers, container runtimes, and other libraries, which is difficult and error-prone.

NVIDIA GPU Operator, open-sourced by NVIDIA, uses the Kubernetes Operator pattern to automate the integration and management of the NVIDIA device components required for GPUs, which neatly solves the integration pain points described above. These components include the NVIDIA driver (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Runtime, automatic node labelling, and DCGM-based monitoring.

NVIDIA GPU Operator not only integrates devices and components into one package, it also makes managing GPU nodes as easy as managing CPU nodes, with no need for a special operating system image on GPU nodes. Notably, it containerizes each GPU component to provide GPU capability, which makes it well suited to quickly scaling out and managing large numbers of GPU nodes. Of course, for scenarios where a dedicated OS with the GPU components already baked in has been built, it is not such a good fit.

GPU-Operator Architecture

As mentioned above, NVIDIA GPU Operator makes managing GPU nodes as easy as managing CPU nodes. How does it achieve this?

Let's take a look at the runtime architecture diagram of GPU-Operator:

As the diagram shows, GPU-Operator works through the NVIDIA container runtime: it wraps runC and injects a script named nvidia-container-toolkit into runC's prestart hook. That script calls the libnvidia-container CLI with an appropriate set of flags, so that the container has GPU capability once it starts.
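
As a rough illustration (the exact hook path and arguments vary with the toolkit version), the injected prestart hook ends up in the container's OCI config.json looking something like this:

{
  "hooks": {
    "prestart": [
      {
        "path": "/usr/bin/nvidia-container-toolkit",
        "args": ["nvidia-container-toolkit", "prestart"]
      }
    ]
  }
}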

GPU-Operator Installation Guide

Prerequisites

Before installing GPU Operator, prepare the environment as follows:

  • Nodes do not need to have the NVIDIA components (driver, container runtime, device plugin) pre-installed;
  • Every node must have Docker, cri-o, or containerd configured. For Docker, you can refer to the steps here;
  • On Ubuntu 18.04 LTS with an HWE kernel (e.g. kernel 5.x), the nouveau driver must be blacklisted and the initramfs updated:
$ sudo vim /etc/modprobe.d/blacklist.conf # append the blacklist entries below
blacklist nouveau
options nouveau modeset=0
$ sudo update-initramfs -u
$ reboot
$ lsmod | grep nouveau # verify that nouveau is disabled (no output expected)
$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c  # the processor used in this article is Broadwell
16 Intel Core Processor (Broadwell)
  • Node Feature Discovery (NFD) needs to be configured on every node. It is installed by default; if NFD is already deployed, set the Helm chart variable nfd.enabled to false before installing;
  • On Kubernetes 1.13 and 1.14, the KubeletPodResources feature gate must be enabled (see the sketch right below this list).
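
For example, on a kubeadm-managed node the feature gate can be enabled roughly like this (file locations and existing flags vary by environment):

# /var/lib/kubelet/kubeadm-flags.env -- append the feature gate to the existing kubelet flags
KUBELET_KUBEADM_ARGS="--feature-gates=KubeletPodResources=true"
$ sudo systemctl restart kubelet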

Supported Linux distributions

OS Name / Version       Identifier       amd64 / x86_64   ppc64le   arm64 / aarch64
Amazon Linux 1          amzn1            X
Amazon Linux 2          amzn2            X
Amazon Linux 2017.09    amzn2017.09      X
Amazon Linux 2018.03    amzn2018.03      X
Open Suse Leap 15.0     sles15.0         X
Open Suse Leap 15.1     sles15.1         X
Debian Linux 9          debian9          X
Debian Linux 10         debian10         X
CentOS 7                centos7          X                X
CentOS 8                centos8          X                X         X
RHEL 7.4                rhel7.4          X                X
RHEL 7.5                rhel7.5          X                X
RHEL 7.6                rhel7.6          X                X
RHEL 7.7                rhel7.7          X                X
RHEL 8.0                rhel8.0          X                X         X
RHEL 8.1                rhel8.1          X                X         X
RHEL 8.2                rhel8.2          X                X         X
Ubuntu 16.04            ubuntu16.04      X                X
Ubuntu 18.04            ubuntu18.04      X                X         X
Ubuntu 20.04            ubuntu20.04      X                X         X

Supported container runtimes

OS Name / Version       amd64 / x86_64   ppc64le   arm64 / aarch64
Docker 18.09            X                X         X
Docker 19.03            X                X         X
RHEL/CentOS 8 podman    X
CentOS 8 Docker         X
RHEL/CentOS 7 Docker    X

Install the Docker environment

Refer to the official Docker documentation.
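
For example, on Ubuntu the Docker convenience script is a quick way to get a test environment running (for production setups follow the distribution-specific steps in the official documentation):

$ curl -fsSL https://get.docker.com | sh
$ sudo systemctl enable --now docker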

Install NVIDIA Docker

Configure the stable repository and the GPG key:

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

Update the package index, install nvidia-docker2, and add the runtime configuration:

$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
-----
What would you like to do about it ?  Your options are:
Y or I  : install the package maintainer's version
N or O  : keep your currently-installed version
D     : show the differences between the versions
Z     : start a shell to examine the situation
-----
# On a fresh installation, choose N at the prompt above
# Choosing Y would overwrite some of your existing default configuration
# After choosing N, add the following configuration to /etc/docker/daemon.json
{
  "runtimes": {
      "nvidia": {
          "path": "/usr/bin/nvidia-container-runtime",
          "runtimeArgs": []
      }
  }
}

Restart Docker:

$ sudo systemctl restart docker

Install Helm

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
   && chmod 700 get_helm.sh \
   && ./get_helm.sh

Add the Helm repository

$ helm repo add nvidia https://nvidia.github.io/gpu-operator \
   && helm repo update

Install the NVIDIA GPU Operator

docker as runtime

$ kubectl create ns gpu-operator-resources
$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources --wait

If you need to pin a specific driver version, you can do so like this:

$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
--set driver.version="450.80.02"

crio as runtime

helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set operator.defaultRuntime=crio

containerd as runtime

helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set operator.defaultRuntime=containerd
   
Furthermore, when setting containerd as the defaultRuntime the following options are also available:
toolkit:
  env:
  - name: CONTAINERD_CONFIG
    value: /etc/containerd/config.toml
  - name: CONTAINERD_SOCKET
    value: /run/containerd/containerd.sock
  - name: CONTAINERD_RUNTIME_CLASS
    value: nvidia
  - name: CONTAINERD_SET_AS_DEFAULT
    value: "true"

Because the images involved are fairly large, the first installation may time out; check whether your images are still being pulled. An offline (air-gapped) installation can avoid this kind of problem, see the offline installation link.

Install with a values.yaml

$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources -f values.yaml
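
As a hedged sketch, such a values.yaml might contain the following; the keys mirror the --set flags shown above (driver.version, operator.defaultRuntime, toolkit.env) and should be adjusted to your cluster:

operator:
  defaultRuntime: containerd
driver:
  version: "450.80.02"
toolkit:
  env:
  - name: CONTAINERD_CONFIG
    value: /etc/containerd/config.toml
  - name: CONTAINERD_SOCKET
    value: /run/containerd/containerd.sock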

Consider an offline (air-gapped) installation

Deploying Applications

Check the status of the deployed operator services

Check the pod status

$ kubectl get pods -n gpu-operator-resources
NAME                                                          READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-4gk78                                   1/1     Running     0          35s
gpu-operator-858fc55fdb-jv488                                 1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-master-7f9ccc4c7b-2sg6r   1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-worker-cbkhn              1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-worker-m8jcm              1/1     Running     0          2m52s
nvidia-container-toolkit-daemonset-tfwqt                      1/1     Running     0          2m42s
nvidia-dcgm-exporter-mqns5                                    1/1     Running     0          38s
nvidia-device-plugin-daemonset-7npbs                          1/1     Running     0          53s
nvidia-device-plugin-validation                               0/1     Completed   0          49s
nvidia-driver-daemonset-hgv6s                                 1/1     Running     0          2m47s

Check that the node's GPU resource is allocatable

$ kubectl describe node worker-gpu-001
---
Allocatable:
  cpu:                15600m
  ephemeral-storage:  82435528Ki
  hugepages-2Mi:      0
  memory:             63649242267
  nvidia.com/gpu:     1  #check here
  pods:               110
---

Deploy the two examples from the official documentation

Example 1

$ cat <<EOF > cuda-load-generator.yaml
apiVersion: v1
kind: Pod
metadata:
   name: dcgmproftester
spec:
   restartPolicy: OnFailure
   containers:
   - name: dcgmproftester11
     image: nvidia/samples:dcgmproftester-2.0.10-cuda11.0-ubuntu18.04
     args: ["--no-dcgm-validation", "-t 1004", "-d 120"]
     resources:
        limits:
           nvidia.com/gpu: 1
     securityContext:
        capabilities:
           add: ["SYS_ADMIN"]
EOF

Example 2

$ curl -LO https://nvidia.github.io/gpu-operator/notebook-example.yml
$ cat notebook-example.yml
apiVersion: v1
kind: Service
metadata:
  name: tf-notebook
  labels:
    app: tf-notebook
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8888
    nodePort: 30001
  selector:
    app: tf-notebook
---
apiVersion: v1
kind: Pod
metadata:
  name: tf-notebook
  labels:
    app: tf-notebook
spec:
  securityContext:
    fsGroup: 0
  containers:
  - name: tf-notebook
    image: tensorflow/tensorflow:latest-gpu-jupyter
    resources:
      limits:
        nvidia.com/gpu: 1
    ports:
    - containerPort: 8888

Running a Deep Learning Training Task from the Jupyter Notebook Application

Deploy the applications

$ kubectl apply -f cuda-load-generator.yaml 
pod/dcgmproftester created
$ kubectl apply -f notebook-example.yml       
service/tf-notebook created
pod/tf-notebook created

Check that the GPU is now allocated:

$ kubectl describe node worker-gpu-001
---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1087m (6%)   1680m (10%)
  memory             1440Mi (2%)  1510Mi (2%)
  ephemeral-storage  0 (0%)       0 (0%)
  nvidia.com/gpu     1            1 #check this
Events:              <none>

When GPU tasks are submitted to the platform, the GPU resource moves from allocatable to allocated. The tasks run in the order they were submitted: the second task starts only after the first one has finished:

$ kubectl get pods --watch
NAME             READY   STATUS    RESTARTS   AGE
dcgmproftester   1/1     Running   0          76s
tf-notebook      0/1     Pending   0          58s
------
NAME             READY   STATUS      RESTARTS   AGE
dcgmproftester   0/1     Completed   0          4m22s
tf-notebook      1/1     Running     0          4m4s

Get the application's port information:

$ kubectl get svc # get the nodeport of the svc, 30001
gpu-operator-1611672791-node-feature-discovery   ClusterIP   10.233.10.222   <none>        8080/TCP       12h
kubernetes                                       ClusterIP   10.233.0.1      <none>        443/TCP        12h
tf-notebook                                      NodePort    10.233.53.116   <none>        80:30001/TCP   7m52s

Check the logs to obtain the login token:

$ kubectl logs tf-notebook 
[I 21:50:23.188 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 21:50:23.390 NotebookApp] Serving notebooks from local directory: /tf
[I 21:50:23.391 NotebookApp] The Jupyter Notebook is running at:
[I 21:50:23.391 NotebookApp] http://tf-notebook:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
[I 21:50:23.391 NotebookApp]  or http://127.0.0.1:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
[I 21:50:23.391 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 21:50:23.394 NotebookApp]
   To access the notebook, open this file in a browser:
      file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
   Or copy and paste one of these URLs:
      http://tf-notebook:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
   or http://127.0.0.1:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9

Run a deep learning task

Once inside the Jupyter Notebook environment, open a terminal and run a deep learning task:

In the terminal, pull the TensorFlow test code and run it, as sketched below:
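
The exact workload doesn't matter; as an illustrative sketch (assuming the tensorflow/tensorflow:latest-gpu-jupyter image, i.e. TensorFlow 2.x with Keras), the following first checks GPU visibility and then trains a small MNIST model:

$ python - <<'PY'
import tensorflow as tf
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
# A tiny MNIST classifier, just enough to keep the GPU busy while watching the metrics
(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=256)
PY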

Meanwhile, open another terminal and run nvidia-smi to watch GPU utilization:

Monitoring GPUs with KubeSphere's Custom Monitoring

Deploy a ServiceMonitor

GPU-Operator already provides the nvidia-dcgm-exporter for us. To obtain GPU monitoring data, we only need to add it to the set of targets Prometheus scrapes, i.e. create a ServiceMonitor for it:

$ kubectl get pods -n gpu-operator-resources
NAME                                       READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-ff4ng                1/1     Running     2          15h
nvidia-container-toolkit-daemonset-2vxjz   1/1     Running     0          15h
nvidia-dcgm-exporter-pqwfv                 1/1     Running     0          5h27m #here
nvidia-device-plugin-daemonset-42n74       1/1     Running     0          5h27m
nvidia-device-plugin-validation            0/1     Completed   0          5h27m
nvidia-driver-daemonset-dvd9r              1/1     Running     3          15h

You can spin up a busybox pod to inspect the metrics the exporter exposes:
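
For instance, the throwaway pod named busybox-sleep used below can be created like this (an illustrative command; any image with wget would do):

$ kubectl run busybox-sleep --image=busybox --restart=Never -- sleep 3600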

$ kubectl get svc -n gpu-operator-resources
NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
gpu-operator-node-feature-discovery   ClusterIP   10.233.54.111   <none>        8080/TCP   56m
nvidia-dcgm-exporter                  ClusterIP   10.233.53.196   <none>        9400/TCP   54m
$ kubectl exec -it busybox-sleep -- sh
$ wget http://nvidia-dcgm-exporter.gpu-operator-resources:9400/metrics
$ cat metrics
----
DCGM_FI_DEV_SM_CLOCK{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 405
DCGM_FI_DEV_MEM_CLOCK{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 715
DCGM_FI_DEV_GPU_TEMP{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 30
----

Check the Service and Endpoints exposed for nvidia-dcgm-exporter:

$ kubectl describe svc nvidia-dcgm-exporter -n gpu-operator-resources
Name:                     nvidia-dcgm-exporter
Namespace:                gpu-operator-resources
Labels:                   app=nvidia-dcgm-exporter
Annotations:              prometheus.io/scrape: true
Selector:                 app=nvidia-dcgm-exporter
Type:                     NodePort
IP:                       10.233.28.200
Port:                     gpu-metrics  9400/TCP
TargetPort:               9400/TCP
NodePort:                 gpu-metrics  31129/TCP
Endpoints:                10.233.84.54:9400
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Create the ServiceMonitor manifest:

$ cat custom/gpu-servicemonitor.yaml 
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nvidia-dcgm-exporter
  namespace: gpu-operator-resources 
  labels:
     app: nvidia-dcgm-exporter
spec:
  jobLabel: nvidia-gpu
  endpoints:
  - port: gpu-metrics
    interval: 15s
  selector:
    matchLabels:
      app: nvidia-dcgm-exporter
  namespaceSelector:
    matchNames:
    - gpu-operator-resources
$ kubectl apply -f custom/gpu-servicemonitor.yaml

Check that the GPU metrics are being scraped (optional)

After the ServiceMonitor has been applied to the KubeSphere platform, you can expose the prometheus-k8s service as a NodePort and then check in the Prometheus UI whether the relevant metrics are being scraped:
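
For example, assuming the default KubeSphere monitoring namespace kubesphere-monitoring-system (adjust if yours differs), the service can be found and switched to NodePort like this:

$ kubectl -n kubesphere-monitoring-system get svc prometheus-k8s
$ kubectl -n kubesphere-monitoring-system patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'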

Create a Custom GPU Monitoring Dashboard in KubeSphere

KubeSphere 3.0

If the deployed KubeSphere version is KubeSphere 3.0, only a few simple steps are needed to get this observability monitoring in place.

First, log in to the KubeSphere console and create a workspace, for example named ks-monitoring-demo (any name will do);

Next, assign the namespace that holds the ServiceMonitor, gpu-operator-resources, to an existing workspace so that it is brought under monitoring.

Finally, enter the target workspace, find the gpu-operator-resources project among the managed projects, open it, go to the Custom Monitoring page, and add a custom dashboard.

Later versions

In later versions you can also choose to add cluster-level monitoring.

Create the custom dashboard

Download the dashboard and set its namespace:

$ curl -LO https://raw.githubusercontent.com/kubesphere/monitoring-dashboard/master/contrib/gallery/nvidia-gpu-dcgm-exporter-dashboard.yaml
$ cat nvidia-gpu-dcgm-exporter-dashboard.yaml
----
apiVersion: monitoring.kubesphere.io/v1alpha1
kind: Dashboard
metadata:
  name: nvidia-dcgm-exporter-dashboard-rev1
  namespace: gpu-operator-resources  # check here
spec:
-----

You can apply the dashboard directly from the command line (as below) or import it by switching to edit mode in the custom monitoring panel:
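
Applying it from the command line, using the file downloaded above, is simply:

$ kubectl apply -f nvidia-gpu-dcgm-exporter-dashboard.yaml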

After a successful import:

After running the deep learning test task in the Jupyter Notebook created above, you can clearly observe the corresponding GPU metrics changing:

Uninstalling

$ helm list -n gpu-operator-resources
NAME            NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
gpu-operator    gpu-operator-resources  1               2021-02-20 11:50:56.162559286 +0800 CST deployed        gpu-operator-1.5.2      1.5.2     
$ helm uninstall gpu-operator -n gpu-operator-resources

GPU-Operator FAQ

GPUs cannot be used after a reboot

A: On a cluster where gpu-operator and AI applications were running normally, the GPU may become unusable after the GPU host is rebooted. This is most likely because the applications were loaded before the plugin components had come back up. In that case, simply make sure the plugin components are running first, then redeploy the applications.
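
For example, a recovery sequence using the manifests from earlier in this article might look like this:

$ kubectl get pods -n gpu-operator-resources   # wait until every operator pod is Running or Completed
$ kubectl delete -f notebook-example.yml && kubectl apply -f notebook-example.yml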

How does the NVIDIA k8s-device-plugin approach compare with GPU-Operator?

I previously monitored GPUs with a combination of https://github.com/NVIDIA/k8s-device-plugin and https://github.com/NVIDIA/gpu-monitoring-tools. How does that approach compare with the GPU-Operator approach, which is better?

A: Personally I find GPU-Operator simpler and easier to use. Its built-in GPU injection means there is no need to build a dedicated OS image, it supports node feature discovery and is pluggable, and it automates the integration and management of the NVIDIA device components GPUs need, which saves a lot of effort overall.

Is there a detailed tutorial for KubeSphere's custom monitoring?

A: You can refer to the official KubeSphere documentation on using custom monitoring.

References

Official code repository

Official documentation
