Setting up Ubuntu 18.04 with CUDA 10.0, cuDNN 7.4, and TensorFlow 1.13.1

To be honest, I was quite lost when I first came across this setup and spent a long time digging through documentation, but in the end the installation succeeded! Below is the process I followed.

First, before starting, work out which versions of CUDA, cuDNN, and tensorflow-gpu need to be installed.

Step 1:

Check that the GPU driver is working:

~$ nvidia-smi

Output like the following means the driver is installed correctly:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04    Driver Version: 418.40.04    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:03:00.0 Off |                    0 |
| N/A   37C    P8     6W /  75W |      0MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Check how many GPUs the server has:

$ lspci|grep -i nvidia

The result is:

03:00.0 3D controller: NVIDIA Corporation GP104GL [Tesla P4] (rev a1)
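If you only need the number of NVIDIA devices rather than the full listing (a small optional variation on the command above, not from the original steps), you can pipe the output through wc:

$ lspci | grep -i nvidia | wc -l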

Step 2: The GPU driver was already installed on my machine, so the driver installation itself is not covered here. The next step is to check how the CUDA, cuDNN, and tensorflow-gpu versions correspond. The versions must match, otherwise the installation will fail.

Here I chose the combination from the first row of the compatibility table: tensorflow-gpu 1.13.1, CUDA 10.0, and cuDNN 7.4.
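The compatibility table itself appeared as a screenshot in the original post. For reference, the row used here, as I recall it from TensorFlow's tested build configurations (please double-check against the official page), is roughly:

tensorflow_gpu-1.13.1 | Python 2.7, 3.3-3.6 | GCC 4.8 | Bazel 0.19.2 | cuDNN 7.4 | CUDA 10.0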

Step 3: Installing CUDA

下載鏈接:https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal

On the download page, select Linux > x86_64 > Ubuntu > 18.04 > runfile (local), matching the link above, to get CUDA 10.0.

Download both the base installer and the patch; the patch will be needed later.

Once the download finishes, run the installer:

sudo sh cuda_10.0.130_410.48_linux.run

The installer then presents the following prompts; answer them as shown:

Do you accept the previously read EULA?
accept/decline/quit: accept

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 410.48?
(y)es/(n)o/(q)uit: n

Install the CUDA 10.0 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
 [ default is /usr/local/cuda-10.0 ]:

Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y

Install the CUDA 10.0 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
 [ default is /home/wanying ]:

The installer then prints the following:

Installing the CUDA Samples in /home/wanying ...
Copying samples to /home/wanying/NVIDIA_CUDA-10.0_Samples now...
Finished copying samples.

===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-10.0
Samples:  Installed in /home/wanying, but missing recommended libraries

Please make sure that
 -   PATH includes /usr/local/cuda-10.0/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-10.0/lib64, or, add /usr/local/cuda-10.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-10.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-10.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 384.00 is required for CUDA 10.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_45102.log

Next, install the patch downloaded earlier as well:

sudo sh cuda_10.0.130.1_linux.run

Finally, configure the environment variables. First open the bashrc file so the paths can be appended at the end:

vim ~/.bashrc

Then edit the bashrc file and add the following lines (the first line is just a comment and can be omitted):

# cuda PATH
export  PATH=/usr/local/cuda-10.0/bin:$PATH
export  LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
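To make the new variables take effect in the current shell and quickly confirm that the toolkit is on the PATH (an optional sanity check, not part of the original steps):

source ~/.bashrc
nvcc --version # should report: Cuda compilation tools, release 10.0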

With that, the CUDA installation itself is done. Next, run the deviceQuery sample to verify that the GPU is usable. First go into the sample directory:

$ cd NVIDIA_CUDA-10.0_Samples/1_Utilities/deviceQuery

Then build and run the test:

sudo make
./deviceQuery

If the following appears at the end, the test passed (the full output is long; only the last part is shown here). CUDA is installed!

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
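Optionally (this extra check is not in the original steps), the samples also ship a bandwidth test next to deviceQuery; it should likewise finish with Result = PASS:

cd ../bandwidthTest
sudo make
./bandwidthTest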

Step 4: Installing cuDNN

Download the version matching your CUDA from https://developer.nvidia.com/rdp/cudnn-archive. I downloaded cuDNN 7.4.2, choosing "cuDNN Library for Linux".

After downloading cudnn-10.0-linux-x64-v7.4.2.24, extract it; the archive unpacks into a cuda folder. Go into cuda/include and run:

sudo cp cudnn.h /usr/local/cuda/include/ # copy the header file

Then go into cuda/lib64, copy the shared libraries over, and recreate the symlinks (adjust the libcudnn.so version numbers to match your download):

sudo cp lib* /usr/local/cuda/lib64/ # copy the shared libraries
cd /usr/local/cuda/lib64/
sudo rm -rf libcudnn.so libcudnn.so.7 # remove the existing symlinks
sudo ln -s libcudnn.so.7.4.2 libcudnn.so.7 # create the versioned symlink
sudo ln -s libcudnn.so.7 libcudnn.so # create the unversioned symlink
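As a quick optional check that the header ended up in the right place, you can print the version macros; for cuDNN 7.4 they are defined in cudnn.h:

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2 # should show major 7, minor 4, patchlevel 2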

Then test whether cuDNN is installed correctly. First download three more packages from the same cuDNN archive link as before: the runtime library, developer library, and code samples (the .deb files listed below).

After downloading, install them with the following commands (I did not pay attention to the order and ran into an error; I am not sure the order was actually the cause, but installing them in this order is safest):

$ sudo dpkg -i libcudnn7_7.4.2.24-1+cuda10.0_amd64.deb
$ sudo dpkg -i libcudnn7-dev_7.4.2.24-1+cuda10.0_amd64.deb
$ sudo dpkg -i libcudnn7-doc_7.4.2.24-1+cuda10.0_amd64.deb

After installation, copy the mnistCUDNN sample to your home directory, build it, and run it:

cp -r /usr/src/cudnn_samples_v7/mnistCUDNN $HOME
cd $HOME/mnistCUDNN
make clean && make
./mnistCUDNN

The following output indicates success:


Testing half precision (math in single precision)
Loading image data/one_28x28.pgm
Performing forward propagation ...
Testing cudnnGetConvolutionForwardAlgorithm ...
Fastest algorithm is Algo 1
Testing cudnnFindConvolutionForwardAlgorithm ...
^^^^ CUDNN_STATUS_SUCCESS for Algo 0: 0.024576 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 1: 0.032768 time requiring 3464 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 2: 0.045056 time requiring 28800 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 4: 0.100352 time requiring 207360 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 7: 0.133120 time requiring 2057744 memory
Resulting weights from Softmax:
0.0000001 1.0000000 0.0000001 0.0000000 0.0000563 0.0000001 0.0000012 0.0000017 0.0000010 0.0000001
Loading image data/three_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000714 0.0000000 0.0000000 0.0000000 0.0000000
Loading image data/five_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000008 0.0000000 0.0000002 0.0000000 1.0000000 0.0000154 0.0000000 0.0000012 0.0000006

Result of classification: 1 3 5

Test passed!

Step 5: Installing tensorflow-gpu

First, make sure the following Python components are present on the system: python3, pip3, and virtualenv. Check their versions on the command line:

python3 --version
pip3 --version
virtualenv --version

If anything is missing, install it with the following commands (only install what you actually lack):

sudo apt update
sudo apt install python3-dev python3-pip
sudo pip3 install -U virtualenv

Next, set up a Python virtual environment. A virtual environment isolates your project's packages from the system's, which keeps different versions cleanly separated; in particular, when different projects need different package versions it avoids painful version conflicts, so it is strongly recommended. The commands are:

virtualenv --system-site-packages -p python3 ./venv  # create a python3 virtual environment
source ./venv/bin/activate  # activate the virtual environment
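When you are done working in the environment, you can leave it again with the standard virtualenv command (mentioned here only for completeness):

deactivate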

Install TensorFlow

With the virtual environment activated, install the matching version of tensorflow-gpu; in my case that is tensorflow-gpu 1.13.1:

pip install tensorflow-gpu==1.13.1
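To confirm that pip installed the expected version before running any tests (a quick optional check):

pip show tensorflow-gpu # the Version field should read 1.13.1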

Once it is installed, test whether it works with the following command:

python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.rando
m_normal([1000, 1000])))"

The output looks like this:

2019-07-01 18:25:42.769805: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-01 18:25:42.909912: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x182b620 executing computations on platform CUDA. Devices:
2019-07-01 18:25:42.909982: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): Tesla P4, Compute Capability 6.1
2019-07-01 18:25:42.932521: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2399945000 Hz
2019-07-01 18:25:42.937288: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x1eef3d0 executing computations on platform Host. Devices:
2019-07-01 18:25:42.937345: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-07-01 18:25:42.938332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:03:00.0
totalMemory: 7.43GiB freeMemory: 7.32GiB
2019-07-01 18:25:42.938377: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-07-01 18:25:42.940093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-01 18:25:42.940124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-07-01 18:25:42.940138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-07-01 18:25:42.941039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7123 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:03:00.0, compute capability: 6.1)
tf.Tensor(-685.63104, shape=(), dtype=float32)

That completes the tensorflow-gpu setup. Time to go have fun with it!!!
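As one last sanity check (not part of the original write-up), you can also ask TensorFlow 1.x directly whether it can see the GPU:

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())" # should print True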
