ONNX Runtime: Building on Ubuntu 16.04
1. Introduction
What is ONNX Runtime?
ONNX Runtime is a high-performance inference engine for machine-learning models in the ONNX format, available on Linux, Windows, and Mac.
Why use ONNX Runtime?
Because a trained model has to be put to use! After all the effort of collecting data and training a model, wouldn't it be a bit of a waste if all it ever did was place in a benchmark? Deploying it in a real application is far more rewarding, though it raises the bar: the model needs stronger generalization and a smaller parameter count while still holding its accuracy. When first handed such a task it can feel like "mission impossible", but with a good team lead it gets done step by step. Anyway, I digress…
2. Building ONNX Runtime on Ubuntu 16.04
a) Dependencies
First, the dependencies. Because I built some libraries from source to get matching versions, not everything below was installed via apt-get:
PACKAGE_LIST="autotools-dev \
automake \
build-essential \
git apt-transport-https apt-utils \
ca-certificates \
pkg-config \
wget \
zlib1g \
zlib1g-dev \
libssl-dev \
curl libcurl4-openssl-dev \
autoconf \
sudo \
gfortran \
python3-dev \
language-pack-en \
libopenblas-dev \
liblttng-ust0 \
libcurl3 \
libssl1.0.0 \
libkrb5-3 \
libicu55 \
libtinfo-dev \
libtool \
aria2 \
bzip2 \
unzip \
zip \
rsync libunwind8 libpng16-dev libexpat1-dev \
python3-setuptools python3-numpy python3-wheel python python3-pip python3-pytest \
libprotobuf-dev libprotobuf9v5 protobuf-compiler \
openjdk-8-jdk"
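With PACKAGE_LIST defined as above, the apt-managed subset can be pulled in in one go. This is only a sketch; as noted, some libraries were instead built from source for version matching:

```shell
# Install the apt-managed dependencies in one shot.
# Assumes PACKAGE_LIST is the shell variable defined above.
sudo apt-get update
sudo apt-get install -y --no-install-recommends $PACKAGE_LIST
```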
Time to underline a key point: if the following package is missing, the build will throw errors when compiling the tests. I have been bitten by this before!
language-pack-en
After installing it, run the following commands:
locale-gen en_US.UTF-8
update-locale LANG=en_US.UTF-8
b) git clone the corresponding branch
git clone --recursive https://github.com/Microsoft/onnxruntime -b your_branch
cd onnxruntime
git submodule update --init --recursive
2000 years later!
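For a reproducible build it can be worth pinning a release branch instead of a development branch; ONNX Runtime release branches follow a `rel-*` naming scheme. The branch name below is only an illustration, so check `git branch -r` for what actually exists:

```shell
# Example: clone a release branch for a reproducible build
# (the branch name rel-1.2.0 is illustrative only)
git clone --recursive -b rel-1.2.0 https://github.com/Microsoft/onnxruntime
cd onnxruntime
git submodule update --init --recursive
```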
c) Build
The build needs to be configured for your own environment.
The full list of configurable parameters, taken from the build log, is shown below:
2020-04-24 03:26:15,615 Build [DEBUG] - Running subprocess in '/home/felaim/Documents/code/onnxruntime/build/Linux/Release'
['/usr/local/bin/cmake', '/home/felaim/Documents/code/onnxruntime/cmake', '-Donnxruntime_RUN_ONNX_TESTS=OFF', '-Donnxruntime_GENERATE_TEST_REPORTS=ON', '-Donnxruntime_DEV_MODE=ON', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-Donnxruntime_USE_CUDA=ON', '-Donnxruntime_USE_NSYNC=OFF', '-Donnxruntime_CUDNN_HOME=/usr/lib/x86_64-linux-gnu/', '-Donnxruntime_USE_AUTOML=OFF', '-Donnxruntime_CUDA_HOME=/usr/local/cuda', '-Donnxruntime_USE_JEMALLOC=OFF', '-Donnxruntime_USE_MIMALLOC=OFF', '-Donnxruntime_ENABLE_PYTHON=OFF', '-Donnxruntime_BUILD_CSHARP=OFF', '-Donnxruntime_BUILD_JAVA=OFF', '-Donnxruntime_BUILD_SHARED_LIB=ON', '-Donnxruntime_USE_EIGEN_FOR_BLAS=ON', '-Donnxruntime_USE_OPENBLAS=OFF', '-Donnxruntime_USE_DNNL=OFF', '-Donnxruntime_USE_MKLML=OFF', '-Donnxruntime_USE_GEMMLOWP=OFF', '-Donnxruntime_USE_NGRAPH=OFF', '-Donnxruntime_USE_OPENVINO=OFF', '-Donnxruntime_USE_OPENVINO_MYRIAD=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_GPU_FP16=OFF', '-Donnxruntime_USE_OPENVINO_CPU_FP32=OFF', '-Donnxruntime_USE_OPENVINO_VAD_M=OFF', '-Donnxruntime_USE_OPENVINO_VAD_F=OFF', '-Donnxruntime_USE_NNAPI=OFF', '-Donnxruntime_USE_OPENMP=ON', '-Donnxruntime_USE_TVM=OFF', '-Donnxruntime_USE_LLVM=OFF', '-Donnxruntime_ENABLE_MICROSOFT_INTERNAL=OFF', '-Donnxruntime_USE_BRAINSLICE=OFF', '-Donnxruntime_USE_NUPHAR=OFF', '-Donnxruntime_USE_EIGEN_THREADPOOL=OFF', '-Donnxruntime_USE_TENSORRT=ON', '-Donnxruntime_TENSORRT_HOME=path to tensorrt', '-Donnxruntime_CROSS_COMPILING=OFF', '-Donnxruntime_BUILD_SERVER=OFF', '-Donnxruntime_BUILD_x86=OFF', '-Donnxruntime_USE_FULL_PROTOBUF=ON', '-Donnxruntime_DISABLE_CONTRIB_OPS=OFF', '-Donnxruntime_MSVC_STATIC_RUNTIME=OFF', '-Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=OFF', '-Donnxruntime_USE_DML=OFF', '-Donnxruntime_USE_TELEMETRY=OFF', '-DCUDA_CUDA_LIBRARY=/usr/local/cuda/lib64/stubs', '-Donnxruntime_PYBIND_EXPORT_OPSCHEMA=OFF', '-DCMAKE_BUILD_TYPE=Release']
Based on my own needs, I settled on the following:
./build.sh --build_shared_lib --config Release --use_cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home your_path_to_tensorrt
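After the build finishes, the artifacts land under build/Linux/Release by default. A quick sanity check, assuming that default layout and that the CUDA/TensorRT providers were linked into the main library, might look like this:

```shell
# Verify the shared library was produced and links against the GPU stack
cd build/Linux/Release
ls -lh libonnxruntime.so*
ldd libonnxruntime.so | grep -Ei 'cuda|cudnn|nvinfer' || echo "no GPU libs linked"
```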
After fighting through network problems, version mismatches, and a series of other bizarre issues, I finally got the 100% tests passed!
Resources on this topic are really scarce online, and I fell into pit after pit… Special thanks to 郭博 for the guidance; working hard to follow it!
That's enough for now, back to coding! Blind spots everywhere.