Registering a Custom Op with TensorFlow Serving

The goal is to accelerate the LSTM portion of a cloud-side algorithm: compile a .cu file into a .so, then use the LSTM class/functions in that .so in place of tf.LSTMCell for the computation.
The full project is on GitHub and the workflow is described in this blog post. I am also new to CUDA, so comments and discussion are welcome.

Serving TensorFlow models with custom ops

TensorFlow ships with an extensive library of ops and op kernels (implementations) fine-tuned for different hardware types (CPU, GPU, etc.). These ops are automatically linked into the TensorFlow Serving ModelServer binary with no extra work from the user. However, there are two use cases that require the user to explicitly link ops into the ModelServer:

  • You have written your own custom op (e.g., using this guide)
  • You are using an already-implemented op that is not shipped with TensorFlow

Note: since version 2.0, TensorFlow no longer distributes the contrib module; if you are serving a TensorFlow program that uses contrib ops, use this guide to explicitly link those ops into the ModelServer.

Whether or not you implemented the op yourself, in order to serve a model with a custom op you need access to the op's source code. This guide walks you through the steps of using that source to make the custom op available for serving. For guidance on implementing custom ops, see the TensorFlow documentation.

Prerequisite: the custom op has been written and registered as a TensorFlow op.
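
It can help to sanity-check this prerequisite by loading the compiled library in plain TensorFlow before touching Serving at all. A minimal sketch, assuming the shared-object name used later in this guide:

import tensorflow as tf

# Loading the library registers its ops and kernels with the TensorFlow
# runtime; this call fails loudly if the library was not built or
# registered correctly.
lstm_module = tf.load_op_library("./cuda_lstm_forward.so")

# Each registered op is exposed as a generated Python wrapper on the module.
print(dir(lstm_module))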

Copy the op source into the Serving project

Create a folder named cuda_lstm_forward under the tensorflow_serving folder.

Then copy all of the dependencies into that folder: "00_lstm.cu", "00_lstm.so", "cuda_lstm_forward.h", "cuda_lstm_forward.cc", and "cuda_lstm_forward.so".

Build a static library for the op

In the cuda_lstm_forward folder you would normally have a target that produces a shared object file (.so), which you can load into Python to create and train models. TensorFlow Serving, however, statically links ops at build time and requires a .a file, so you need a build rule that produces this file in tensorflow_serving/cuda_lstm_forward/BUILD:

package(
    default_visibility = [
        "//tensorflow_serving:internal",
    ],
    features = ["-layering_check"],
)

# Despite the .so name, cc_library produces a static archive that is linked
# into the ModelServer binary; alwayslink = 1 keeps the op-registration
# symbols even though nothing references them directly.
cc_library(
    name = "cuda_lstm_forward.so",
    visibility = ["//visibility:public"],
    srcs = [
        "cuda_lstm_forward.cc",
        #"cuda_lstm_forward.h",
        "lib00_lstm.so",  # prebuilt CUDA LSTM library built from 00_lstm.cu
        #"00_lstm.cu.cc"
    ],
    copts = ["-std=c++11"],
    deps = [
        "@org_tensorflow//tensorflow/core:framework_headers_lib",
        "@org_tensorflow//tensorflow/core/util/ctc",
        "@org_tensorflow//third_party/eigen3",
    ],
    alwayslink = 1,
)

Reference for Bazel BUILD rules: bazel C/C++ Rules

Build the ModelServer with the op linked in

To serve a model that uses a custom op, you must build the ModelServer binary with that op linked in. Specifically, you add the cuda_lstm_forward build target created above to the ModelServer's BUILD file.

Edit tensorflow_serving/model_servers/BUILD to add the custom op build target to SUPPORTED_TENSORFLOW_OPS, which is included by the server_lib target.

Find SUPPORTED_TENSORFLOW_OPS and modify it as follows:

SUPPORTED_TENSORFLOW_OPS = [
    "@org_tensorflow//tensorflow/contrib:contrib_kernels",
    "@org_tensorflow//tensorflow/contrib:contrib_ops_op_lib",
    # Added this line:
    #"//tensorflow_serving/fsmn_forward:fsmn_forward.so",
    "//tensorflow_serving/cuda_lstm_forward:cuda_lstm_forward.so",
]

Then build the ModelServer with build_tf.sh in the directory above tensorflow_serving:

#FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
export TF_CUDA_VERSION=8.0
export TF_CUDNN_VERSION=6
TF_SERVING_COMMIT=tags/1.5.0
#TF_COMMIT=tags/ais_v0.0.1
BAZEL_VERSION=0.15.0

# Answers for TensorFlow's interactive ./configure step.
export  TF_NEED_CUDA=1
export  TF_NEED_S3=1
export  TF_CUDA_COMPUTE_CAPABILITIES="3.5,5.2,6.1"
export  TF_NEED_GCP=1
export  TF_NEED_JEMALLOC=0
export  TF_NEED_HDFS=0
export  TF_NEED_OPENCL=0
#export  TF_NEED_MKL=1
export  TF_NEED_VERBS=0
export  TF_NEED_MPI=0
export  TF_NEED_GDR=0
export  TF_ENABLE_XLA=0
export  TF_CUDA_CLANG=0
export  TF_NEED_OPENCL_SYCL=0
export  CUDA_TOOLKIT_PATH=/usr/local/cuda
export  CUDNN_INSTALL_PATH=/usr/local/cuda
#export  MKL_INSTALL_PATH=/opt/intel/mkl
export  GCC_HOST_COMPILER_PATH=/usr/bin/gcc
export  PYTHON_BIN_PATH=/usr/bin/python
export  PYTHON_LIB_PATH=/usr/lib/python2.7/site-packages/
export  CC_OPT_FLAGS="-march=native"

# Clone the TensorFlow sources on the first run.
if [ ! -d "./tensorflow" ]; then
        git clone https://gitlab.spetechcular.com/core/tensorflow.git
fi

if [ ! -d "./build_out" ]; then
    mkdir ./build_out
fi

# Configure TensorFlow non-interactively using the variables exported above.
#git checkout $TF_SERVING_COMMIT
#git checkout $TF_COMMIT
cd ./tensorflow && \
TF_SET_ANDROID_WORKSPACE= ./configure

cd ..
# Build the ModelServer with CUDA enabled; the commented variants add
# CPU-specific optimization flags.
#bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k --verbose_failures  --crosstool_top=@local_config_cuda//crosstool:toolchain tensorflow_serving/model_servers:tensorflow_model_server
#bazel build -c opt --copt=-mavx --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k --verbose_failures --crosstool_top=@local_config_cuda//crosstool:toolchain --spawn_strategy=standalone tensorflow_serving/model_servers:tensorflow_model_server
bazel build -c opt --config=cuda -k --verbose_failures --crosstool_top=@local_config_cuda//crosstool:toolchain tensorflow_serving/model_servers:tensorflow_model_server

Serve a model that contains your custom op

  1. Copy the SavedModel into /home/public/tfs_sever_gpu/tfs_models/cudalstm/ and rename it to 1; this must be done from the command line: mv savedmodel/ 1
  2. Copy the dependency lib00_lstm.so into /home/public/tfs_sever_gpu/cuda_so/
  3. Copy the freshly built /home/public/serving_gpu_15addopjiami/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server into /home/public/tfs_sever_gpu/bin/. If the copy fails for lack of permissions, use chmod 777; check the modification time with ll to confirm the copy succeeded.
  4. Find the server process and its port with ps -ef | grep tensorflow
  5. Start the server with sh run13.sh in /home/public/tfs_sever_gpu/ (a script along the lines of run.sh shown below); no environment activation is needed, and it does not depend on the local machine.
  6. In /home/public/tfs_sever_gpu/model_transfer/, run python lstmctc.py and python cudalstm.py under the environment source activate /home/pz853/anaconda3/envs/py2. That environment is missing many packages (grpc, tensorflow-serving-api); note that pip-installing the GPU variant of the latter breaks absl and cannot easily be undone. A client sketch follows this list.
  7. For concurrency testing, write a shell script in the same directory that launches the client several times in the background with &; test with 1, 2, 4, 8, 16, 32, 64, and 128 concurrent clients.
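
The lstmctc.py and cudalstm.py clients above talk to the ModelServer over gRPC. Below is a minimal client sketch, assuming a recent tensorflow-serving-api (older releases exposed beta stubs from prediction_service_pb2 instead of prediction_service_pb2_grpc). The model name cudalstm matches the directory from step 1, while the input name, shape, and signature name are placeholders that must match your actual SavedModel:

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Address from run.sh below (--port=9001 on host 10.12.8.26).
channel = grpc.insecure_channel("10.12.8.26:9001")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "cudalstm"                    # directory name under tfs_models
request.model_spec.signature_name = "serving_default"   # assumed signature name

# Placeholder input: the key "inputs" and shape (1, 100, 40) are assumptions;
# use the real input name and shape from your SavedModel signature.
dummy = np.zeros((1, 100, 40), dtype=np.float32)
request.inputs["inputs"].CopyFrom(tf.make_tensor_proto(dummy))

result = stub.Predict(request, 10.0)  # 10-second deadline
print(result.outputs.keys())

The run.sh launcher script referenced in step 5 looks like this:
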
#!/bin/bash
# /home/public/tfs_sever_gpu/run.sh

basepath=$(dirname $(readlink -f $0))

# ipcpath: adjust as needed
#ipcpath=unix:${basepath}/f
ipcpath=10.12.8.26:9001

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${basepath}/cuda_so

# delete model.cfg if it exists
if [ -f "${basepath}/cfg/model.cfg" ]; then
    rm "${basepath}/cfg/model.cfg"
fi

# prepare model.cfg file
model_cfg=${basepath}/cfg/model.cfg
touch "$model_cfg"
echo  "model_config_list: {" >> $model_cfg
for dir in `ls ${basepath}/tfs_models`
do
    if [ -d "${basepath}/tfs_models/${dir}" ]; then
        echo "  config: {" >> $model_cfg
        echo "    name: \"${dir}\"," >> $model_cfg
        echo "    ## must be absolute path" >> $model_cfg
        echo "    base_path: \"${basepath}/tfs_models/${dir}\"," >> $model_cfg
        echo "    model_platform: \"tensorflow\"" >> $model_cfg
        echo "    model_version_policy: {" >> $model_cfg
        echo "      all: {}" >> $model_cfg
        echo "    }" >> $model_cfg
        echo "  }" >> $model_cfg
    fi
done
echo "}" >> $model_cfg

if [ ! -f "$model_cfg" ]; then
    echo "error: $model_cfg not exist"
    exit
fi

platform_cfg=${basepath}/cfg/platform.cfg
if [ ! -f "$platform_cfg" ]; then
    echo "error: $platform_cfg not exist"
    exit
fi

tfs_bin=${basepath}/bin/tensorflow_model_server_addop

#cpuinfo=`cat /proc/cpuinfo |grep flags | sed -n '1p' |grep avx -c`
#if [ $cpuinfo -gt 0 ]; then
#    tfs_bin=${basepath}/bin/tensorflow_model_server_xla
#fi
#cpuinfo=`cat /proc/cpuinfo |grep flags | sed -n '1p' |grep avx2 -c`
#if [ $cpuinfo -gt 0 ]; then
#    tfs_bin=${basepath}/bin/tensorflow_model_server_sse
#fi
#echo $tfs_bin 

#tfs_bin=${basepath}/bin/tensorflow_model_server

# Note: exec replaces the shell with the server process, so the loop body
# effectively runs only once.
while true;
do
    #CUDA_VISIBLE_DEVICES=0 exec ${tfs_bin}  --ipcpath=10.12.8.26:9002 --model_config_file=${model_cfg} --platform_config_file=${platform_cfg}
    CUDA_VISIBLE_DEVICES=0 exec ${tfs_bin}  --port=9001 --model_config_file=${model_cfg} --platform_config_file=${platform_cfg}
done
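
For reference, with a single model directory cudalstm under tfs_models, the loop above generates a model.cfg like the following:

model_config_list: {
  config: {
    name: "cudalstm",
    ## must be absolute path
    base_path: "/home/public/tfs_sever_gpu/tfs_models/cudalstm",
    model_platform: "tensorflow"
    model_version_policy: {
      all: {}
    }
  }
}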
