Deploying a model on Docker to run predictions on images

Deploy a deep learning model with Docker and tensorflow/serving; reference: https://blog.csdn.net/qq_35565669/article/details/106903787

The software environment for this walkthrough is Windows 10 + Anaconda3 + TensorFlow 1.13.1 + Keras 2.3.1.

When the model simply takes a numpy array and returns a computed result, we can call it directly with:

query_data = '{"instances": [[1.0, 2.0, 3.0]]}'
requests.post(url, query_data)
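
For completeness, a fuller version of that REST call might look like the sketch below. It assumes the container's REST port 8501 is also published (the docker run command later in this post only maps the gRPC port 8500) and that the model is named ur_seg; the response JSON carries the results under the "predictions" key.

import json
import requests

# assumes the REST port was published, e.g. docker run -p 8501:8501 ... -e MODEL_NAME=ur_seg ...
url = 'http://127.0.0.1:8501/v1/models/ur_seg:predict'
query_data = json.dumps({"instances": [[1.0, 2.0, 3.0]]})
resp = requests.post(url, data=query_data)
print(resp.json()["predictions"])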

But when the input is image data, this approach no longer works, because the model's predict interface expects float-typed parameters, not strings. To see the model's exact input and output parameters, use the saved_model_cli command, as shown below.

(gpu) E:\tmp\tfserving\ur_seg\00000123>saved_model_cli show --dir . --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 3)
        name: data:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['softmax/truediv:0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 5)
        name: softmax/truediv:0
  Method name is: tensorflow/serving/predict

You can see that the model's signature is 'serving_default', the input layer is named 'input' and takes a tensor of dtype DT_FLOAT, and the output layer is named 'softmax/truediv:0' and returns a tensor of dtype DT_FLOAT.
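
If you only care about one signature instead of the full dump, saved_model_cli can also print a single SignatureDef, e.g.:

saved_model_cli show --dir . --tag_set serve --signature_def serving_default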

Start the container. The gRPC port inside the image is 8500. The command below mounts the local model directory given by source at the container path given by target, and starts the tfserving container.

docker run -p 8500:8500 --name="ur_seg" --mount type=bind,source=E:/tmp/tfserving/ur_seg,target=/models/ur_seg -e MODEL_NAME=ur_seg -t tensorflow/serving "&"
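
Before sending any requests it is worth confirming that the container is up and the model has loaded; docker ps lists the running containers and docker logs shows the tfserving startup messages (ur_seg is the container name set by --name above):

docker ps
docker logs ur_seg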

With this set up, we now need to send the data of the whole image. That requires the gRPC service, so first install tensorflow-serving-api. Note that its version must match your tensorflow version, otherwise you will hit compatibility problems, and it has to be installed with pip.

pip install tensorflow-serving-api==1.13.1
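
A quick sanity check that the two versions really match (this is just a pkg_resources query; if you installed tensorflow-gpu, query that package name instead):

import pkg_resources

# print the installed versions; both should be 1.13.1 here
print(pkg_resources.get_distribution("tensorflow").version)
print(pkg_resources.get_distribution("tensorflow-serving-api").version)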

The code for calling the service is as follows.

import grpc
import numpy as np
import cv2 as cv
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

import functions  # local helper module with the project's post-processing utilities

def request_server(img, server_url):
    """
    Send an inference request to the TensorFlow Serving service.
    :param img: preprocessed image to run inference on, numpy array, shape: (h, w, 3)
    :param server_url: address and port of TensorFlow Serving, str, e.g. '127.0.0.1:8500'
    :return: result array returned by the model, numpy array
    """
    channel = grpc.insecure_channel(server_url)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = "ur_seg"  # model name inside the container
    request.model_spec.signature_name = "serving_default"
    # "input" is the name of the model's input layer; add a batch dimension
    # and set dtype to float, since the model expects a DT_FLOAT tensor
    request.inputs["input"].CopyFrom(
        tf.compat.v1.make_tensor_proto(img, shape=[1] + list(img.shape), dtype=tf.float32))
    response = stub.Predict(request, 5.0)  # 5 second timeout
    print(response)
    # float_val holds the output tensor flattened into a 1-D list
    return np.asarray(response.outputs["softmax/truediv:0"].float_val)

if __name__ == '__main__':

    img_path = r'E:\imgs\2.png'
    img = cv.imread(img_path)
    img = img.astype(np.float32)
    url = r'127.0.0.1:8500'
    pred = request_server(img, url)
    preds = functions.prediction_to_image(pred)
    cv.imshow('prediction', preds)
    cv.waitKey(0)
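
functions here is a local helper module from the original project, so prediction_to_image itself is not shown. Purely as an illustration (the function below is my own sketch, not the author's code), such a post-processing step would reshape the flattened float_val back to (h, w, 5), take the per-pixel argmax over the 5 classes, and turn the class indices into something cv.imshow can display:

import numpy as np

def prediction_to_image_sketch(pred, height, width, num_classes=5):
    # hypothetical post-processing: reshape the flat softmax output back to
    # (h, w, num_classes), pick the most likely class per pixel, and spread
    # the class indices over 0-255 so the result is visible with cv.imshow
    probs = pred.reshape(height, width, num_classes)
    labels = np.argmax(probs, axis=-1).astype(np.uint8)
    return labels * (255 // (num_classes - 1))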

The reference links all give the container address as 0.0.0.0:8500, but that address turned out to be unavailable:

File "E:/Brian/videoProcess/us_analysis/code/deploy.py", line 63, in request_server
    response = stub.Predict(request, 10.0)
  File "C:\Users\csai\Anaconda2\envs\gpu\lib\site-packages\grpc\_channel.py", line 826, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "C:\Users\csai\Anaconda2\envs\gpu\lib\site-packages\grpc\_channel.py", line 729, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
 status = StatusCode.UNAVAILABLE
 details = "failed to connect to all addresses"
 debug_error_string = "{"created":"@1593227968.373000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3941,"referenced_errors":[{"created":"@1593227968.373000000","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":393,"grpc_status":14}]}"

I dug through a pile of information without finding a fix; only after switching to 127.0.0.1:8500 did I get the correct result (a real pitfall!).

I also took a look at the hosts file.

On a server, 0.0.0.0 is not a real IP address; it stands for all of the machine's IPv4 addresses. Listening on a port at 0.0.0.0 means listening on that port across every IP of the machine.

localhost is actually a domain name. Windows maps localhost to 127.0.0.1 by default, but localhost is not the same as 127.0.0.1: the IP address that localhost points to is configurable.
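
On Windows that mapping lives in C:\Windows\System32\drivers\etc\hosts; a typical entry looks like the line below (shown only as an example; recent Windows versions resolve localhost even when this line is commented out):

127.0.0.1    localhost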

 

References:

http://dockone.io/article/9209

https://blog.csdn.net/weixin_34343000/article/details/88118667

https://zhuanlan.zhihu.com/p/96917543

https://zhuanlan.zhihu.com/p/52096200

https://blog.csdn.net/liyi1009365545/article/details/84780476

https://blog.csdn.net/JerryZhang__/article/details/85107506
