Installing TensorFlow Serving on Ubuntu

Reference: https://blog.csdn.net/u010175803/article/details/81333583

1. Install grpcio globally

sudo pip3 install grpcio

2. Install the dependencies

sudo apt-get update && sudo apt-get install -y automake build-essential curl libcurl3-dev git libtool libfreetype6-dev libpng12-dev libzmq3-dev pkg-config python3-dev python3-numpy python3-pip software-properties-common swig zip zlib1g-dev

3. Install tensorflow-serving-api

The Python client can then be run without Bazel:

pip3 install tensorflow-serving-api
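
As an optional sanity check that grpcio and tensorflow-serving-api installed correctly, you can try importing the request protos from Python (a small sketch, not required for the rest of the setup):

# Optional sanity check: these imports should succeed once grpcio and
# tensorflow-serving-api are installed.
import grpc
from tensorflow_serving.apis import predict_pb2

print('grpcio and tensorflow-serving-api are importable')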

For Serving itself you can either install a prebuilt binary or build from source; here we use the prebuilt binary.
The TensorFlow Serving ModelServer comes in two variants, tensorflow-model-server and tensorflow-model-server-universal. The former is optimized for instruction sets such as SSE4 and AVX, but it does not run on older machines; if it is not usable because the processor lacks AVX support, install the latter.
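
If you are unsure whether the processor supports AVX, a quick look at /proc/cpuinfo settles it. A minimal sketch, assuming a Linux machine that exposes /proc/cpuinfo:

# Hypothetical helper: report whether the CPU advertises the AVX flag.
has_avx = False
with open('/proc/cpuinfo') as f:
    for line in f:
        if line.startswith('flags'):
            has_avx = 'avx' in line.split()
            break

if has_avx:
    print('AVX available: tensorflow-model-server should work')
else:
    print('No AVX flag: install tensorflow-model-server-universal instead')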

# Add the Serving distribution URI as a package source
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
# Update the package lists and install; afterwards the server can be invoked with the tensorflow_model_server command
sudo apt-get update && sudo apt-get install tensorflow-model-server

You can later upgrade the ModelServer to a newer version with:

sudo apt-get upgrade tensorflow-model-server

4. Train and export a model for deployment

Download the TensorFlow Serving source code from https://github.com/tensorflow/serving, copy the serving/tensorflow_serving/example folder somewhere else, and run mnist_saved_model.py to produce a test model (the full script is reproduced after the command):

python mnist_saved_model.py --training_iteration=1000 --model_version=1 ./test_model

# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

#!/usr/bin/env python
r"""Train and export a simple Softmax Regression TensorFlow model.

The model is from the TensorFlow "MNIST For ML Beginner" tutorial. This program
simply follows all its training instructions, and uses TensorFlow SavedModel to
export the trained model with proper signatures that can be loaded by standard
tensorflow_model_server.

Usage: mnist_saved_model.py [--training_iteration=x] [--model_version=y] \
    export_dir
"""

from __future__ import print_function

import os
import sys

# This is a placeholder for a Google-internal import.

import tensorflow as tf

import mnist_input_data

tf.app.flags.DEFINE_integer('training_iteration', 1000,
                            'number of training iterations.')
tf.app.flags.DEFINE_integer('model_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_string('work_dir', './mnist_data', 'Working directory.')
FLAGS = tf.app.flags.FLAGS


def main(_):
    if len(sys.argv) < 2 or sys.argv[-1].startswith('-'):
        print('Usage: mnist_saved_model.py [--training_iteration=x] '
              '[--model_version=y] export_dir')
        sys.exit(-1)
    if FLAGS.training_iteration <= 0:
        print('Please specify a positive value for training iteration.')
        sys.exit(-1)
    if FLAGS.model_version <= 0:
        print('Please specify a positive value for version number.')
        sys.exit(-1)

    # Train model
    print('Training model...')
    mnist = mnist_input_data.read_data_sets(FLAGS.work_dir, one_hot=True)

    sess = tf.InteractiveSession()

    x = tf.placeholder('float', shape=[None, 784])
    y_ = tf.placeholder('float', shape=[None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    sess.run(tf.global_variables_initializer())
    y = tf.nn.softmax(tf.matmul(x, w) + b, name='y')
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)


    for _ in range(FLAGS.training_iteration):
        batch = mnist.train.next_batch(50)
        train_step.run(feed_dict={x: batch[0], y_: batch[1]})
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
    print('training accuracy %g' % sess.run(
        accuracy, feed_dict={
            x: mnist.test.images,
            y_: mnist.test.labels
        }))
    print('Done training!')

    # Export model
    # WARNING(break-tutorial-inline-code): The following code snippet is
    # in-lined in tutorials, please update tutorial documents accordingly
    # whenever code changes.

    export_path_base = sys.argv[-1]
    export_path = os.path.join(
        tf.compat.as_bytes(export_path_base),
        tf.compat.as_bytes(str(FLAGS.model_version)))
    print('Exporting trained model to', export_path)
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
  
    # Build tensor info protos for the input tensor x and the output tensor y
    tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(y)

    # Build the prediction_signature
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            # The input key used in the client request must match the key set here
            inputs={'images': tensor_info_x},
            # In the response, the result is retrieved via the key set here ('scores')
            outputs={'scores': tensor_info_y},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    
    # Add the meta graph and variables to the builder
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            # signature_name used by the client when making a request
            'predict_images': prediction_signature,
        },
        main_op=tf.tables_initializer(),
        # strip default-valued attributes for forward compatibility
        strip_default_attrs=True)

    builder.save()

    print('Done exporting!')


if __name__ == '__main__':
    tf.app.run()

 

The last argument is the directory where the trained model is saved.

After it runs, the directory layout under ./test_model looks like this:

test_model/
└── 1            
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

1 is the model's model_version; saved_model.pb is a model file that stores both the graph and the weights.
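
Before copying the model anywhere, you can optionally load the export back in Python to confirm it is valid and that the predict_images signature is present. A minimal sketch using the TF 1.x loader API, assuming the ./test_model/1 directory produced above:

import tensorflow as tf

# Load the SavedModel back with the same 'serve' tag it was exported with.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], './test_model/1')
    # Should list 'predict_images', the signature defined in mnist_saved_model.py.
    print(list(meta_graph.signature_def.keys()))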

Copy the folder 1 into the TensorFlow Serving working directory /home/xxx/Serving/model/mnist.

Start the Serving service:

tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/home/xxx/Serving/model/mnist

model_name is the name of the model; when sending a request, the client must use the same IP address, port, and model_name as the running Serving service.
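
Before running the client, you can confirm that the server is actually reachable by waiting for the gRPC channel to become ready. A small sketch, assuming the server started above on localhost:9000:

import grpc

channel = grpc.insecure_channel('localhost:9000')
try:
    # Blocks until the channel connects, or raises after 10 seconds.
    grpc.channel_ready_future(channel).result(timeout=10)
    print('Serving is reachable on localhost:9000')
except grpc.FutureTimeoutError:
    print('Could not connect to the Serving endpoint')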

When deploying multiple models, different models can be served on different ports.

5. Run the client

Run mnist_client.py from the example folder; the lines after the command are the core request-building part of the script:

python mnist_client.py --num_tests=100 --server=localhost:9000

request = predict_pb2.PredictRequest()
# the model_name defined when Serving was started
request.model_spec.name = 'mnist'
# a key from the signature_def_map defined when the model was exported
request.model_spec.signature_name = 'predict_images'
image, label = test_data_set.next_batch(1)

request.inputs['images'].CopyFrom(
            tf.contrib.util.make_tensor_proto(image[0], shape=[1, image[0].size]))
result_counter.throttle()
result_future = stub.Predict.future(request, 5.0)  # 5 seconds
result_future.add_done_callback(_create_rpc_callback(label[0], result_counter))
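
The fragment above relies on a stub and a test_data_set that mnist_client.py sets up earlier, and it sends the request asynchronously via a future. As a rough, self-contained sketch of the same flow (assuming the grpc-based stub shipped with recent tensorflow-serving-api releases; older releases use the grpc.beta API instead, and the random array here is only a stand-in for a real MNIST image):

import grpc
import numpy
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Connect to the server started above.
channel = grpc.insecure_channel('localhost:9000')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'                     # --model_name of the server
request.model_spec.signature_name = 'predict_images'  # key in signature_def_map

# Stand-in for one flattened 28x28 MNIST image.
image = numpy.random.rand(784).astype(numpy.float32)
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(image, shape=[1, image.size]))

# Synchronous call; mnist_client.py uses stub.Predict.future(request, 5.0) instead.
result = stub.Predict(request, 5.0)  # 5-second timeout
# 'scores' is the output key defined in the prediction_signature.
print(tf.make_ndarray(result.outputs['scores']))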

 
