【ncnn android】Algorithm Porting (7): A Quick Look at the pytorch2onnx Code

Goals:

  • Understand the overall torch2onnx flow
  • Understand some of the technical details involved

1. Code details

  1. get_graph
    Converts the PyTorch model into the graph that the ONNX export needs (a public-API sketch of this step follows after step 2):
  • graph, torch_out = _trace_and_get_graph_from_model(model, args, training)

  • trace, torch_out, inputs_states = torch.jit.get_trace_graph(model, args, _force_outplace=True, _return_inputs_states=True)
    warn_on_static_input_change(inputs_states)

  2. graph_export_onnx
proto, export_map = graph._export_onnx(
    params_dict, opset_version, dynamic_axes, defer_weight_export,
    operator_export_type, strip_doc_string, val_keep_init_as_ip)
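
Both steps above are internals of torch.onnx.export. The tracing half can be reproduced with the public torch.jit API; a minimal sketch, using a toy stand-in model (all names here are illustrative placeholders, not from the original code):

import torch
import torch.nn as nn

# Toy stand-in model, just to have something to trace
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
dummy = torch.randn(1, 3, 8, 8)

# torch.jit.trace is the public counterpart of the internal
# torch.jit.get_trace_graph call quoted above: it runs the model once
# and records the executed ops as a graph
traced = torch.jit.trace(model, dummy)
print(traced.graph)  # the traced IR; export then lowers it to ONNX ops and serializes it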

2. Miscellaneous

  1. batchnorm
    When exporting to ONNX, set verbose=True to see which attributes each node carries.
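A minimal export sketch (the model here is a Conv-BN-ReLU-MaxPool stand-in covering the ops discussed in this section, not the original DBFace code):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(3, stride=1, padding=1),
).eval()
dummy = torch.randn(1, 3, 8, 8)

# verbose=True prints every traced node together with its ONNX attributes
torch.onnx.export(model, dummy, "model.onnx", verbose=True)

For the DBFace model used in this series, the verbose output contains lines like: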
%554 : Float(1, 16, 8, 8) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%550, %model.detect.context.inconv.conv.weight), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/Conv2d[conv] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/modules/conv.py:342:0
  %555 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%554, %model.detect.context.inconv.bn.weight, %model.detect.context.inconv.bn.bias, %model.detect.context.inconv.bn.running_mean, %model.detect.context.inconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
  %556 : Float(1, 16, 8, 8) = onnx::Relu(%555), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[inconv]/ReLU[act] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:912:0
  %557 : Float(1, 16, 8, 8) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%556, %model.detect.context.upconv.conv.weight), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/Conv2d[conv] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/modules/conv.py:342:0
  %558 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%557, %model.detect.context.upconv.bn.weight, %model.detect.context.upconv.bn.bias, %model.detect.context.upconv.bn.running_mean, %model.detect.context.upconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
  %559 : Float(1, 16, 8, 8) = onnx::Relu(%558), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/ReLU[act] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:912:0

Taking batchnorm as an example:

  • First, the line from the PyTorch verbose output:
    %558 : Float(1, 16, 8, 8) = onnx::BatchNormalization[epsilon=1e-05, momentum=0.9](%557, %model.detect.context.upconv.bn.weight, %model.detect.context.upconv.bn.bias, %model.detect.context.upconv.bn.running_mean, %model.detect.context.upconv.bn.running_var), scope: OnnxModel/DBFace[model]/DetectModule[detect]/ContextModule[context]/CBAModule[upconv]/BatchNorm2d[bn] # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:1670:0
    After the input tensor %557, the arguments in parentheses are the parameters that get saved: bn.weight, bn.bias, bn.running_mean, bn.running_var.

  • How onnx2ncnn in ncnn reads these pretrained weights (a verification sketch in Python follows the list below):

const onnx::TensorProto& scale = weights[node.input(1)];
const onnx::TensorProto& B = weights[node.input(2)];
const onnx::TensorProto& mean = weights[node.input(3)];
const onnx::TensorProto& var = weights[node.input(4)];
  • node.input(1): bn.weight
  • node.input(2): bn.bias
  • node.input(3): bn.running_mean
  • node.input(4): bn.running_var
    node.input(0) is the data input (%557 above); the remaining inputs follow the order in which pytorch2onnx writes the parameters.
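
The same ordering can be verified from Python with the onnx package (a sketch; "model.onnx" is the file written by the export sketch above):

import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "BatchNormalization":
        # inputs in order: X, scale (bn.weight), B (bn.bias),
        # mean (bn.running_mean), var (bn.running_var)
        print(list(node.input))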
  2. maxpool
  • PyTorch's verbose output:
%pool_hm : Float(1, 1, 8, 8) = onnx::MaxPool[ceil_mode=0, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%hm), scope: OnnxModel # /home/yangna/yangna/tool/anaconda2/envs/torch130/lib/python3.6/site-packages/torch/nn/functional.py:488:0
  • How ncnn reads the structural parameters:
    A maxpool layer has no pretrained weights, only structural parameters, so onnx2ncnn reads node attributes instead of weight tensors:
std::string auto_pad = get_node_attr_s(node, "auto_pad");//TODO
std::vector<int> kernel_shape = get_node_attr_ai(node, "kernel_shape");
std::vector<int> strides = get_node_attr_ai(node, "strides");
std::vector<int> pads = get_node_attr_ai(node, "pads");
  • Note: the "auto_pad" attribute read here is not the same thing as the "ceil_mode" attribute in the PyTorch output above. This comes from a version mismatch between the pytorch2onnx exporter and ncnn: presumably, around the ncnn-20180704 release, the ONNX representation of MaxPool still carried an "auto_pad" attribute. Which attributes a node actually carries can be listed as in the sketch below.
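
A sketch for listing a MaxPool node's attribute names (again reading the "model.onnx" written by the export sketch above):

import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "MaxPool":
        # with a recent exporter this prints names such as
        # ceil_mode, kernel_shape, pads, strides -- and no auto_pad
        print([attr.name for attr in node.attribute])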