Taking the LeNet model as an example,
converted from Caffe's lenet_deploy.prototxt file.
name: "LeNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "ip2"
top: "prob"
}
The converted param file:
7767517
9 9
Input data 0 1 data 0=28 1=28 2=1
Convolution conv1 1 1 data conv1 0=20 1=5 2=1 3=1 4=0 5=1 6=500
Pooling pool1 1 1 conv1 pool1 0=0 1=2 2=2 3=0 4=0
Convolution conv2 1 1 pool1 conv2 0=50 1=5 2=1 3=1 4=0 5=1 6=25000
Pooling pool2 1 1 conv2 pool2 0=0 1=2 2=2 3=0 4=0
InnerProduct ip1 1 1 pool2 ip1 0=500 1=1 2=400000
ReLU relu1 1 1 ip1 ip1_relu1
InnerProduct ip2 1 1 ip1_relu1 ip2 0=10 1=1 2=5000
Softmax prob 1 1 ip2 prob 0=0
First line: version information
The value is the version (magic number) of this param file.
Related ncnn source code:
int magic = 0;
fscanf(fp, "%d", &magic);
if (magic != 7767517)
{
fprintf(stderr, "param is too old, please regenerate\n");
return -1;
}
Second line: layer and blob counts
First number: the number of layers
Second number: the number of data-exchange structures (blobs)
Related ncnn source code:
// parse
int layer_count = 0;
int blob_count = 0;
fscanf(fp, "%d %d", &layer_count, &blob_count);
Third line and below: the details of each layer.
The Input layer is a little special (it has no bottom blob).
The first 4 values have fixed meanings:
(1) layer type
(2) layer name
(3) number of input blobs (bottom blobs)
(4) number of output blobs (top blobs)
They are followed by three kinds of values, strictly in this order:
(1) input blob names (a layer may have multiple inputs, in which case there are multiple names)
(2) output blob names (a layer may have multiple outputs, in which case there are multiple names)
(3) special parameters (possibly absent): either the form k=v, or the array form k=len,v1,v2,v3,.... Inside ncnn these are stored in a ParamDict structure; the meaning of each parameter depends on the layer type.
Take the first convolution layer as an example:
layer type: Convolution
layer name: conv1
number of input blobs (bottom blobs): 1
number of output blobs (top blobs): 1
input blob name: data
output blob name: conv1
special parameter 1: 0=20, num_output: 20
special parameter 2: 1=5, kernel_size: 5
special parameter 3: 2=1, dilation: 1
special parameter 4: 3=1, stride: 1
special parameter 5: 4=0, pad: 0
special parameter 6: 5=1, bias_term: 1 (the layer has a bias)
special parameter 7: 6=500, weight_data_size: the number of weight elements in this layer, 5*5*1*20=500