Part (1) of this series, on setting up Caffe under Win10 / VS2015 / a GPU of compute capability 7.5 / Python 3.5.2, covered how to build the Caffe project and its Python and MATLAB interfaces. This post shows how to use Caffe from the command line to train and test on the MNIST dataset, on the CIFAR-10 dataset, and on a dataset of your own.
(1) Training on the MNIST dataset
Put the MNIST dataset into the examples/mnist folder under the root directory.
The following two bat files convert the MNIST dataset.
This one converts it to the LevelDB format:
This one converts it to the LMDB format:
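As background on what Caffe's conversion tool (convert_mnist_data) reads: the raw MNIST files use the idx format, whose header is a magic number encoding the element type and the number of dimensions, followed by one big-endian 32-bit size per dimension. A minimal parsing sketch in Python (not part of Caffe):

```python
import struct

def parse_idx_header(buf):
    """Parse the header of an MNIST idx file.

    Layout: two zero bytes, a dtype code (0x08 = unsigned byte),
    a dimension count, then one big-endian uint32 per dimension.
    """
    _, _, dtype, ndim = struct.unpack_from(">BBBB", buf, 0)
    dims = struct.unpack_from(">" + "I" * ndim, buf, 4)
    return dtype, dims

# Header of train-images-idx3-ubyte: 60000 images of 28x28 pixels.
header = b"\x00\x00\x08\x03" + struct.pack(">III", 60000, 28, 28)
print(parse_idx_header(header))  # -> (8, (60000, 28, 28))
```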
Create a bat file my_add_mnist_run_train.bat (any name works) in the root directory with the following content:
Open lenet_solver.prototxt and you will see:
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
The solver configures the learning rate, the number of iterations, whether to run on the GPU or the CPU, and so on.
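The effective learning rate under the "inv" policy can be checked by hand: Caffe computes base_lr * (1 + gamma * iter)^(-power). A quick sketch using the values from lenet_solver.prototxt above:

```python
def inv_lr(base_lr, gamma, power, it):
    # Caffe's "inv" policy: lr = base_lr * (1 + gamma * iter) ^ (-power)
    return base_lr * (1.0 + gamma * it) ** (-power)

# Values from lenet_solver.prototxt; the rate decays smoothly from 0.01.
for it in (0, 5000, 10000):
    print(it, inv_lr(0.01, 0.0001, 0.75, it))
```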
Next open the lenet_train_test.prototxt file:
You can paste a prototxt file into the following site to visualize the network structure: http://ethereon.github.io/netscope/#/editor
name: "LeNet"
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_train_lmdb"
batch_size: 64
backend: LMDB
}
}
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_test_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 20
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 50
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
This file defines the network structure: each layer's filter sizes and strides, plus the dataset settings, which must match the data prepared above.
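To see how the layer parameters fit together: the transform_param scale 0.00390625 is 1/256, mapping pixel values into [0, 1); each 5x5 convolution with stride 1 and no padding shrinks the map by 4, and each 2x2/stride-2 max pool halves it. Tracing the spatial sizes from the 28x28 MNIST input (a sketch; Caffe rounds pooling output up, but these sizes divide evenly, so floor division suffices):

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

s = 28                      # MNIST input, 1x28x28
s = conv_out(s, 5)          # conv1: 20x24x24
s = conv_out(s, 2, 2)       # pool1: 20x12x12
s = conv_out(s, 5)          # conv2: 50x8x8
s = conv_out(s, 2, 2)       # pool2: 50x4x4
print(50 * s * s)           # ip1 sees 800 inputs -> 500 -> 10 classes
```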
Run the training bat file; when training finishes the result looks like this:
(2) Testing MNIST with the trained model
First compute mean.binaryproto.
The bat file content is as follows:
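What Caffe's compute_image_mean tool produces is just the per-pixel average over the training set, stored as a binaryproto, which is later subtracted from every input. Conceptually (a simplified pure-Python sketch over in-memory images, not the Caffe tool itself, which iterates over the LMDB entries):

```python
def mean_image(images):
    # Per-pixel average over a list of equally sized images
    # (each image is a list of rows of pixel values).
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(w)]
            for r in range(h)]

# Two tiny 2x2 "images"; every pixel averages to 127.5.
imgs = [[[0, 255], [255, 0]], [[255, 0], [0, 255]]]
print(mean_image(imgs))  # [[127.5, 127.5], [127.5, 127.5]]
```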
Make a copy of lenet_train_test.prototxt and rename it my_lenet_test.prototxt (any name works).
Compared with lenet_train_test.prototxt, add the statements marked in red below:
Create a bat file in the root directory with the following content:
Run the bat file; the result is as follows:
Create a bat file to classify a single MNIST image; its content is as follows:
The contents of the test image folder are as follows:
The contents of result.txt are as follows (Caffe's output is a 10-dimensional vector; the image classified here is a 2; the index of the largest value in the vector is taken and used to look up the corresponding class label):
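The lookup described above — take the argmax of the 10-dimensional output and read the matching line of the label file — can be sketched as follows (the probabilities here are made up for illustration, not the actual result.txt values):

```python
def predict_label(probs, labels):
    # The index of the largest score picks the class name.
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best]

# For MNIST the synset file is simply the digits 0-9, one per line.
labels = [str(d) for d in range(10)]
probs = [0.01, 0.0, 0.9, 0.02, 0.0, 0.03, 0.0, 0.01, 0.02, 0.01]
print(predict_label(probs, labels))  # prints 2
```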
The bat file's output is as follows:
(3) Training on CIFAR-10
First convert CIFAR-10 into a dataset format Caffe supports, here LMDB. The original dataset looks like this:
The conversion bat file contains the following:
Create a bat file to compute CIFAR-10's mean.binaryproto file.
Create a bat file to train the network.
The cifar10_quick_solver.prototxt file is as follows:
The cifar10_quick_train_test.prototxt file is as follows:
name: "CIFAR10_quick"
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mean_file: "./examples/cifar10/cifar_outputdata/mean.binaryproto"
}
data_param {
source: "./examples/cifar10/cifar_outputdata/cifar10_train_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mean_file: "./examples/cifar10/cifar_outputdata/mean.binaryproto"
}
data_param {
source: "./examples/cifar10/cifar_outputdata/cifar10_test_lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.0001
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "pool1"
top: "pool1"
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3"
top: "pool3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool3"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 64
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 10
weight_filler {
type: "gaussian"
std: 0.1
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
(4) Predicting a single CIFAR-10 image from the command line
As with the MNIST prediction above, the test image is:
A class-label file synset_words.txt is again needed so that Caffe's output index can be mapped to a name; its content is as follows:
The prediction bat file is as follows:
The output is as follows:
(5) Training on your own dataset
Create a mydata folder under data in the root directory; the training data will live there.
Inside it create a train and a val folder; each contains the following:
The images in the cat folder:
The images in the dog folder:
Also create train.txt and val.txt inside mydata to record the image paths and labels.
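Each line of train.txt/val.txt is a relative image path followed by an integer class label, the format expected by Caffe's convert_imageset tool. A sketch of generating such a listing (a simplified in-memory version; the cat/dog folder names match the layout above, the file names are made up):

```python
def make_listing(files_by_class, classes):
    # files_by_class: {"cat": ["1.jpg", ...], "dog": [...]}
    # Emits one "subfolder/filename label" line per image; the label
    # is the position of the class name in `classes`.
    lines = []
    for label, cls in enumerate(classes):
        for name in sorted(files_by_class[cls]):
            lines.append("%s/%s %d" % (cls, name, label))
    return lines

listing = make_listing({"cat": ["1.jpg", "2.jpg"], "dog": ["3.jpg"]},
                       ["cat", "dog"])
print("\n".join(listing))
# cat/1.jpg 0
# cat/2.jpg 0
# dog/3.jpg 1
```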
Create a bat file in the root directory to convert the jpg images to the LMDB format.
Create a bat file to compute mean.binaryproto.
Here I start from cifar10_full_solver.prototxt and cifar10_full_train_test.prototxt from the CIFAR-10 example and modify them: mainly the batch size, since my dataset is tiny, and the number of output classes, which goes from 10 to 2.
Create a bat file to train the network.
The content of solve.prototxt is as follows:
# The train/test net protocol buffer definition
net: "./data/mydata/train_val.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# With a validation batch size of 2, 4 iterations cover this small validation set.
test_iter: 4
# Carry out testing every 4 training iterations.
test_interval: 4
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001
momentum: 0.9
weight_decay: 0.004
# The learning rate policy
lr_policy: "fixed"
# Display every 4 iterations
display: 4
# The maximum number of iterations
max_iter: 6000
# snapshot intermediate results
snapshot: 2000
snapshot_format: HDF5
snapshot_prefix: "./data/mydata/full"
# solver mode: CPU or GPU
solver_mode: GPU
The content of train_val.prototxt is as follows:
name: "CIFAR10_full"
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mean_file: "./data/mydata/mean.binaryproto"
}
data_param {
source: "./data/mydata/mtrain"
batch_size: 4
backend: LMDB
}
}
layer {
name: "cifar"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mean_file: "./data/mydata/mean.binaryproto"
}
data_param {
source: "./data/mydata/mval"
batch_size: 2
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.0001
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "pool1"
top: "pool1"
}
layer {
name: "norm1"
type: "LRN"
bottom: "pool1"
top: "norm1"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "norm1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "norm2"
type: "LRN"
bottom: "pool2"
top: "norm2"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "norm2"
top: "conv3"
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3"
top: "pool3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool3"
top: "ip1"
param {
lr_mult: 1
decay_mult: 250
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip1"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip1"
bottom: "label"
top: "loss"
}
Run the bat file to generate the caffemodel file.
Create a deploy.prototxt file, modified from the train_val.prototxt above.
deploy.prototxt differs from train_val.prototxt in these respects:
(1) The input is no longer read from LMDB and is no longer split into training and test sets; the input layer type is Input, and its declared dimensions must match the training data, 121*121, or an error is raised;
(2) weight_filler and bias_filler are removed; those parameters already live in the caffemodel, which handles initialization;
(3) the final Accuracy and loss layers are removed and replaced with a Softmax layer, which outputs the probability of each class.
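The 121x121 input requirement can be sanity-checked by tracing the spatial sizes through the network: every 5x5/pad-2/stride-1 convolution preserves the size, and each 3x3/stride-2 pooling shrinks it (Caffe rounds pooling output up). A sketch:

```python
import math

def pool_out(size, kernel=3, stride=2):
    # Caffe pooling rounds up: ceil((size - kernel) / stride) + 1
    return math.ceil((size - kernel) / stride) + 1

s = 121                # deploy input, 3x121x121 (the convolutions keep this size)
s = pool_out(s)        # pool1 -> 60
s = pool_out(s)        # pool2 -> 30
s = pool_out(s)        # pool3 -> 15
print(64 * s * s)      # ip1 sees 64*15*15 = 14400 inputs -> 2 classes
```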
The deploy.prototxt file is as follows:
name: "CIFAR10_full"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 3 dim: 121 dim: 121 } }
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "pool1"
top: "pool1"
}
layer {
name: "norm1"
type: "LRN"
bottom: "pool1"
top: "norm1"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "norm1"
top: "conv2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "norm2"
type: "LRN"
bottom: "pool2"
top: "norm2"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "norm2"
top: "conv3"
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
stride: 1
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3"
top: "pool3"
pooling_param {
pool: AVE
kernel_size: 3
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool3"
top: "ip1"
param {
lr_mult: 1
decay_mult: 250
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "ip1"
top: "prob"
}
The output is as follows:
The test image is:
The prediction is correct.
Links to the files used above:
Files in examples\mnist under the root directory: https://pan.baidu.com/s/1H2tlBFEun-EPXSDn5uH8xg
Extraction code: 15xl
Files in examples\cifar10 under the root directory: https://pan.baidu.com/s/17Sy5mKtaFMEyB4_CEyAyrA
Extraction code: 5c1j
Files in \data\mydata under the root directory: https://pan.baidu.com/s/1deI-ka1CoIvJdgx9cyEhSQ
Extraction code: xgct
Command-line (bat) files: https://pan.baidu.com/s/1kJONutsQlkN5KJ-n-lPmyg
Extraction code: 5snd
Later posts will cover training networks and predicting images through the MATLAB, Python, and C++ interfaces.