Caffe's First Example

After installing Caffe, you can try the MNIST example that ships with it. The official tutorial is here:

http://caffe.berkeleyvision.org/gathered/examples/mnist.html

First, run the following commands to download the MNIST data:

cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh

Here $CAFFE_ROOT stands for the directory where Caffe is installed. The download may well fail, however, because the script relies on wget and gunzip, which are not available on Windows by default and must be installed separately. Alternatively, inspect get_mnist.sh and download the following four files manually:

http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz

http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz

http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz

http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz

After downloading, extract the files into the $CAFFE_ROOT/data/mnist/ folder, as shown in the figure below.

Here the CaffeBVLC\caffe directory is $CAFFE_ROOT.
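
If wget and gunzip are unavailable, the download and extraction can also be scripted. The following is a minimal Python sketch (not part of the official tutorial; it assumes Python 2.7, matching the Anaconda environment used later in this article, and is run from $CAFFE_ROOT):

import gzip
import os
import urllib2  # Python 2.7; use urllib.request on Python 3

base = 'http://yann.lecun.com/exdb/mnist/'
names = ['train-images-idx3-ubyte', 'train-labels-idx1-ubyte',
         't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte']
for name in names:
    out_path = os.path.join('data', 'mnist', name)
    # Download the .gz archive into data/mnist/
    with open(out_path + '.gz', 'wb') as f:
        f.write(urllib2.urlopen(base + name + '.gz').read())
    # Decompress it next to the archive
    with gzip.open(out_path + '.gz', 'rb') as gz, open(out_path, 'wb') as out:
        out.write(gz.read())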

 

Next, copy $CAFFE_ROOT\build\examples\mnist\Release\convert_mnist_data.exe into the $CAFFE_ROOT\examples\mnist\ directory and run the following commands:

cd $CAFFE_ROOT\examples\mnist
convert_mnist_data.exe ..\..\data\mnist\train-images-idx3-ubyte \
  ..\..\data\mnist\train-labels-idx1-ubyte mnist_train_lmdb --backend=lmdb
convert_mnist_data.exe ..\..\data\mnist\t10k-images-idx3-ubyte \
  ..\..\data\mnist\t10k-labels-idx1-ubyte mnist_test_lmdb --backend=lmdb

Note that you may hit missing-DLL errors while these run. Find each missing DLL under the $CAFFE_ROOT directory and copy it into $CAFFE_ROOT\examples\mnist, as shown in the figure below (python27.dll must be taken from the Python installation directory; alternatively, run the commands in an Anaconda Python 2.7 terminal, in which case python27.dll need not be copied. All commands below are executed in the Anaconda environment).

After running these commands, two new directories appear under that directory, mnist_test_lmdb and mnist_train_lmdb, as shown in the figure above.
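
To check that the conversion worked, one record can be read back from the new database. A minimal sketch, assuming the lmdb Python package and pycaffe's compiled protobuf bindings are importable in the Anaconda environment, run from $CAFFE_ROOT\examples\mnist:

import lmdb
from caffe.proto import caffe_pb2

env = lmdb.open('mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())  # first (key, value) record
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)
    # MNIST images should come back as 1x28x28 with a 0-9 label
    print('first record: %dx%dx%d, label %d'
          % (datum.channels, datum.height, datum.width, datum.label))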

 

Add the directory containing the caffe.exe that ran successfully earlier, $CAFFE_ROOT\build\tools\Release, to the PATH environment variable, then run the following commands from the $CAFFE_ROOT directory (see the previous article, "Installing TensorFlow and Caffe on Windows"):

cd $CAFFE_ROOT
caffe.exe train --solver=examples\mnist\lenet_solver.prototxt

The result is as follows:

The output in the figure above shows that Caffe ran successfully, training and testing on the MNIST dataset. The trained model is saved in $CAFFE_ROOT\examples\mnist\lenet_iter_10000.caffemodel and can be deployed in an application for recognition.
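
A minimal sketch of such a deployment through pycaffe, assuming pycaffe is built and importable and using examples/mnist/lenet.prototxt (the deploy-time network definition that ships with Caffe, whose output blob is named prob):

import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

img = np.zeros((28, 28), dtype=np.float32)  # placeholder: a 28x28 grayscale digit
net.blobs['data'].reshape(1, 1, 28, 28)     # single-image batch
net.blobs['data'].data[...] = img * 0.00390625  # same 1/256 scaling as training
out = net.forward()
print('predicted digit: %d' % out['prob'].argmax())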

 

The rest of this article briefly walks through Caffe's model definition and configuration files. First, look at lenet_solver.prototxt, whose contents are:

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: CPU

The first line gives the location of the model definition file. The next two lines configure testing: each test pass runs for 100 iterations (test_iter), which at a test batch size of 100 covers all 10,000 test images, and a test is carried out every 500 training iterations (test_interval). base_lr, momentum, and weight_decay set the network's base learning rate, momentum, and weight decay. lr_policy, gamma, and power define the learning-rate decay policy and its parameters; display prints progress every 100 iterations; max_iter sets the maximum number of training iterations. snapshot and snapshot_prefix control how often the network parameters are saved and the file prefix (including the directory) for those snapshots. solver_mode selects CPU or GPU computation.
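
With the "inv" policy, the effective learning rate at iteration t is base_lr * (1 + gamma * t)^(-power). A quick sketch of how the rate decays with the values above:

# lr(t) = base_lr * (1 + gamma * t) ** (-power) for lr_policy: "inv"
base_lr, gamma, power = 0.01, 0.0001, 0.75
for t in (0, 100, 1000, 5000, 10000):
    print('iter %5d: lr = %.6f' % (t, base_lr * (1 + gamma * t) ** (-power)))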

 

Next, look at lenet_train_test.prototxt:

name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

 

First, the network name is defined:

name: "LeNet"

 

Then come the data layers that read data into the network; one is used only for training (TRAIN phase) and the other only for testing (TEST phase):

layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}

This layer is named mnist and has type Data; it reads from the given LMDB source. batch_size sets how many examples are read per iteration, and the scale of 0.00390625 equals 1/256, rescaling the pixel values into the range [0, 1). The layer produces two output blobs (top): one for the data and one for the labels.

 

Next comes the first convolution layer:

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

This layer takes the data blob produced by the data layer as input and produces 20 output channels, with a kernel size of 5 and a stride of 1. The fillers initialize the weights and biases: the weight filler uses the xavier algorithm, which automatically chooses the initialization scale from the number of input and output neurons, while the bias filler simply sets the biases to the constant 0. lr_mult gives per-blob adjustments of the learning rate: the weight learning rate equals the value set in the solver, while the bias learning rate is twice that, which usually leads to better convergence.
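
For reference, a sketch of the equivalent weight initialization for conv1, assuming Caffe's default fan-in variant of xavier (uniform in [-s, s] with s = sqrt(3 / fan_in)):

import numpy as np

fan_in = 1 * 5 * 5  # input channels * kernel height * kernel width for conv1
s = np.sqrt(3.0 / fan_in)
# 20 output channels, 1 input channel, 5x5 kernels, as in the prototxt above
weights = np.random.uniform(-s, s, size=(20, 1, 5, 5))
print('init range: +/- %.3f' % s)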

 

Then a pooling layer:

layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}

This uses max pooling with a kernel size of 2 and a stride of 2. Similarly, lenet_train_test.prototxt defines a second convolution layer and a second pooling layer; the sketch below traces the spatial dimensions through all four layers.
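
Since none of these layers uses padding, the output spatial size is (in - kernel) / stride + 1. A small sketch tracing a 28x28 MNIST image through the four layers:

def out_size(in_size, kernel, stride=1, pad=0):
    # output size of a Caffe convolution/pooling layer (dims divide evenly here)
    return (in_size + 2 * pad - kernel) // stride + 1

print('conv1: %d' % out_size(28, 5))     # 28x28 -> 24x24, 20 channels
print('pool1: %d' % out_size(24, 2, 2))  # 24x24 -> 12x12
print('conv2: %d' % out_size(12, 5))     # 12x12 -> 8x8, 50 channels
print('pool2: %d' % out_size(8, 2, 2))   # 8x8 -> 4x4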

 

Then a fully connected layer:

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

This defines a fully connected (InnerProduct) layer with 500 outputs; its input is the flattened pool2 output, i.e. 50 × 4 × 4 = 800 values per image.

 

Then a ReLU layer:

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}

Since ReLU is an element-wise operation, it can modify its input in place to save memory; this is expressed by giving the top and the bottom the same name. After the ReLU layer another fully connected layer follows; see lenet_train_test.prototxt.
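
In numpy terms, the in-place element-wise operation looks like this (a sketch, not Caffe code):

import numpy as np

x = np.array([-1.5, 0.0, 2.0])
np.maximum(x, 0, out=x)  # overwrites x in place, like top == bottom above
print(x)                 # [ 0.  0.  2.]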

 

Finally, the loss layer:

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

The softmax_loss layer implements both the softmax and the multinomial logistic loss. It takes two inputs: the first is the prediction (ip2) and the second is the label supplied by the data layer. The layer produces no further output; it computes the loss value, reports it, and initiates the gradients for backpropagation.
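
For a single example, the computation amounts to a softmax over the ten ip2 scores followed by the negative log-probability of the true label (Caffe averages this over the batch). A sketch:

import numpy as np

scores = np.random.randn(10)       # hypothetical ip2 output for one image
label = 3                          # hypothetical true digit
p = np.exp(scores - scores.max())  # numerically stable softmax
p /= p.sum()
loss = -np.log(p[label])
print('loss = %.4f' % loss)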

 

Note that lenet_train_test.prototxt also contains an accuracy layer. It consumes the same two inputs as the loss layer, but instead of a loss it reports the classification accuracy, and it is included only in the TEST phase.
