Interpreting Caffe's MNIST Training Log

The LeNet network structure is as follows (the original diagram is not reproduced here; the layer chain, as seen in the log below, is conv1 → pool1 → conv2 → pool2 → ip1 → relu1 → ip2 → loss):
The MNIST training log is shown below (with max_iter set to 3 and CPU mode enabled):

I0223 14:55:42.870805  6406 caffe.cpp:178] Use CPU.// running in CPU mode

I0223 14:55:42.871371  6406 solver.cpp:48] Initializing solver from parameters: // the solver is initialized from the parameters in solver.prototxt

test_iter: 100

test_interval: 500

base_lr: 0.01

display: 100

max_iter: 3

lr_policy: "inv"

gamma: 0.0001

power: 0.75

momentum: 0.9

weight_decay: 0.0005

snapshot: 5000

snapshot_prefix: "examples/mnist/lenet"

solver_mode: CPU

net: "examples/mnist/lenet_train_test.prototxt"
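The `lr_policy: "inv"` entry above, together with `base_lr`, `gamma`, and `power`, determines how the learning rate decays over iterations. As a minimal sketch (plain Python, not Caffe code), the "inv" policy computes lr = base_lr * (1 + gamma * iter)^(-power):

```python
# Sketch of Caffe's "inv" learning-rate policy, using the solver values above.
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    """lr = base_lr * (1 + gamma * iter)^(-power)"""
    return base_lr * (1.0 + gamma * iteration) ** (-power)

print(inv_lr(0))      # 0.01 at iteration 0, matching "Iteration 0, lr = 0.01" later in the log
print(inv_lr(10000))  # smaller as training progresses
```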

I0223 14:55:42.871707  6406 solver.cpp:91] Creating training net from net file: examples/mnist/lenet_train_test.prototxt

I0223 14:55:42.872799  6406 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist

I0223 14:55:42.872871  6406 net.cpp:322] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy

I0223 14:55:42.873092  6406 net.cpp:49] Initializing net from parameters: // initialize the network parameters

name: "LeNet"

state {

  phase: TRAIN

}

layer {

  name: "mnist"

  type: "Data"

  top: "data"

  top: "label"

  include {

    phase: TRAIN

  }

  transform_param {

    scale: 0.00390625

  }

  data_param {

    source: "examples/mnist/mnist_train_lmdb"

    batch_size: 64

    backend: LMDB

  }

}
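The `transform_param` scale of 0.00390625 in the data layer above is exactly 1/256, which maps raw byte pixel values in [0, 255] into [0, 1). A quick check:

```python
# Verify that the data layer's scale factor is 1/256 and keeps pixels below 1.0.
scale = 0.00390625
assert scale == 1.0 / 256
print(255 * scale)  # 0.99609375, the maximum scaled pixel value
```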

layer {

  name: "conv1"

  type: "Convolution"

  bottom: "data"

  top: "conv1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 20

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool1"

  type: "Pooling"

  bottom: "conv1"

  top: "pool1"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "conv2"

  type: "Convolution"

  bottom: "pool1"

  top: "conv2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 50

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool2"

  type: "Pooling"

  bottom: "conv2"

  top: "pool2"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "ip1"

  type: "InnerProduct"

  bottom: "pool2"

  top: "ip1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 500

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "relu1"

  type: "ReLU"

  bottom: "ip1"

  top: "ip1"

}

layer {

  name: "ip2"

  type: "InnerProduct"

  bottom: "ip1"

  top: "ip2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 10

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "loss"

  type: "SoftmaxWithLoss"

  bottom: "ip2"

  bottom: "label"

  top: "loss"

}

// Next, the LeNet network is constructed layer by layer.

I0223 14:55:42.873805  6406 layer_factory.hpp:77] Creating layer mnist// create the mnist layer, i.e. the data layer

I0223 14:55:42.875214  6406 net.cpp:106] Creating Layer mnist

I0223 14:55:42.875300  6406 net.cpp:411] mnist -> data

I0223 14:55:42.875455  6409 db_lmdb.cpp:38] Opened lmdb examples/mnist/mnist_train_lmdb// the training data is stored in LMDB format

I0223 14:55:42.875653  6406 net.cpp:411] mnist -> label

// The data output shape is 64*1*28*28 (64: number of images, i.e. the batch size; 1: channels; 28: height; 28: width). The data layer has only tops (outputs), no bottoms (inputs).

I0223 14:55:42.875879  6406 data_layer.cpp:41] output data size: 64,1,28,28

I0223 14:55:42.876302  6406 base_data_layer.cpp:69] Initializing prefetch// prefetching loads the next batch in a background thread to improve efficiency

I0223 14:55:42.876423  6406 base_data_layer.cpp:72] Prefetch initialized.

I0223 14:55:42.876461  6406 net.cpp:150] Setting up mnist

I0223 14:55:42.876531  6406 net.cpp:157] Top shape: 64 1 28 28 (50176)//50176=64*1*28*28

I0223 14:55:42.876636  6406 net.cpp:157] Top shape: 64 (64)

I0223 14:55:42.876716  6406 net.cpp:165] Memory required for data: 200960// memory required for the data blobs so far

I0223 14:55:42.876829  6406 layer_factory.hpp:77] Creating layer conv1// create the convolution layer conv1
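The "Memory required for data" counter is the running total of all top blob elements so far, at 4 bytes each (single-precision float). For the mnist layer, the two top blobs reported above account for the full 200960 bytes:

```python
# Memory accounting for the mnist layer's two top blobs (float32 = 4 bytes).
data_count  = 64 * 1 * 28 * 28   # data top: (50176)
label_count = 64                 # label top: (64)
memory = (data_count + label_count) * 4
print(memory)  # 200960, matching the log
```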

I0223 14:55:42.877009  6406 net.cpp:106] Creating Layer conv1

I0223 14:55:42.877122  6406 net.cpp:454] conv1 <- data// conv1 takes the data blob as its bottom, i.e. conv1 directly follows the data layer

I0223 14:55:42.877312  6406 net.cpp:411] conv1 -> conv1

I0223 14:55:42.877905  6406 net.cpp:150] Setting up conv1

// conv1's output shape (20 is the number of channels, equal to the number of kernels); with kernel size 5 and stride 1, the feature map shrinks from 28*28 to 24*24

I0223 14:55:42.877964  6406 net.cpp:157] Top shape: 64 20 24 24 (737280)

I0223 14:55:42.877993  6406 net.cpp:165] Memory required for data: 3150080

I0223 14:55:42.878154  6406 layer_factory.hpp:77] Creating layer pool1// create the pooling layer pool1 (max pooling)

I0223 14:55:42.878242  6406 net.cpp:106] Creating Layer pool1

I0223 14:55:42.878325  6406 net.cpp:454] pool1 <- conv1// pool1 takes conv1's output as its bottom

I0223 14:55:42.878412  6406 net.cpp:411] pool1 -> pool1

I0223 14:55:42.878540  6406 net.cpp:150] Setting up pool1

I0223 14:55:42.878597  6406 net.cpp:157] Top shape: 64 20 12 12 (184320)// after 2*2 pooling, the feature map shrinks from 24*24 to 12*12

I0223 14:55:42.878628  6406 net.cpp:165] Memory required for data: 3887360

I0223 14:55:42.878662  6406 layer_factory.hpp:77] Creating layer conv2// create the convolution layer conv2

I0223 14:55:42.878742  6406 net.cpp:106] Creating Layer conv2

I0223 14:55:42.878779  6406 net.cpp:454] conv2 <- pool1// conv2 takes pool1's output as its bottom

I0223 14:55:42.878862  6406 net.cpp:411] conv2 -> conv2

I0223 14:55:42.882365  6410 data_layer.cpp:102] Prefetch batch: 5 ms.// the next batch is fetched ahead of time
I0223 14:55:42.882421  6410 data_layer.cpp:103]      Read time: 0.791 ms.// time spent reading the batch

I0223 14:55:42.882447  6410 data_layer.cpp:104] Transform time: 3.296 ms.// time spent transforming the batch

I0223 14:55:42.883898  6406 net.cpp:150] Setting up conv2// set up the convolution layer conv2

// conv2's output shape goes from 64*20*12*12 to 64*50*8*8: 50 kernels, kernel size 5, stride 1

I0223 14:55:42.883935  6406 net.cpp:157] Top shape: 64 50 8 8 (204800)

I0223 14:55:42.883962  6406 net.cpp:165] Memory required for data: 4706560

I0223 14:55:42.884052  6406 layer_factory.hpp:77] Creating layer pool2// create the pooling layer pool2

I0223 14:55:42.884110  6406 net.cpp:106] Creating Layer pool2

I0223 14:55:42.884181  6406 net.cpp:454] pool2 <- conv2// pool2 takes conv2's output as its bottom

I0223 14:55:42.884268  6406 net.cpp:411] pool2 -> pool2

I0223 14:55:42.884383  6406 net.cpp:150] Setting up pool2

I0223 14:55:42.884448  6406 net.cpp:157] Top shape: 64 50 4 4 (51200)// after 2*2 pooling, the feature map shrinks from 8*8 to 4*4

I0223 14:55:42.884500  6406 net.cpp:165] Memory required for data: 4911360
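All of the spatial sizes reported above follow from the standard output-size formulas: out = (in - kernel)/stride + 1 for an unpadded convolution, and the same formula for pooling. A short sketch tracing the whole conv/pool stack:

```python
# Trace the spatial size through LeNet's conv/pool stack, as reported in the log.
def conv_out(size, kernel, stride=1):
    # Unpadded convolution: out = (in - kernel) // stride + 1
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # 2x2 max pooling with stride 2
    return (size - kernel) // stride + 1

s = 28
s = conv_out(s, 5)   # conv1: 24
s = pool_out(s)      # pool1: 12
s = conv_out(s, 5)   # conv2: 8
s = pool_out(s)      # pool2: 4
print(s)  # 4, matching "Top shape: 64 50 4 4"
```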

I0223 14:55:42.884553  6406 layer_factory.hpp:77] Creating layer ip1// create the fully connected layer ip1 (InnerProduct)

I0223 14:55:42.884644  6406 net.cpp:106] Creating Layer ip1

I0223 14:55:42.884706  6406 net.cpp:454] ip1 <- pool2// ip1 takes pool2's output as its bottom

I0223 14:55:42.884790  6406 net.cpp:411] ip1 -> ip1

I0223 14:55:42.887511  6410 data_layer.cpp:102] Prefetch batch: 4 ms.

I0223 14:55:42.887596  6410 data_layer.cpp:103]      Read time: 0.887 ms.

I0223 14:55:42.887634  6410 data_layer.cpp:104] Transform time: 2.494 ms.

I0223 14:55:42.892683  6410 data_layer.cpp:102] Prefetch batch: 4 ms.

I0223 14:55:42.892719  6410 data_layer.cpp:103]      Read time: 0.536 ms.

I0223 14:55:42.892736  6410 data_layer.cpp:104] Transform time: 3.193 ms.

I0223 14:55:42.929069  6406 net.cpp:150] Setting up ip1// set up the fully connected layer ip1

I0223 14:55:42.929119  6406 net.cpp:157] Top shape: 64 500 (32000)// the output shape is 64*500

I0223 14:55:42.929129  6406 net.cpp:165] Memory required for data: 5039360

I0223 14:55:42.929193  6406 layer_factory.hpp:77] Creating layer relu1// create the nonlinearity layer relu1

I0223 14:55:42.929252  6406 net.cpp:106] Creating Layer relu1

I0223 14:55:42.929280  6406 net.cpp:454] relu1 <- ip1// relu1 takes ip1's output as its bottom

I0223 14:55:42.929325  6406 net.cpp:397] relu1 -> ip1 (in-place)// the nonlinearity is applied in place, writing back into the ip1 blob

I0223 14:55:42.929373  6406 net.cpp:150] Setting up relu1

I0223 14:55:42.929394  6406 net.cpp:157] Top shape: 64 500 (32000)// the output shape is 64*500

I0223 14:55:42.929409  6406 net.cpp:165] Memory required for data: 5167360
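The `relu1 -> ip1 (in-place)` line means the activation overwrites its input buffer instead of allocating a second blob (even though the memory counter above still adds the top shape to its running total). A minimal sketch of in-place ReLU with NumPy:

```python
import numpy as np

# In-place ReLU: the result is written back into the input buffer,
# so no second activation array is allocated.
x = np.array([-1.5, 0.0, 2.0], dtype=np.float32)
np.maximum(x, 0, out=x)  # x is modified in place
print(x)  # [0. 0. 2.]
```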

I0223 14:55:42.929425  6406 layer_factory.hpp:77] Creating layer ip2// create the fully connected layer ip2

I0223 14:55:42.929464  6406 net.cpp:106] Creating Layer ip2

I0223 14:55:42.929483  6406 net.cpp:454] ip2 <- ip1// ip2 takes ip1's output as its bottom

I0223 14:55:42.929522  6406 net.cpp:411] ip2 -> ip2

I0223 14:55:42.930277  6406 net.cpp:150] Setting up ip2

I0223 14:55:42.930307  6406 net.cpp:157] Top shape: 64 10 (640)// the output shape is 64*10, one score per digit class

I0223 14:55:42.930325  6406 net.cpp:165] Memory required for data: 5169920
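The shapes in the log also determine each layer's learnable parameter count (weights plus biases). As a sketch of that arithmetic (these totals are derived from the logged shapes, not printed by Caffe itself):

```python
# Parameter counts per layer: out_channels * in_channels * kH * kW + biases
# for convolutions, out_features * in_features + biases for InnerProduct.
conv1 = 20 * 1 * 5 * 5 + 20        # 520
conv2 = 50 * 20 * 5 * 5 + 50       # 25050
ip1   = 500 * (50 * 4 * 4) + 500   # 400500: pool2 flattens to 800 inputs
ip2   = 10 * 500 + 10              # 5010
print(conv1 + conv2 + ip1 + ip2)   # 431080 learnable parameters in total
```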

I0223 14:55:42.930368  6406 layer_factory.hpp:77] Creating layer loss

I0223 14:55:42.930424  6406 net.cpp:106] Creating Layer loss// create the loss layer

I0223 14:55:42.930449  6406 net.cpp:454] loss <- ip2// the loss layer takes ip2's output as a bottom

I0223 14:55:42.930486  6406 net.cpp:454] loss <- label// the loss layer also takes the label blob as a bottom

I0223 14:55:42.930532  6406 net.cpp:411] loss -> loss// the loss is computed with softmax followed by cross-entropy

I0223 14:55:42.930603  6406 layer_factory.hpp:77] Creating layer loss

I0223 14:55:42.930709  6406 net.cpp:150] Setting up loss

I0223 14:55:42.930735  6406 net.cpp:157] Top shape: (1)// the loss output is a single scalar

I0223 14:55:42.930747  6406 net.cpp:160]     with loss weight 1// the loss weight is 1

I0223 14:55:42.930768  6406 net.cpp:165] Memory required for data: 5169924

I0223 14:55:42.930788  6406 net.cpp:226] loss needs backward computation.// these layers need gradients computed during backpropagation

I0223 14:55:42.930810  6406 net.cpp:226] ip2 needs backward computation.

I0223 14:55:42.930827  6406 net.cpp:226] relu1 needs backward computation.

I0223 14:55:42.930843  6406 net.cpp:226] ip1 needs backward computation.

I0223 14:55:42.930887  6406 net.cpp:226] pool2 needs backward computation.

I0223 14:55:42.930909  6406 net.cpp:226] conv2 needs backward computation.

I0223 14:55:42.930930  6406 net.cpp:226] pool1 needs backward computation.

I0223 14:55:42.930951  6406 net.cpp:226] conv1 needs backward computation.

I0223 14:55:42.930973  6406 net.cpp:228] mnist does not need backward computation.

I0223 14:55:42.930984  6406 net.cpp:270] This network produces output loss

I0223 14:55:42.931020  6406 net.cpp:283] Network initialization done.

// Next, the test network is constructed and evaluated: 100 test iterations with a batch size of 100, covering the full test set of 100*100 = 10000 images.
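The solver's `test_iter` and the test data layer's `batch_size` together determine how many images each test pass sees; with the values from this log they exactly cover the MNIST test set:

```python
# One test pass runs test_iter batches of batch_size images each.
test_iter, batch_size = 100, 100
print(test_iter * batch_size)  # 10000 = size of the MNIST test set
```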

I0223 14:55:42.931638  6406 solver.cpp:181] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt

I0223 14:55:42.931759  6406 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist

I0223 14:55:42.931941  6406 net.cpp:49] Initializing net from parameters: 

name: "LeNet"

state {

  phase: TEST

}

layer {

  name: "mnist"

  type: "Data"

  top: "data"

  top: "label"

  include {

    phase: TEST

  }

  transform_param {

    scale: 0.00390625

  }

  data_param {

    source: "examples/mnist/mnist_test_lmdb"

    batch_size: 100

    backend: LMDB

  }

}

layer {

  name: "conv1"

  type: "Convolution"

  bottom: "data"

  top: "conv1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 20

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool1"

  type: "Pooling"

  bottom: "conv1"

  top: "pool1"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "conv2"

  type: "Convolution"

  bottom: "pool1"

  top: "conv2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  convolution_param {

    num_output: 50

    kernel_size: 5

    stride: 1

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "pool2"

  type: "Pooling"

  bottom: "conv2"

  top: "pool2"

  pooling_param {

    pool: MAX

    kernel_size: 2

    stride: 2

  }

}

layer {

  name: "ip1"

  type: "InnerProduct"

  bottom: "pool2"

  top: "ip1"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 500

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "relu1"

  type: "ReLU"

  bottom: "ip1"

  top: "ip1"

}

layer {

  name: "ip2"

  type: "InnerProduct"

  bottom: "ip1"

  top: "ip2"

  param {

    lr_mult: 1

  }

  param {

    lr_mult: 2

  }

  inner_product_param {

    num_output: 10

    weight_filler {

      type: "xavier"

    }

    bias_filler {

      type: "constant"

    }

  }

}

layer {

  name: "accuracy"

  type: "Accuracy"

  bottom: "ip2"

  bottom: "label"

  top: "accuracy"

  include {

    phase: TEST

  }

}

layer {

  name: "loss"

  type: "SoftmaxWithLoss"

  bottom: "ip2"

  bottom: "label"

  top: "loss"

}

I0223 14:55:42.932307  6406 layer_factory.hpp:77] Creating layer mnist

I0223 14:55:42.934370  6406 net.cpp:106] Creating Layer mnist

I0223 14:55:42.934464  6406 net.cpp:411] mnist -> data

I0223 14:55:42.934545  6411 db_lmdb.cpp:38] Opened lmdb examples/mnist/mnist_test_lmdb

I0223 14:55:42.934552  6406 net.cpp:411] mnist -> label

I0223 14:55:42.934733  6406 data_layer.cpp:41] output data size: 100,1,28,28

I0223 14:55:42.935247  6406 base_data_layer.cpp:69] Initializing prefetch

I0223 14:55:42.935431  6406 base_data_layer.cpp:72] Prefetch initialized.

I0223 14:55:42.935503  6406 net.cpp:150] Setting up mnist

I0223 14:55:42.935598  6406 net.cpp:157] Top shape: 100 1 28 28 (78400)

I0223 14:55:42.935681  6406 net.cpp:157] Top shape: 100 (100)

I0223 14:55:42.935739  6406 net.cpp:165] Memory required for data: 314000

I0223 14:55:42.935811  6406 layer_factory.hpp:77] Creating layer label_mnist_1_split// label feeds both the accuracy and loss layers, so Caffe inserts an automatic split layer

I0223 14:55:42.935909  6406 net.cpp:106] Creating Layer label_mnist_1_split

I0223 14:55:42.935940  6406 net.cpp:454] label_mnist_1_split <- label

I0223 14:55:42.935992  6406 net.cpp:411] label_mnist_1_split -> label_mnist_1_split_0

I0223 14:55:42.936048  6406 net.cpp:411] label_mnist_1_split -> label_mnist_1_split_1

I0223 14:55:42.936138  6406 net.cpp:150] Setting up label_mnist_1_split

I0223 14:55:42.936182  6406 net.cpp:157] Top shape: 100 (100)

I0223 14:55:42.936202  6406 net.cpp:157] Top shape: 100 (100)

I0223 14:55:42.936223  6406 net.cpp:165] Memory required for data: 314800

I0223 14:55:42.936246  6406 layer_factory.hpp:77] Creating layer conv1

I0223 14:55:42.936327  6406 net.cpp:106] Creating Layer conv1

I0223 14:55:42.936350  6406 net.cpp:454] conv1 <- data

I0223 14:55:42.936396  6406 net.cpp:411] conv1 -> conv1

I0223 14:55:42.936653  6406 net.cpp:150] Setting up conv1

I0223 14:55:42.936681  6406 net.cpp:157] Top shape: 100 20 24 24 (1152000)

I0223 14:55:42.936693  6406 net.cpp:165] Memory required for data: 4922800

I0223 14:55:42.936758  6406 layer_factory.hpp:77] Creating layer pool1

I0223 14:55:42.936802  6406 net.cpp:106] Creating Layer pool1

I0223 14:55:42.936823  6406 net.cpp:454] pool1 <- conv1

I0223 14:55:42.936872  6406 net.cpp:411] pool1 -> pool1

I0223 14:55:42.936938  6406 net.cpp:150] Setting up pool1

I0223 14:55:42.936962  6406 net.cpp:157] Top shape: 100 20 12 12 (288000)

I0223 14:55:42.936975  6406 net.cpp:165] Memory required for data: 6074800

I0223 14:55:42.936990  6406 layer_factory.hpp:77] Creating layer conv2

I0223 14:55:42.937044  6406 net.cpp:106] Creating Layer conv2

I0223 14:55:42.937064  6406 net.cpp:454] conv2 <- pool1

I0223 14:55:42.937110  6406 net.cpp:411] conv2 -> conv2

I0223 14:55:42.940639  6406 net.cpp:150] Setting up conv2

I0223 14:55:42.940696  6406 net.cpp:157] Top shape: 100 50 8 8 (320000)

I0223 14:55:42.940711  6406 net.cpp:165] Memory required for data: 7354800

I0223 14:55:42.940803  6406 layer_factory.hpp:77] Creating layer pool2

I0223 14:55:42.940876  6406 net.cpp:106] Creating Layer pool2

I0223 14:55:42.940910  6406 net.cpp:454] pool2 <- conv2

I0223 14:55:42.940970  6406 net.cpp:411] pool2 -> pool2

I0223 14:55:42.941061  6406 net.cpp:150] Setting up pool2

I0223 14:55:42.941093  6406 net.cpp:157] Top shape: 100 50 4 4 (80000)

I0223 14:55:42.941108  6406 net.cpp:165] Memory required for data: 7674800

I0223 14:55:42.941131  6406 layer_factory.hpp:77] Creating layer ip1

I0223 14:55:42.941184  6406 net.cpp:106] Creating Layer ip1

I0223 14:55:42.941210  6406 net.cpp:454] ip1 <- pool2

I0223 14:55:42.941262  6406 net.cpp:411] ip1 -> ip1

……

// report the test accuracy and loss

I0223 14:55:59.623518  6406 solver.cpp:408]     Test net output #0: accuracy = 0.1309

I0223 14:55:59.623611  6406 solver.cpp:408]     Test net output #1: loss = 2.31399 (* 1 = 2.31399 loss)

I0223 14:55:59.623705  6406 base_data_layer.cpp:115] Prefetch copied

I0223 14:55:59.625409  6410 data_layer.cpp:102] Prefetch batch: 1 ms.

I0223 14:55:59.625458  6410 data_layer.cpp:103]      Read time: 0.188 ms.

I0223 14:55:59.625468  6410 data_layer.cpp:104] Transform time: 1.022 ms.

I0223 14:55:59.874915  6406 solver.cpp:229] Iteration 0, loss = 2.30499

I0223 14:55:59.874987  6406 solver.cpp:245]     Train net output #0: loss = 2.30499 (* 1 = 2.30499 loss)

I0223 14:55:59.875026  6406 sgd_solver.cpp:106] Iteration 0, lr = 0.01
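The initial training loss of 2.30499 is a useful sanity check: before any training, the softmax outputs are roughly uniform over the 10 digit classes, so the expected cross-entropy loss is about -ln(1/10) = ln(10):

```python
import math

# Expected cross-entropy loss of an untrained 10-class softmax classifier.
print(math.log(10))  # ≈ 2.302585, close to the logged 2.30499
```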

I0223 14:55:59.879196  6406 base_data_layer.cpp:115] Prefetch copied

I0223 14:55:59.881901  6410 data_layer.cpp:102] Prefetch batch: 2 ms.

I0223 14:55:59.881934  6410 data_layer.cpp:103]      Read time: 0.413 ms.

I0223 14:55:59.881945  6410 data_layer.cpp:104] Transform time: 1.312 ms.

I0223 14:56:00.149636  6406 base_data_layer.cpp:115] Prefetch copied

I0223 14:56:00.151485  6410 data_layer.cpp:102] Prefetch batch: 1 ms.

I0223 14:56:00.151515  6410 data_layer.cpp:103]      Read time: 0.195 ms.

I0223 14:56:00.151525  6410 data_layer.cpp:104] Transform time: 1.073 ms.

I0223 14:56:00.400404  6406 solver.cpp:458] Snapshotting to binary proto file examples/mnist/lenet_iter_3.caffemodel// snapshot the trained model weights

I0223 14:56:00.400461  6406 net.cpp:918] Serializing 9 layers

// snapshot the current solver state

I0223 14:56:00.434458  6406 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_3.solverstate

I0223 14:56:00.455166  6406 solver.cpp:323] Optimization Done.

I0223 14:56:00.455202  6406 caffe.cpp:222] Optimization Done.// training finished

