Caffe Source Code: Net::Init() and NetParameter Explained

The main job of Net is to take the prototxt network definition we write and, from it, build the connections between the layers and initialize them.

This post first introduces the underlying data structure, NetParameter, and then walks through the source code. For the protobuf background, see the earlier post on that topic.

message NetParameter {
  optional string name = 1; // consider giving the network a name
  // DEPRECATED. See InputParameter. The input blobs to the network.
  repeated string input = 3;
  // DEPRECATED. See InputParameter. The shape of the input blobs (a nested BlobShape message).
  repeated BlobShape input_shape = 8;

  // 4D input dimensions -- deprecated.  Use "input_shape" instead.
  // If specified, for each input blob there should be four
  // values specifying the num, channels, height and width of the input blob.
  // Thus, there should be a total of (4 * #input) numbers.
  repeated int32 input_dim = 4;

  // Whether the network will force every layer to carry out backward operation.
  // If set False, then whether to carry out backward is determined
  // automatically according to the net structure and learning rates.
  optional bool force_backward = 5 [default = false];
  // The current "state" of the network, including the phase, level, and stage.
  // Some layers may be included/excluded depending on this state and the states
  // specified in the layers' include and exclude fields. (Nested NetState message.)
  optional NetState state = 6;

  // Print debugging information about results while running Net::Forward,
  // Net::Backward, and Net::Update.
  optional bool debug_info = 7 [default = false];

  // The layers that make up the net.  Each of their configurations, including
  // connectivity and behavior, is specified as a LayerParameter.
  repeated LayerParameter layer = 100;  // ID 100 so layers are printed last.

  // DEPRECATED: use 'layer' instead. The old-style layer definition, superseded by the
  // new 'layer' field; it is presumably kept only for compatibility with old models.
  repeated V1LayerParameter layers = 2;
}
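
A quick way to get familiar with NetParameter is to load a prototxt through the protobuf-generated C++ API and inspect it. Below is a minimal sketch (the file name "train_val.prototxt" is only a placeholder); ReadNetParamsFromTextFileOrDie comes from caffe/util/upgrade_proto.hpp and is the same helper the Net constructor uses later in this post:

#include <iostream>
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/upgrade_proto.hpp"

int main() {
  caffe::NetParameter param;
  // Parse the text-format prototxt into the NetParameter message.
  caffe::ReadNetParamsFromTextFileOrDie("train_val.prototxt", &param);
  std::cout << "net name: " << param.name()
            << ", layers: " << param.layer_size() << std::endl;
  for (int i = 0; i < param.layer_size(); ++i) {
    const caffe::LayerParameter& lp = param.layer(i);
    std::cout << "  " << lp.name() << " (" << lp.type() << ")" << std::endl;
  }
  return 0;
}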


Network initialization is triggered by the solver; the call chain is:

Solver() constructor  ->  Init(param)  ->  InitTrainNet()  ->  net_.reset(new Net(net_param))
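
The same initialization can also be triggered without a Solver, by using the Net constructor that takes a prototxt path, phase, level, and stages (the second constructor in the source below). A minimal sketch, with a placeholder file name:

#include <string>
#include <vector>
#include "caffe/caffe.hpp"

int main() {
  std::vector<std::string> stages;  // optional NetState stages; left empty here
  // Constructing the Net reads the prototxt, sets the NetState, and runs Init().
  caffe::Net<float> net("train_val.prototxt", caffe::TRAIN, /*level=*/0, &stages);
  net.Forward();  // one forward pass, once initialization is done
  return 0;
}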


The network initialization process consists mainly of the following parts:

1. Preprocessing the network definition:

FilterNet(in_param, &filtered_param): converts the protobuf network definition into the structure the network actually runs with in its current state, based on the NetState and the layers' include/exclude rules.

InsertSplits(filtered_param, &param): when one bottom blob is shared by several layers, a SplitLayer is inserted to fan the blob out into separate copies. The main reason is that the gradients those layers propagate back into the blob need to be accumulated.



2. Adding bottom blobs, top blobs, and param blobs layer by layer:

Net<Dtype>::AppendBottom():

Creates the bottom blobs for each layer. Since the current layer's input blobs are the previous layer's output blobs, this function does not actually create any blob; it only pushes the pointer to the previous layer's top blob into bottom_vecs_ (see the sketch after this list).

Net<Dtype>::AppendTop():

Creates the top blobs for each layer. This is the function that actually allocates the Blob objects, and it pushes their pointers into top_vecs_.

SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]):

Allocates data memory for the layer's parameter blobs and, if necessary, reshapes the layer's bottom and top blobs.

Net<Dtype>::AppendParam():

Updates the parameter-related bookkeeping; the actual parameter blobs were already created in the SetUp() call mentioned above. For example, it pushes the pointers to the layer's parameter blobs into params_.
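
The blob-pointer sharing described for AppendBottom()/AppendTop() can be illustrated with a small standalone sketch. The container names below mimic the Net members but are plain local variables; this is a conceptual illustration, not the actual Net code:

#include <cassert>
#include <memory>
#include <vector>
#include "caffe/blob.hpp"

int main() {
  // blobs_ owns every blob; top_vecs_ / bottom_vecs_ only hold raw pointers into it.
  std::vector<std::shared_ptr<caffe::Blob<float> > > blobs_;
  std::vector<std::vector<caffe::Blob<float>*> > top_vecs_(2), bottom_vecs_(2);

  // AppendTop-like step for layer 0: a new Blob object is really allocated.
  blobs_.push_back(std::make_shared<caffe::Blob<float> >());
  top_vecs_[0].push_back(blobs_.back().get());

  // AppendBottom-like step for layer 1: nothing is allocated, the pointer is reused.
  bottom_vecs_[1].push_back(blobs_.back().get());

  // Layer 1's bottom blob is the very same object as layer 0's top blob.
  assert(bottom_vecs_[1][0] == top_vecs_[0][0]);
  return 0;
}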

Below is the source code for Net::Init():

#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

#include "hdf5.h"

#include "caffe/common.hpp"
#include "caffe/layer.hpp"
#include "caffe/net.hpp"
#include "caffe/parallel.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/hdf5.hpp"
#include "caffe/util/insert_splits.hpp"
#include "caffe/util/math_functions.hpp"
#include "caffe/util/upgrade_proto.hpp"

namespace caffe {
// This constructor receives the parsed protobuf network parameters and calls Init() to build the net.
template <typename Dtype>
Net<Dtype>::Net(const NetParameter& param) {
  Init(param);
}
// For an explanation of the NetState parameters, see the earlier post.
template <typename Dtype>
Net<Dtype>::Net(const string& param_file, Phase phase,
    const int level, const vector<string>* stages) {
  NetParameter param;
  // Read the network parameter settings from the prototxt file.
  ReadNetParamsFromTextFileOrDie(param_file, &param);
  // Store the network state in the NetState message nested inside NetParameter,
  // using the setter functions generated by protobuf.
  param.mutable_state()->set_phase(phase);
  if (stages != NULL) {
    for (int i = 0; i < stages->size(); i++) {
      param.mutable_state()->add_stage((*stages)[i]);
    }
  }
  param.mutable_state()->set_level(level);
  Init(param);
}

template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
  // Set phase from the state: use the phase stored in the incoming NetParameter to set the member variable phase_.
  phase_ = in_param.state().phase();
  // Filter layers based on their include/exclude rules and
  // the current NetState.
  // Filter the incoming NetParameter according to its state and the include/exclude rules,
  // producing the network parameters for the current network state.
  NetParameter filtered_param;
  FilterNet(in_param, &filtered_param);
  LOG_IF(INFO, Caffe::root_solver())
      << "Initializing net from parameters: " << std::endl
      << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  // The InsertSplits function comes from caffe/util/insert_splits.hpp; its own comment reads:
  // Copy NetParameters with SplitLayers added to replace any shared bottom
  // blobs with unique bottom blobs provided by the SplitLayer.
  // I.e., wherever a bottom blob is shared by several layers, insert a SplitLayer so that
  // each consumer gets its own unique bottom blob.
  NetParameter param;
  InsertSplits(filtered_param, &param);
  // Basically, build all the layers and set up their connections.
  // The code from here on builds all the layers and the connections between them.
  name_ = param.name();
  // A map from blob name (string) to the blob's index.
  map<string, int> blob_name_to_idx;
  // A set holding the names of all blobs currently available as inputs.
  set<string> available_blobs;
  memory_used_ = 0;
  // For each layer, set up its input and output
  // Next, set up each layer's inputs and outputs.
  // All of the per-layer containers below are resized to the number of layers in the net.
  bottom_vecs_.resize(param.layer_size());   // pointers to each layer's bottom blobs
  top_vecs_.resize(param.layer_size());      // pointers to each layer's top blobs
  bottom_id_vecs_.resize(param.layer_size()); // ids of each layer's bottom blobs
  param_id_vecs_.resize(param.layer_size()); // ids of each layer's param blobs
  top_id_vecs_.resize(param.layer_size());   // ids of each layer's top blobs
  bottom_need_backward_.resize(param.layer_size()); // whether each bottom blob needs backward
  // Fill in the vectors above layer by layer.
  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {
    // Inherit phase from net if unset.
    // If the layer's phase is not set, inherit it from the net; any layer that survived FilterNet is compatible with this phase.
    if (!param.layer(layer_id).has_phase()) {
      param.mutable_layer(layer_id)->set_phase(phase_);
    }
    // Setup layer.
    // Fetch this layer's parameters.
    const LayerParameter& layer_param = param.layer(layer_id);
    // Check that the number of propagate_down flags, if any are given, matches bottom_size;
    // if none are given the size is 0.
    // propagate_down: a bool vector as long as the bottom blobs; each entry says whether the
    // error gradient is propagated down to the corresponding bottom blob.
    if (layer_param.propagate_down_size() > 0) {
      CHECK_EQ(layer_param.propagate_down_size(),
          layer_param.bottom_size())
          << "propagate_down param must be specified "
          << "either 0 or bottom_size times ";
    }
    // layers_ is a vector of shared_ptr. Create (register) the layer from its parameters via the layer registry and push the returned shared_ptr into layers_.
    layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));
    // layer_names_ is a vector of strings; store the layer name in it.
    layer_names_.push_back(layer_param.name());
    // This produces the "Creating Layer <name>" line frequently seen in the log.
    LOG_IF(INFO, Caffe::root_solver())
        << "Creating Layer " << layer_param.name();
    bool need_backward = false;

    // Figure out this layer's input and output
    // For each bottom blob, fill in bottom_vecs_, bottom_id_vecs_, available_blobs, and bottom_need_backward_.
    // blob_name_to_idx has already been populated when upstream layers' top blobs were appended
    // (blob_name_to_idx[blob_name] = blob_id).
    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();
         ++bottom_id) {
      // This call sets up the layer's bottom blob. Because the network is a stack of layers,
      // the current layer's input (bottom) blob is the previous layer's output (top) blob,
      // so no blob is actually created here; the pointer to the previous layer's top blob
      // is simply pushed into bottom_vecs_.
      const int blob_id = AppendBottom(param, layer_id, bottom_id,
                                       &available_blobs, &blob_name_to_idx);
      // If a blob needs backward, this layer should provide it.
      // If any of this layer's bottom blobs needs backward, the layer itself needs backward.
      need_backward |= blob_need_backward_[blob_id];
    }
    // For each top blob, fill in top_vecs_, top_id_vecs_, available_blobs, and blob_need_backward_,
    // and store the newly created top blobs in blobs_ and blob_names_.
    // blob_name_to_idx records each new blob's index (blob_name_to_idx[blob_name] = blob_id).
    int num_top = layer_param.top_size();
    for (int top_id = 0; top_id < num_top; ++top_id) {
      // This call creates the layer's top blob; it really news a Blob object and pushes the top blob's pointer into top_vecs_.
      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);
      // Collect Input layer tops as Net inputs.
      if (layer_param.type() == "Input") {
        const int blob_id = blobs_.size() - 1;
        net_input_blob_indices_.push_back(blob_id);
        net_input_blobs_.push_back(blobs_[blob_id].get());
      }
    }
    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter
    // specified fewer than the required number (as specified by
    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.
    // If the layer auto-generates top blobs (see the earlier post on Layer for details),
    // create anonymous top blobs here to satisfy ExactNumTopBlobs() / MinTopBlobs().
    Layer<Dtype>* layer = layers_[layer_id].get();
    if (layer->AutoTopBlobs()) {
      const int needed_num_top =
          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());
      for (; num_top < needed_num_top; ++num_top) {
        // Add "anonymous" top blobs -- do not modify available_blobs or
        // blob_name_to_idx as we don't want these blobs to be usable as input
        // to other layers.
        AppendTop(param, layer_id, num_top, NULL, NULL);
      }
    }
    // After this layer is connected, set it up.
    // The layer and its bottom/top blobs were created above; now that the layer is connected,
    // call its SetUp() with the bottom and top blob pointers.
    // SetUp() allocates data memory for the layer's parameter blobs and, if necessary,
    // reshapes the layer's bottom and top blobs.
    layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);
    LOG_IF(INFO, Caffe::root_solver())
        << "Setting up " << layer_names_[layer_id];
    // blob_loss_weights_ needs one entry per top blob id.
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {
        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));
      }
      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);
      // The familiar "Top shape:" log line; see the earlier post on Blob for shape_string().
      LOG_IF(INFO, Caffe::root_solver())
          << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();
      if (layer->loss(top_id)) {
        LOG_IF(INFO, Caffe::root_solver())
            << "    with loss weight " << layer->loss(top_id);
      }
      // Use Blob::count() to accumulate the memory occupied.
      memory_used_ += top_vecs_[layer_id][top_id]->count();
    }
    // The familiar "Memory required for data:" log line.
    LOG_IF(INFO, Caffe::root_solver())
        << "Memory required for data: " << memory_used_ * sizeof(Dtype);

    const int param_size = layer_param.param_size();
    // Number of parameter blobs in the layer, i.e. how many weight blobs the layer owns (each
    // blob holds one group of parameters); e.g. convolution and InnerProduct layers each have two.
    const int num_param_blobs = layers_[layer_id]->blobs().size();
    // param_size is the number of ParamSpec entries in the LayerParameter layer_param;
    // num_param_blobs is the number of learnable parameter blobs in the layer.
    // We require param_size <= num_param_blobs.
    CHECK_LE(param_size, num_param_blobs)
        << "Too many params specified for layer " << layer_param.name();
    ParamSpec default_param_spec;
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      const ParamSpec* param_spec = (param_id < param_size) ?
          &layer_param.param(param_id) : &default_param_spec;
      // A non-zero learning-rate multiplier means this parameter needs backward.
      const bool param_need_backward = param_spec->lr_mult() != 0;
      need_backward |= param_need_backward;
      layers_[layer_id]->set_param_propagate_down(param_id,
                                                  param_need_backward);
    }
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      // Register the layer's parameter blobs with the net; only layers that have parameters
      // add anything here (e.g. convolution and fully connected layers have weights and biases).
      // AppendParam() mainly updates the parameter-related bookkeeping; the actual parameter
      // blobs were created in the SetUp() call above. For example, it pushes the pointers to
      // the layer's parameter blobs into params_.
      AppendParam(param, layer_id, param_id);
    }
    // Finally, set the backward flag
    // Finally, record the backward flag for this layer.
    layer_need_backward_.push_back(need_backward);
    if (need_backward) {
      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {
        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;
      }
    }
  }
  /* At this point all layers have been created and set up; the part below walks the net backwards to correct the backward settings. */
  // Go through the net backwards to determine which blobs contribute to the
  // loss.  We can skip backward computation for blobs that don't contribute
  // to the loss.
  // Also checks if all bottom blobs don't need backward computation (possible
  // because the skip_propagate_down param) and so we can skip backward
  // computation for the entire layer.
  // The flags above were set in forward order; the loop below walks the layers in reverse to
  // correct them: backward computation can be skipped for layers that do not contribute to
  // the loss, and we also check whether all of a layer's bottom blobs can skip backward.
  // Two sets therefore store the names of blobs that do / do not need backward.
  set<string> blobs_under_loss;
  set<string> blobs_skip_backp;
  // iterate over the layers in reverse order
  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {
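    // layer_contributes_loss: at least one of this layer's top blobs produces or feeds into the loss.
    // layer_skip_propagate_down: every top blob of this layer has been marked (by the layers
    // above) as not needing gradients, so the whole layer may skip propagating down.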
    bool layer_contributes_loss = false;
    bool layer_skip_propagate_down = true;
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];
      if (layers_[layer_id]->loss(top_id) ||
          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {
        layer_contributes_loss = true;
      }
      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {
        layer_skip_propagate_down = false;
      }
      if (layer_contributes_loss && !layer_skip_propagate_down)
        break;
    }
    // If this layer can skip backward computation, then none of its bottom blobs
    // needs backpropagation either.
    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {
      layer_need_backward_[layer_id] = false;
      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
               ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
    }
    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }
    if (Caffe::root_solver()) {
      if (layer_need_backward_[layer_id]) {
        LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";
      } else {
        LOG(INFO) << layer_names_[layer_id]
            << " does not need backward computation.";
      }
    }
    // Correct the backward-propagation requirements that were set during the forward pass over the layers.
    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
         ++bottom_id) {
      if (layer_contributes_loss) {
        const string& blob_name =
            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_under_loss.insert(blob_name);
      } else {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
      if (!bottom_need_backward_[layer_id][bottom_id]) {
        const string& blob_name =
                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_skip_backp.insert(blob_name);
      }
    }
  }
  // Handle force_backward if needed.
  if (param.force_backward()) {
    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {
      layer_need_backward_[layer_id] = true;
      for (int bottom_id = 0;
           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] =
            bottom_need_backward_[layer_id][bottom_id] ||
            layers_[layer_id]->AllowForceBackward(bottom_id);
        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||
            bottom_need_backward_[layer_id][bottom_id];
      }
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();
           ++param_id) {
        layers_[layer_id]->set_param_propagate_down(param_id, true);
      }
    }
  }
  // In the end, all remaining blobs are considered output blobs.
  // I.e., every blob still left in available_blobs (never consumed as a bottom) is treated as a network output.
  for (set<string>::iterator it = available_blobs.begin();
      it != available_blobs.end(); ++it) {
    LOG_IF(INFO, Caffe::root_solver())
        << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
    net_output_blob_indices_.push_back(blob_name_to_idx[*it]);
  }
  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {
    blob_names_index_[blob_names_[blob_id]] = blob_id;
  }
  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {
    layer_names_index_[layer_names_[layer_id]] = layer_id;
  }
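  // ShareWeights() handles parameter sharing across layers: parameter blobs declared with the
  // same ParamSpec name end up sharing the owner blob's data and diff.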
  ShareWeights();
  debug_info_ = param.debug_info();
  LOG_IF(INFO, Caffe::root_solver()) << "Network initialization done.";
}
// Filter out layers according to the current network state and the include/exclude rules.
template <typename Dtype>
void Net<Dtype>::FilterNet(const NetParameter& param,
    NetParameter* param_filtered) {
  NetState net_state(param.state());
  // Use the protobuf-generated API to copy the input network parameters param into param_filtered.
  param_filtered->CopyFrom(param);
  // Use the protobuf-generated API to clear the layer field of param_filtered.
  param_filtered->clear_layer();
  // For each layer, check whether it has include/exclude rules and, if so, apply them; see the earlier post on stage/level for details.
  for (int i = 0; i < param.layer_size(); ++i) {
    const LayerParameter& layer_param = param.layer(i);
    const string& layer_name = layer_param.name();
    // A layer may specify include rules or exclude rules, but not both.
    CHECK(layer_param.include_size() == 0 || layer_param.exclude_size() == 0)
          << "Specify either include rules or exclude rules; not both.";
    // If no include rules are specified, the layer is included by default and
    // only excluded if it meets one of the exclude rules.
    // If rules are specified, apply them.
    bool layer_included = (layer_param.include_size() == 0);
    for (int j = 0; layer_included && j < layer_param.exclude_size(); ++j) {
      if (StateMeetsRule(net_state, layer_param.exclude(j), layer_name)) {
        layer_included = false;
      }
    }
    for (int j = 0; !layer_included && j < layer_param.include_size(); ++j) {
      if (StateMeetsRule(net_state, layer_param.include(j), layer_name)) {
        layer_included = true;
      }
    }
    if (layer_included) {
      param_filtered->add_layer()->CopyFrom(layer_param);
    }
  }
}
// Check whether the net's state (phase, level, stage) meets the rule; the returned bool decides whether the layer is included.
template <typename Dtype>
bool Net<Dtype>::StateMeetsRule(const NetState& state,
    const NetStateRule& rule, const string& layer_name) {
  // Check whether the rule is broken due to phase.
  if (rule.has_phase()) {
      if (rule.phase() != state.phase()) {
        LOG_IF(INFO, Caffe::root_solver())
            << "The NetState phase (" << state.phase()
            << ") differed from the phase (" << rule.phase()
            << ") specified by a rule in layer " << layer_name;
        return false;
      }
  }
  // Check whether the rule is broken due to min level.
  if (rule.has_min_level()) {
    if (state.level() < rule.min_level()) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState level (" << state.level()
          << ") is above the min_level (" << rule.min_level()
          << ") specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to max level.
  if (rule.has_max_level()) {
    if (state.level() > rule.max_level()) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState level (" << state.level()
          << ") is above the max_level (" << rule.max_level()
          << ") specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to stage. The NetState must
  // contain ALL of the rule's stages to meet it.
  for (int i = 0; i < rule.stage_size(); ++i) {
    // Check that the NetState contains the rule's ith stage.
    bool has_stage = false;
    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {
      if (rule.stage(i) == state.stage(j)) { has_stage = true; }
    }
    if (!has_stage) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState did not contain stage '" << rule.stage(i)
          << "' specified by a rule in layer " << layer_name;
      return false;
    }
  }
  // Check whether the rule is broken due to not_stage. The NetState must
  // contain NONE of the rule's not_stages to meet it.
  for (int i = 0; i < rule.not_stage_size(); ++i) {
    // Check that the NetState contains the rule's ith not_stage.
    bool has_stage = false;
    for (int j = 0; !has_stage && j < state.stage_size(); ++j) {
      if (rule.not_stage(i) == state.stage(j)) { has_stage = true; }
    }
    if (has_stage) {
      LOG_IF(INFO, Caffe::root_solver())
          << "The NetState contained a not_stage '" << rule.not_stage(i)
          << "' specified by a rule in layer " << layer_name;
      return false;
    }
  }
  return true;
}
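
To see FilterNet() and StateMeetsRule() in action, one can build a small NetParameter in code and filter it. Below is a minimal sketch, assuming the two functions are the public static helpers declared in caffe/net.hpp; the layer names and types are made up for illustration:

#include <iostream>
#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"

int main() {
  caffe::NetParameter param;
  param.set_name("toy");
  param.mutable_state()->set_phase(caffe::TEST);  // the current network state

  // A layer that is only included in the TRAIN phase.
  caffe::LayerParameter* train_only = param.add_layer();
  train_only->set_name("train_data");
  train_only->set_type("Input");
  train_only->add_include()->set_phase(caffe::TRAIN);

  // A layer with no include/exclude rules: included by default.
  caffe::LayerParameter* fc = param.add_layer();
  fc->set_name("fc1");
  fc->set_type("InnerProduct");

  caffe::NetParameter filtered;
  caffe::Net<float>::FilterNet(param, &filtered);
  // Under TEST, only "fc1" should survive the filtering.
  std::cout << "layers left: " << filtered.layer_size() << std::endl;
  return 0;
}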
