Detailed Annotations of Caffe Code


Detailed annotation of Caffe's Net::Init() function

 

In Caffe, the net initialization function Net::Init() is the key function that builds the whole network. This post walks through it in detail.

 

I. Overview of the code

The Init() function mainly involves the following function calls:

1. FilterNet(in_param, &filtered_param);

This function removes from the model definition file (*.prototxt) the layers that do not match the current NetState rules. For example, in the LeNet network under Caffe's examples/mnist, if the net is only used for forward inference, the data layer restricted to the TRAIN phase has to be removed. For instance:

layer {

  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}  # In a TEST-phase run, this layer is filtered out by FilterNet()
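
Below is a minimal, self-contained C++ sketch of the idea behind this filtering (it is not Caffe's actual FilterNet()/StateMeetsRule() implementation; the struct names are stand-ins, and only the phase field of a rule is modeled):

#include <string>
#include <vector>

enum Phase { TRAIN, TEST };

struct Rule { Phase phase; };          // stands in for caffe::NetStateRule
struct LayerDef {                      // stands in for caffe::LayerParameter
  std::string name;
  std::vector<Rule> include;           // empty means "always include"
};

bool LayerIncluded(const LayerDef& layer, Phase current) {
  if (layer.include.empty()) return true;     // no rule: keep the layer
  for (const Rule& r : layer.include) {
    if (r.phase == current) return true;      // any matching include rule keeps it
  }
  return false;                                // include rules exist but none matched
}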


2. InsertSplits(filtered_param, &param);

This function handles the case where one output blob of a lower layer is consumed by several higher layers: a split layer is inserted, forming a new network. The main reason for doing this is that the gradients back-propagated to that blob from the several consuming layers have to be accumulated.

For example, in the LeNet network the data layer's top blob label is consumed by two layers, the accuracy layer and the loss layer, so an extra layer has to be inserted after the data layer:


A new layer, label_mnist_1_split, is inserted above the data layer, and two top blobs are created for it: label_mnist_1_split_0 and label_mnist_1_split_1.
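Conceptually, the layer generated by InsertSplits() (it exists only in the in-memory NetParameter, not in the user's prototxt; the names below follow Caffe's split naming convention) looks roughly like this:

layer {
  name: "label_mnist_1_split"
  type: "Split"
  bottom: "label"
  top: "label_mnist_1_split_0"
  top: "label_mnist_1_split_1"
}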

3. layers_.push_back();

This line converts the current layer's parameters into a shared_ptr<Layer<Dtype>>, i.e. it creates a concrete layer object and pushes it into layers_.
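The creation itself is done by LayerRegistry<Dtype>::CreateLayer(layer_param), which looks the layer's type string up in a registry of creator functions (see layer_factory.hpp in Caffe). Below is a minimal, self-contained sketch of that factory pattern; it is not Caffe's actual code, and the names are illustrative:

#include <functional>
#include <map>
#include <memory>
#include <string>

struct Layer { virtual ~Layer() {} };            // stands in for Layer<Dtype>
struct ConvolutionLayer : Layer {};

using Creator = std::function<std::shared_ptr<Layer>()>;

std::map<std::string, Creator>& Registry() {     // type string -> creator function
  static std::map<std::string, Creator> r;
  return r;
}

std::shared_ptr<Layer> CreateLayer(const std::string& type) {
  return Registry().at(type)();                  // throws if the type is unknown
}

int main() {
  Registry()["Convolution"] = [] { return std::make_shared<ConvolutionLayer>(); };
  std::shared_ptr<Layer> layer = CreateLayer("Convolution");
}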

4. AppendBottom();

This function sets up the layer's bottom blobs. Because the net is built by stacking layers, the current layer's input bottom blob is simply the previous layer's output top blob, so this function does not actually create a new blob; it only pushes the pointer to the existing blob into bottom_vecs_ (see the combined sketch after item 5).

5. AppendTop();

This function creates the layer's top blobs: it really does new a Blob object, and pushes the pointer to the top blob into top_vecs_.
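Here is a minimal, self-contained sketch of the bookkeeping that AppendTop() and AppendBottom() perform (illustrative only: it mimics blobs_, blob_name_to_idx, top_vecs_ and bottom_vecs_, and ignores in-place layers, duplicate names, and error checks):

#include <map>
#include <memory>
#include <string>
#include <vector>

struct Blob {};                                      // stands in for Blob<Dtype>

std::vector<std::shared_ptr<Blob> > blobs_;          // all blobs in the net
std::map<std::string, int> blob_name_to_idx;         // blob name -> blob id
std::vector<std::vector<Blob*> > top_vecs_, bottom_vecs_;

// AppendTop: really allocates a new Blob and records its pointer and id.
void AppendTop(int layer_id, const std::string& blob_name) {
  std::shared_ptr<Blob> blob(new Blob());
  blob_name_to_idx[blob_name] = blobs_.size();
  blobs_.push_back(blob);
  top_vecs_[layer_id].push_back(blob.get());
}

// AppendBottom: does NOT allocate; it reuses the producing layer's top blob.
void AppendBottom(int layer_id, const std::string& blob_name) {
  const int blob_id = blob_name_to_idx.at(blob_name);
  bottom_vecs_[layer_id].push_back(blobs_[blob_id].get());
}

int main() {
  top_vecs_.resize(2);
  bottom_vecs_.resize(2);
  AppendTop(0, "data");      // layer 0 produces blob "data"
  AppendBottom(1, "data");   // layer 1 consumes the same Blob pointer
}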

6. layers_[layer_id]->SetUp();

The previous steps created the concrete layer and its input bottom blobs and output top blobs. This line sets the layer up: SetUp() allocates memory for the created blobs and, if necessary, reshapes the layer's input bottom blobs and output top blobs.
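For reference, Layer<Dtype>::SetUp() in include/caffe/layer.hpp is roughly the following sequence (approximate; the exact body varies slightly between Caffe versions):

void SetUp(const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  CheckBlobCounts(bottom, top);  // verify the number of bottom/top blobs
  LayerSetUp(bottom, top);       // layer-specific setup (creates parameter blobs, etc.)
  Reshape(bottom, top);          // shape the top blobs, allocating their memory
  SetLossWeights(top);           // record the loss_weight for each top blob
}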

7. AppendParam();

For layers that have parameters, e.g. convolution and fully connected layers with weight and bias, this function mainly updates the variables related to those parameters; the actual parameter blobs were already created in the SetUp() call mentioned above. For example, pointers to the layer's parameter blobs are pushed into params_.
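For reference, the ParamSpec entries that the later lr_mult() check refers to look like this in a prototxt layer definition (a typical LeNet-style convolution layer; the values are illustrative):

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }   # ParamSpec for the weight blob
  param { lr_mult: 2 }   # ParamSpec for the bias blob
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
  }
}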

 

II. Detailed annotation of the Net::Init() code

template <typename Dtype>
void Net<Dtype>::Init(const NetParameter& in_param) {
  CHECK(Caffe::root_solver() || root_net_)
      << "root_net_ needs to be set for all non-root solvers";
  // Set phase from the state.
  phase_ = in_param.state().phase();
  // Filter layers based on their include/exclude rules and
  // the current NetState.
  NetParameter filtered_param;
  
  /* Remove the layers in in_param that do not match the current NetState rules */
  FilterNet(in_param, &filtered_param);
  LOG_IF(INFO, Caffe::root_solver())
      << "Initializing net from parameters: " << std::endl
      << filtered_param.DebugString();
  // Create a copy of filtered_param with splits added where necessary.
  NetParameter param;
  /*
   * Call InsertSplits(): where one output blob of a lower layer is consumed by
   * several higher layers, insert split layers to form the new network.
   */
  InsertSplits(filtered_param, &param);
/*
 * The part above only determines, from the *.prototxt file, the net name and how
 * the blobs are connected by name. The part below creates the layers and the blobs
 * between them: AppendTop() instantiates the intermediate blobs,
 * layer->SetUp() allocates memory for the intermediate blobs,
 * and AppendParam() handles the parameter blobs.
 */
  // Basically, build all the layers and set up their connections.
  name_ = param.name();
  map<string, int> blob_name_to_idx;
  set<string> available_blobs;
  memory_used_ = 0;  
  // For each layer, set up its input and output 
  bottom_vecs_.resize(param.layer_size());  // pointers to each layer's input (bottom) blobs
  top_vecs_.resize(param.layer_size());  // pointers to each layer's output (top) blobs
  bottom_id_vecs_.resize(param.layer_size());  // ids of each layer's input (bottom) blobs
  param_id_vecs_.resize(param.layer_size());  // ids of each layer's parameter blobs
  top_id_vecs_.resize(param.layer_size());  // ids of each layer's output (top) blobs
  bottom_need_backward_.resize(param.layer_size());  // whether each bottom blob needs backward

  // (one very large for loop) process each layer
  for (int layer_id = 0; layer_id < param.layer_size(); ++layer_id) {
    // For non-root solvers, whether this layer is shared from root_net_.
    bool share_from_root = !Caffe::root_solver()
        && root_net_->layers_[layer_id]->ShareInParallel();  // can this layer be shared from the root solver's net (multi-GPU data parallelism)?
    // Inherit phase from net if unset.
    // If the current layer has no phase set, inherit the phase of the net
    if (!param.layer(layer_id).has_phase()) {
      param.mutable_layer(layer_id)->set_phase(phase_);
    }
    // Setup layer.
    // param.layer(layer_id) returns the parameters of the current layer:
    const LayerParameter& layer_param = param.layer(layer_id); 
    if (layer_param.propagate_down_size() > 0) {
      CHECK_EQ(layer_param.propagate_down_size(),
          layer_param.bottom_size())
          << "propagate_down param must be specified "
          << "either 0 or bottom_size times ";
    }
    if (share_from_root) {
      LOG(INFO) << "Sharing layer " << layer_param.name() << " from root net";
      layers_.push_back(root_net_->layers_[layer_id]);
      layers_[layer_id]->SetShared(true);
    } else {
      /*
       * Convert the current layer's parameters into a shared_ptr<Layer<Dtype>>:
       * create a concrete layer and push it into layers_
       */
      layers_.push_back(LayerRegistry<Dtype>::CreateLayer(layer_param));
    }
    // Push the current layer's name into layer_names_ (vector<string> layer_names_)
    layer_names_.push_back(layer_param.name());
    LOG_IF(INFO, Caffe::root_solver())
        << "Creating Layer " << layer_param.name();
    bool need_backward = false;

    // Figure out this layer's input and output 
    // Build the current layer's connections in two steps: first the bottom blobs, then the top blobs
    // input bottom blobs
    for (int bottom_id = 0; bottom_id < layer_param.bottom_size();
         ++bottom_id) {
      const int blob_id = AppendBottom(param, layer_id, bottom_id,
                                       &available_blobs, &blob_name_to_idx);
      // If a blob needs backward, this layer should provide it.
      /*
       * blob_need_backward_: for every non-parameter blob in the whole net, whether it needs backward.
       * Note that "all non-parameter blobs" here really means all the top blobs visited in AppendTop(),
       * not each layer's top + bottom, because one layer's top is the next layer's bottom;
       * the net is built by stacking layers one on top of another.
       */
      need_backward |= blob_need_backward_[blob_id];
    }
    // output top blobs
    int num_top = layer_param.top_size();
    for (int top_id = 0; top_id < num_top; ++top_id) {
      AppendTop(param, layer_id, top_id, &available_blobs, &blob_name_to_idx);
      // Collect Input layer tops as Net inputs.
      if (layer_param.type() == "Input") {
        const int blob_id = blobs_.size() - 1;
        net_input_blob_indices_.push_back(blob_id);
        net_input_blobs_.push_back(blobs_[blob_id].get());
      }
    }
    // If the layer specifies that AutoTopBlobs() -> true and the LayerParameter
    // specified fewer than the required number (as specified by
    // ExactNumTopBlobs() or MinTopBlobs()), allocate them here.
    Layer<Dtype>* layer = layers_[layer_id].get();
    if (layer->AutoTopBlobs()) {
      const int needed_num_top =
          std::max(layer->MinTopBlobs(), layer->ExactNumTopBlobs());
      for (; num_top < needed_num_top; ++num_top) {
        // Add "anonymous" top blobs -- do not modify available_blobs or
        // blob_name_to_idx as we don't want these blobs to be usable as input
        // to other layers.
        AppendTop(param, layer_id, num_top, NULL, NULL);
      }
    }
    // After this layer is connected, set it up.
    if (share_from_root) {
      // Set up size of top blobs using root_net_
      const vector<Blob<Dtype>*>& base_top = root_net_->top_vecs_[layer_id];
      const vector<Blob<Dtype>*>& this_top = this->top_vecs_[layer_id];
      for (int top_id = 0; top_id < base_top.size(); ++top_id) {
        this_top[top_id]->ReshapeLike(*base_top[top_id]);
        LOG(INFO) << "Created top blob " << top_id << " (shape: "
            << this_top[top_id]->shape_string() <<  ") for shared layer "
            << layer_param.name();
      }
    } else {
      // SetUp() allocates memory for the blobs created in AppendTop()
      layers_[layer_id]->SetUp(bottom_vecs_[layer_id], top_vecs_[layer_id]);
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Setting up " << layer_names_[layer_id];
	
    // On every loop iteration, the vector blob_loss_weights_ is updated
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      // blob_loss_weights_: each time a layer is visited, blob_loss_weights_ is resized if needed,
      // and the layer's loss() method is called to obtain the loss_weight
      if (blob_loss_weights_.size() <= top_id_vecs_[layer_id][top_id]) {
        blob_loss_weights_.resize(top_id_vecs_[layer_id][top_id] + 1, Dtype(0));
      }
      // The basic element stored in top_id_vecs_ is a blob_id: every new blob is assigned a blob_id,
      // but these blob_ids may repeat (e.g. for in-place computation)
      blob_loss_weights_[top_id_vecs_[layer_id][top_id]] = layer->loss(top_id);
      // loss() returns the loss_weight: in the Layer template class, SetUp() calls SetLossWeights()
      // to set the private member loss_, which actually stores the loss_weight
      LOG_IF(INFO, Caffe::root_solver())
          << "Top shape: " << top_vecs_[layer_id][top_id]->shape_string();
	  
      if (layer->loss(top_id)) {
        LOG_IF(INFO, Caffe::root_solver())
            << "    with loss weight " << layer->loss(top_id);
      }
      // accumulate the memory required
      memory_used_ += top_vecs_[layer_id][top_id]->count();
    }
    LOG_IF(INFO, Caffe::root_solver())
        << "Memory required for data: " << memory_used_ * sizeof(Dtype);

    /*
     * The following part handles each layer's param blobs, mainly via AppendParam(),
     * which adds the param blobs and their ids to params_, param_id_vecs_, etc.
     */
    const int param_size = layer_param.param_size();
    // Number of blobs_ inside the layer, i.e. how many parameter blobs this layer has;
    // e.g. convolution and InnerProduct layers each have two (weight and bias)
    const int num_param_blobs = layers_[layer_id]->blobs().size();
    // param_size is the number of ParamSpec param entries in the LayerParameter object layer_param;
    // num_param_blobs is the number of learnable parameter blobs in a Layer; param_size <= num_param_blobs
    CHECK_LE(param_size, num_param_blobs)
        << "Too many params specified for layer " << layer_param.name();
    ParamSpec default_param_spec;
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      const ParamSpec* param_spec = (param_id < param_size) ? &layer_param.param(param_id) : &default_param_spec;
      const bool param_need_backward = param_spec->lr_mult() != 0;
      // param_need_backward decides whether need_backward becomes true;
      // if any iteration makes it true, need_backward remains true after this for loop ends
      need_backward |= param_need_backward;
      layers_[layer_id]->set_param_propagate_down(param_id,
                                                  param_need_backward);
    }
    /*
     * Add parameter blobs. If the current layer has no parameter blobs (num_param_blobs == 0),
     * e.g. ReLU, the loop body is never entered and no parameter blob is added.
     * AppendParam() only performs the work of adding the current layer's parameter blobs;
     * it does not modify backward-related properties.
     */
    for (int param_id = 0; param_id < num_param_blobs; ++param_id) {
      AppendParam(param, layer_id, param_id);
    }
    // Finally, set the backward flag
    layer_need_backward_.push_back(need_backward);
    /*
     * In AppendTop() above, a false (the default) was pushed into blob_need_backward_
     * for every top blob of the current layer.
     * In the code below, if this layer needs backward, blob_need_backward_ is updated.
     */
    if (need_backward) {
      for (int top_id = 0; top_id < top_id_vecs_[layer_id].size(); ++top_id) {
        blob_need_backward_[top_id_vecs_[layer_id][top_id]] = true;
      }
    }
  }
/* At this point all layers have been created and set up; the part below revises the backward settings in reverse (back-to-front) order. */
  
  // Go through the net backwards to determine which blobs contribute to the
  // loss.  We can skip backward computation for blobs that don't contribute
  // to the loss.
  // Also checks if all bottom blobs don't need backward computation (possible
  // because the skip_propagate_down param) and so we can skip backward
  // computation for the entire layer
  /*
   * Note that the backward settings in the code above were made in forward order,
   * while the code below revises those forward-order results in backward order.
   * Whether a layer needs backward computation mainly depends on two things:
   *   (1) whether the layer's top blobs contribute to the loss computation;
   *   (2) whether the layer's bottom blobs need backward computation,
   *       e.g. a Data layer normally does not need backward computation.
   */
  set<string> blobs_under_loss;
  set<string> blobs_skip_backp;
  // backward pass over the layers, from last to first
  for (int layer_id = layers_.size() - 1; layer_id >= 0; --layer_id) {
    bool layer_contributes_loss = false;
    bool layer_skip_propagate_down = true;
    /*
     * If true, the current layer's bottom blobs do not need backward computation,
     * i.e. this layer does not need backward computation.
     * The meaning of this local variable is exactly the opposite of the definition of
     * propagate_down in message LayerParameter in caffe.proto.
     */
    for (int top_id = 0; top_id < top_vecs_[layer_id].size(); ++top_id) {
      // blob_names_: the names of all non-parameter blobs in the whole net
      const string& blob_name = blob_names_[top_id_vecs_[layer_id][top_id]];
      if (layers_[layer_id]->loss(top_id) ||
          (blobs_under_loss.find(blob_name) != blobs_under_loss.end())) {
        layer_contributes_loss = true;
      }
      if (blobs_skip_backp.find(blob_name) == blobs_skip_backp.end()) {
        layer_skip_propagate_down = false;
      }
      if (layer_contributes_loss && !layer_skip_propagate_down)
        break;
    }
    // If this layer can skip backward computation, also all his bottom blobs
    // don't need backpropagation
    if (layer_need_backward_[layer_id] && layer_skip_propagate_down) {
      layer_need_backward_[layer_id] = false;
      for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
               ++bottom_id) {
        // bottom_need_backward_: for all layers in the net, whether each bottom blob needs backward
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
    }
    if (!layer_contributes_loss) { layer_need_backward_[layer_id] = false; }
    if (Caffe::root_solver()) {
      if (layer_need_backward_[layer_id]) {
        LOG(INFO) << layer_names_[layer_id] << " needs backward computation.";
      } else {
        LOG(INFO) << layer_names_[layer_id]
            << " does not need backward computation.";
      }
    }
    // revise the results of the forward-order settings
    for (int bottom_id = 0; bottom_id < bottom_vecs_[layer_id].size();
         ++bottom_id) {
      if (layer_contributes_loss) {
        const string& blob_name =
            blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_under_loss.insert(blob_name);  // add a new element to blobs_under_loss
      } else {
        bottom_need_backward_[layer_id][bottom_id] = false;
      }
      if (!bottom_need_backward_[layer_id][bottom_id]) {
        const string& blob_name =
                   blob_names_[bottom_id_vecs_[layer_id][bottom_id]];
        blobs_skip_backp.insert(blob_name);
      }
    }
  }
  // Handle force_backward if needed.
  if (param.force_backward()) {
    for (int layer_id = 0; layer_id < layers_.size(); ++layer_id) {
      layer_need_backward_[layer_id] = true;
      for (int bottom_id = 0;
           bottom_id < bottom_need_backward_[layer_id].size(); ++bottom_id) {
        bottom_need_backward_[layer_id][bottom_id] =
            bottom_need_backward_[layer_id][bottom_id] ||
            layers_[layer_id]->AllowForceBackward(bottom_id);
        blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] =
            blob_need_backward_[bottom_id_vecs_[layer_id][bottom_id]] ||
            bottom_need_backward_[layer_id][bottom_id];
      }
      for (int param_id = 0; param_id < layers_[layer_id]->blobs().size();
           ++param_id) {
        layers_[layer_id]->set_param_propagate_down(param_id, true);
      }
    }
  }
  // In the end, all remaining blobs are considered output blobs.
  for (set<string>::iterator it = available_blobs.begin();
      it != available_blobs.end(); ++it) {
    LOG_IF(INFO, Caffe::root_solver())
        << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
    net_output_blob_indices_.push_back(blob_name_to_idx[*it]);
  }
  for (size_t blob_id = 0; blob_id < blob_names_.size(); ++blob_id) {
    // First use of blob_names_index_ (a map): add elements one by one
    blob_names_index_[blob_names_[blob_id]] = blob_id;
  }
  for (size_t layer_id = 0; layer_id < layer_names_.size(); ++layer_id) {
    // First use of layer_names_index_ (a map): add elements one by one
    layer_names_index_[layer_names_[layer_id]] = layer_id;
  }
  ShareWeights();
  debug_info_ = param.debug_info();
  LOG_IF(INFO, Caffe::root_solver()) << "Network initialization done.";
}
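
For completeness, here is a minimal usage sketch (assuming a standard Caffe build; the prototxt path is illustrative, and the constructor has extra optional arguments in some Caffe versions). Constructing a Net object is what triggers Init():

#include "caffe/caffe.hpp"

int main() {
  // Building the Net runs Init() on the filtered, split-augmented NetParameter.
  caffe::Net<float> net("examples/mnist/lenet.prototxt", caffe::TEST);
  return 0;
}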

(End)

 
