[Code usage] ghost-yolo: notes on converting darknet to caffe

A while ago I read the GhostNet paper and put together a ghost-yolo-2layer structure; this post mainly records the workflow of converting ghost-yolo to caffe.

Paper notes: https://blog.csdn.net/weixin_38715903/article/details/105285570

darknet:https://github.com/AlexeyAB/darknet/

darknet to caffe:https://github.com/marvis/pytorch-caffe-darknet-convert 

caffe-yolo:https://github.com/ChenYingpeng/caffe-yolov3

Files needed: https://github.com/hualuluu/ghost-yolo

Contents

1. About darknet

2. About darknet to caffe

1) The logistic activation is handled inside the [convolutional] block

2) scale_channels

3) Average pooling

3. Step-by-step

3.1 Training

3.2 darknet to caffe

3.3 caffe-yolo


1. About darknet

The darknet used for ghost-yolo is not the original one but https://github.com/AlexeyAB/darknet/, the fork that Alexey has kept maintaining, so it adds several layers that the original darknet does not support.

2. About darknet to caffe

The converter used is https://github.com/marvis/pytorch-caffe-darknet-convert. Its author's last commits were made several years ago, so some layers are no longer supported.

Because of this, the logistic activation, scale_channels, and avg pooling layers used in GhostNet all need some changes or additions, otherwise the converted model will be wrong.

1) The logistic activation is handled inside the [convolutional] block:
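In Caffe the logistic activation simply becomes an in-place Sigmoid layer, so the branch below is added to the convolutional-block handling of the cfg-to-prototxt conversion in darknet2caffe.py, right in front of the existing leaky/ReLU branch: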

            if block['activation'] == 'logistic':
                sigmoid_layer = OrderedDict()
                sigmoid_layer['bottom'] = bottom
                sigmoid_layer['top'] = bottom
                if block.has_key('name'):
                    sigmoid_layer['name'] = '%s-act' % block['name']
                else:
                    sigmoid_layer['name'] = 'layer%d-act' % layer_id
                sigmoid_layer['type'] = 'Sigmoid'
                layers.append(sigmoid_layer)
            # the block above is the newly added code; below is the original leaky/ReLU handling
            elif block['activation'] != 'linear':
                relu_layer = OrderedDict()
                relu_layer['bottom'] = bottom
                relu_layer['top'] = bottom
                if block.has_key('name'):
                    relu_layer['name'] = '%s-act' % block['name']
                else:
                    relu_layer['name'] = 'layer%d-act' % layer_id
                relu_layer['type'] = 'ReLU'
                if block['activation'] == 'leaky':
                    relu_param = OrderedDict()
                    relu_param['negative_slope'] = '0.1'
                    relu_layer['relu_param'] = relu_param
                layers.append(relu_layer)
            topnames[layer_id] = bottom
            layer_id = layer_id+1

2) scale_channels:

This layer maps to two operations in Caffe, a Reshape followed by a Scale:

        elif block['type'] == 'scale_channels':
            reshape_layer= OrderedDict()
            scale_layer = OrderedDict()
            layer_name = str(block['from'])
            
            # resolve the index of the 'from' layer: absolute if positive,
            # relative to the current layer if negative
            if(int(block['from'])>0):
                prev_layer_id1=int(block['from'])+1
            else:
                prev_layer_id1 = layer_id + int(block['from'])


            # flatten the previous layer's N x C x 1 x 1 output (the SE attention
            # weights) to N x C so the Scale layer below can broadcast it
            reshape_bottom=topnames[layer_id-1]
            reshape_layer['bottom'] = reshape_bottom
            reshape_param = OrderedDict()
            reshape_param['shape']=OrderedDict()
            reshape_param['shape']['dim']=['0']
            reshape_param['shape']['dim'].append('0')
            reshape_layer['reshape_param']=reshape_param
            reshape_layer['type']='Reshape'
            reshape_layer['name']='layer%d-reshape' % layer_id
            reshape_layer['top']='layer%d-reshape' % layer_id
            layers.append(reshape_layer)
            print('~~~~~~~~~~')
            
            bottom1 = topnames[prev_layer_id1]   # the 'from' feature map (N x C x H x W)
            bottom2 = reshape_layer['name']      # the reshaped attention weights (N x C)
            scale_layer['bottom'] = [bottom1, bottom2]
            if block.has_key('name'):
                    scale_layer['top'] = block['name']
                    scale_layer['name'] = block['name']
            else:
                    scale_layer['top'] = 'layer%d-scale' % layer_id
                    scale_layer['name'] = 'layer%d-scale' % layer_id
            scale_param = OrderedDict()
            filler=OrderedDict()
            bias_filler=OrderedDict()
            filler['value']='1.0'
            bias_filler['value']='0'
            # axis=0: the N x C second bottom matches axes 0..1 of the feature map,
            # so the scaling is applied per sample and per channel
            scale_param['axis']='0'
            scale_param['bias_term']='false'
            #scale_param['filler']=filler
            #scale_param['bias_filler']=bias_filler
            scale_layer['type'] = 'Scale'
            scale_layer['scale_param']=scale_param
            
            layers.append(scale_layer)
            bottom = scale_layer['top']
            print(layer_id)
            topnames[layer_id] = bottom
            layer_id = layer_id + 1
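In the generated prototxt this becomes a Reshape that flattens the N x C x 1 x 1 output of the SE branch to N x C, followed by a two-bottom Scale layer with axis: 0, so each channel of the 'from' feature map is multiplied by its own attention weight.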

3) Average pooling

The average pooling layer in darknet takes no parameters and by default outputs a 1*1 feature map, but Caffe needs an explicit pooling kernel size, namely the width and height (input_w, input_h) of the incoming feature map, so take care when converting.
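Below is a minimal sketch of the kind of [avgpool] branch the converter needs, written in the same style as the other branches in darknet2caffe.py. It assumes the cfg block carries the size_h/size_w fields added in section 3.2 (those names come from the cfg snippet there, they are not a standard darknet option) and emits an average Pooling layer whose kernel equals the incoming feature map size:

        elif block['type'] == 'avgpool':
            avg_layer = OrderedDict()
            avg_layer['bottom'] = bottom
            if block.has_key('name'):
                avg_layer['top'] = block['name']
                avg_layer['name'] = block['name']
            else:
                avg_layer['top'] = 'layer%d-avgpool' % layer_id
                avg_layer['name'] = 'layer%d-avgpool' % layer_id
            avg_layer['type'] = 'Pooling'
            pooling_param = OrderedDict()
            pooling_param['pool'] = 'AVE'
            # Caffe needs an explicit kernel; take it from the size_h/size_w
            # fields added to the [avgpool] block in the cfg (the height and
            # width of the incoming feature map), so the output becomes 1x1
            pooling_param['kernel_h'] = block['size_h']
            pooling_param['kernel_w'] = block['size_w']
            avg_layer['pooling_param'] = pooling_param
            layers.append(avg_layer)
            bottom = avg_layer['top']
            topnames[layer_id] = bottom
            layer_id = layer_id + 1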

3. Step-by-step

Download the cfg file and darknet2caffe.py from https://github.com/hualuluu/ghost-yolo.

3.1 Training

I will skip the training process itself [if you are not sure how, see https://blog.csdn.net/weixin_38715903/article/details/103695844]; just download the cfg file and train it following the usual darknet procedure.
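For reference, a typical AlexeyAB-darknet training command looks like the following; data/obj.data stands in for your own data file, and a pretrained weights file can be appended as a third argument if you have one:

./darknet detector train data/obj.data cfg/ghostnet-yolo.cfg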

This step produces ghostnet-yolo.weights.

3.2 darknet to caffe

git clone https://github.com/marvis/pytorch-caffe-darknet-convert 

Then download darknet2caffe.py and use it to replace the original darknet2caffe.py.

Note!! Some paths in the downloaded darknet2caffe.py need to be changed to your own, for example your caffe path [a few caffe header files are needed].

Copy the cfg file used for training into pytorch-caffe-darknet-convert/cfg, and change the pooling kernel size parameters according to the spatial size of the feature map that enters the avg pooling layer:

[avgpool]

Change it to the following; 34 and 60 are used here because that is the size of the incoming feature map, and the average pooling has to reduce it to 1*1:
[avgpool]
size_h=34
size_w=60
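(To work out these numbers for your own network: the incoming feature map size is the network input size divided by the cumulative stride up to the [avgpool] layer; for example, a hypothetical 960x544 input with a cumulative stride of 16 would give size_w=60, size_h=34. Darknet also prints every layer's output size when it loads the cfg, which is the easiest way to check.)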
Then run:
sudo python2.7 darknet2caffe.py  cfg/ghostnet-yolo.cfg ghostnet-yolo.weights ghostnet-yolo.prototxt ghostnet-yolo.caffemodel

This step produces ghostnet-yolo.prototxt and ghostnet-yolo.caffemodel.

3.3 caffe-yolo

git clone https://github.com/ChenYingpeng/caffe-yolov3
 
cd caffe-yolov3
  • Put the generated caffemodel and prototxt under ./caffemodel and ./prototxt respectively [create the folders if they do not exist]
  • Edit CMakeLists.txt
"""change every path below to your own caffe path"""
# build C/C++ interface
include_directories(${PROJECT_INCLUDE_DIR} ${GIE_PATH}/include)
include_directories(${PROJECT_INCLUDE_DIR} 
	/home/ubuntu247/liliang/caffe-ssd/include 
	/home/ubuntu247/liliang/caffe-ssd/build/include 
)
 
 
file(GLOB inferenceSources *.cpp *.cu )
file(GLOB inferenceIncludes *.h )
 
cuda_add_library(yolov3-plugin SHARED ${inferenceSources})
target_link_libraries(yolov3-plugin 
	/home/ubuntu247/liliang/caffe-ssd/build/lib/libcaffe.so  
	/usr/lib/x86_64-linux-gnu/libglog.so  
	/usr/lib/x86_64-linux-gnu/libgflags.so.2
    	/usr/lib/x86_64-linux-gnu/libboost_system.so  
	/usr/lib/x86_64-linux-gnu/libGLEW.so.1.13  
)
  • If you trained with your own anchor values, change the anchors (in yolo_layer.cpp) and the classes count in yolo_layer.h before building; the relevant parts of both files are shown below:
/*
 * Company:	Synthesis
 * Author: 	Chen
 * Date:	2018/06/04
 */
 
#include "yolo_layer.h"
#include "blas.h"
#include "cuda.h"
#include "activations.h"
#include "box.h"
#include <stdio.h>
#include <math.h>
 
//yolov3
//float biases[18] = {10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326};
float biases[18] = {7, 15, 16, 18, 22, 32, 9, 40, 20, 71, 37, 39, 52, 65, 70, 110, 105, 208};
/*
 * Company:	Synthesis
 * Author: 	Chen
 * Date:	2018/06/04	
 */
 
#ifndef __YOLO_LAYER_H_
#define __YOLO_LAYER_H_
#include <caffe/caffe.hpp>
#include <string>
#include <vector>
 
using namespace caffe;
 
 
const int classes = 3;
const float thresh = 0.5;
const float hier_thresh = 0.5;
const float nms_thresh = 0.5;
const int num_bboxes = 3;
const int relative = 1;
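Note that classes here must match the classes= value in the [yolo] sections of your cfg, and the 18 entries of biases[] above are simply the 9 anchor width/height pairs copied from the anchors= line of the cfg.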
  • Build
mkdir build
cd build
cmake ..
make -j12
  • Run
./x86_64/bin/detectnet ../prototxt/ghostnet-yolo.prototxt ../caffemodel/ghostnet-yolo.caffemodel ../images/bicycle.jpg
  • Results
