Deep Learning in Practice: Training Faster R-CNN on the Caltech Dataset (Modifying the Data I/O Interface)

Preface

This post explains how to modify the Faster R-CNN code to train on your own dataset. Before starting, make sure you have compiled and installed py-faster-rcnn and prepared your dataset; see my previous post for details.

py-faster-rcnn Directory Structure

  • caffe-fast-rcnn
    The Caffe framework directory, used to build and install Caffe.
  • data
    Holds pre-trained models (e.g. ImageNet models), the dataset to train on, and the cache files produced when reading data.
  • experiments
    Holds configuration files and run logs; its scripts subdirectory contains scripts to fetch ImageNet models, the authors' trained Fast R-CNN models, and the corresponding PASCAL VOC dataset.
  • lib
    Python interface files; the datasets subdirectory handles dataset reading, and config holds the CNN training options.
  • matlab
    The MATLAB-to-Python interface, for running detection from MATLAB.
  • models
    Three network definitions: the small ZF, the large VGG16, and the medium VGG_CNN_M_1024.
  • output
    The output directory for finished training runs; by default results go under the default folder.
  • tools
    The Python scripts for training and testing.

Modifying the Training Code

Files to Be Modified

All the training code we need to modify lives under py-faster-rcnn/lib. The subdirectories we mainly use are:

  • datasets: the data reading and writing interfaces.
  • fast_rcnn: the Python training and testing scripts, plus the training configuration.
  • roi_data_layer: ROI processing code.
  • utils: common utilities, such as non-maximum suppression (NMS) and bounding-box overlap computation.

The data I/O interfaces live under datasets/. The main files there are:

  • factory.py: a factory that constructs imdb instances and hands the databases to the network for training and testing.
  • imdb.py: the base class for dataset I/O; it encapsulates many generic database operations, but the concrete file reading and writing must be implemented by subclasses.
  • pascal_voc.py: a subclass of imdb that implements all the concrete data I/O for PASCAL VOC.

As this list suggests, pascal_voc.py is the file we mainly need to modify.
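The relationship between these three files can be sketched in miniature. The snippet below is an illustrative mock, not the real py-faster-rcnn classes; the stub annotation values are made up:

```python
# Illustrative mock of the datasets/ design: imdb.py provides the generic base
# class, a dataset file like caltech.py subclasses it with the concrete I/O,
# and factory.py maps dataset names to constructors.

class imdb(object):
    """Base class: generic bookkeeping; concrete I/O is left to subclasses."""
    def __init__(self, name):
        self.name = name

    def gt_roidb(self):
        raise NotImplementedError

class caltech(imdb):
    """Dataset subclass: knows the on-disk layout and annotation format."""
    def gt_roidb(self):
        # a stub annotation record in the roidb format
        return [{'boxes': [[10, 10, 50, 80]], 'gt_classes': [1]}]

# factory.py-style registry: name -> constructor
__sets = {'caltech_train': (lambda: caltech('caltech_train'))}

def get_imdb(name):
    return __sets[name]()

print(get_imdb('caltech_train').gt_roidb()[0]['gt_classes'])  # [1]
```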

Code Walkthrough of pascal_voc.py

Our modifications are based on pascal_voc.py. Its important functions are:

def __init__(self, image_set, devkit_path=None): # constructor; encodes the PASCAL VOC dataset layout.

def image_path_at(self, i): # returns the path of the i-th image sample; delegates to image_path_from_index(self, index) for the concrete implementation.

def image_path_from_index(self, index): # the concrete implementation of image path lookup.

def _load_image_set_index(self): # loads the sample list and builds image_index from the file under ImageSets/Main/.

def _get_default_path(self): # returns the default dataset path.

def gt_roidb(self): # reads and returns the ground-truth roidb.

def rpn_roidb(self): # loads the ROIs produced by the RPN; delegates to _load_rpn_roidb(self, gt_roidb) for the concrete implementation.

def _load_rpn_roidb(self, gt_roidb): # loads the rpn_file.

def _load_pascal_annotation(self, index): # the concrete implementation of ground-truth reading.

def _write_voc_results_file(self, all_boxes): # writes the VOC detection results to file.

def _do_python_eval(self, output_dir = 'output'): # analyzes the results via the Python evaluation interface.
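For a feel of how simple most of these are, image_path_from_index does little more than join path pieces. A standalone sketch, assuming images sit under JPEGImages in the VOC-style layout (the paths and index below are illustrative):

```python
import os

def image_path_from_index(data_path, index, image_ext='.jpg'):
    # e.g. data_path='/data/VOCdevkit/Caltech', index='set00_V000_100'
    return os.path.join(data_path, 'JPEGImages', index + image_ext)

print(image_path_from_index('/data/VOCdevkit/Caltech', 'set00_V000_100'))
# /data/VOCdevkit/Caltech/JPEGImages/set00_V000_100.jpg
```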

Modifying pascal_voc.py

To read our own dataset, we mainly modify pascal_voc.py, but to avoid breaking the original we work on a copy. Here I copy pascal_voc.py to caltech.py (remember to also rename the class inside the copy from pascal_voc to caltech):

cp pascal_voc.py caltech.py

Below I go through every function I modified in caltech.py, in the order they appear in the file.

Modifying __init__

Here is the original pascal_voc __init__. Our own datasets are usually simpler than VOC, and the author's code does a lot of path concatenation to match the VOC layout; we do not need to follow that format, so we simplify these operations.

Original function:
def __init__(self, image_set, year, devkit_path=None):
        imdb.__init__(self, 'voc_' + year + '_' + image_set)
        self._year = year
        self._image_set = image_set
        self._devkit_path = self._get_default_path() if devkit_path is None \
                            else devkit_path
        self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)
        self._classes = ('__background__', # always index 0
                         'aeroplane', 'bicycle', 'bird', 'boat',
                         'bottle', 'bus', 'car', 'cat', 'chair',
                         'cow', 'diningtable', 'dog', 'horse',
                         'motorbike', 'person', 'pottedplant',
                         'sheep', 'sofa', 'train', 'tvmonitor')
        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
        self._image_ext = '.jpg'
        self._image_index = self._load_image_set_index()
        # Default to roidb handler
        self._roidb_handler = self.selective_search_roidb
        self._salt = str(uuid.uuid4())
        self._comp_id = 'comp4'

        # PASCAL specific config options
        self.config = {'cleanup'     : True,
                       'use_salt'    : True,
                       'use_diff'    : False,
                       'matlab_eval' : False,
                       'rpn_file'    : None,
                       'min_size'    : 2}

        assert os.path.exists(self._devkit_path), \
                'VOCdevkit path does not exist: {}'.format(self._devkit_path)
        assert os.path.exists(self._data_path), \
                'Path does not exist: {}'.format(self._data_path)
Modified function:
def __init__(self, image_set, devkit_path=None): # constructor; the 'year' argument is removed
        imdb.__init__(self, image_set) # imageset is train.txt or test.txt
        self._image_set = image_set
        self._devkit_path = devkit_path # devkit_path = '~/py-faster-rcnn/data/VOCdevkit'
        self._data_path = os.path.join(self._devkit_path, 'Caltech') # _data_path = '~/py-faster-rcnn/data/VOCdevkit/Caltech'
        self._classes = ('__background__', # always index 0
                         'person') # I only have two classes: 'background' and 'person'
        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
        self._image_ext = '.jpg'
        self._image_index = self._load_image_set_index()
        # Default to roidb handler
        self._roidb_handler = self.selective_search_roidb
        self._salt = str(uuid.uuid4())
        self._comp_id = 'comp4'

        # PASCAL specific config options
        self.config = {'cleanup'     : True,
                       'use_salt'    : True,
                       'use_diff'    : True, # changed to True: the XML files in my dataset have no <difficult> tag, and leaving this False crashes training later
                       'matlab_eval' : False,
                       'rpn_file'    : None,
                       'min_size'    : 2}

        assert os.path.exists(self._devkit_path), \
                'VOCdevkit path does not exist: {}'.format(self._devkit_path)
        assert os.path.exists(self._data_path), \
                'Path does not exist: {}'.format(self._data_path)

Modifying _load_image_set_index

Original function:
def _load_image_set_index(self):
        """
        Load the indexes listed in this dataset's image set file.
        """
        # Example path to image set file:
        # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt
        image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',
                                      self._image_set + '.txt')
        assert os.path.exists(image_set_file), \
                'Path does not exist: {}'.format(image_set_file)
        with open(image_set_file) as f:
            image_index = [x.strip() for x in f.readlines()]
        return image_index
Modified function:
def _load_image_set_index(self):
        """
        Load the indexes listed in this dataset's image set file.
        """
        # Example path to image set file:
        # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt
        # /home/jk/py-faster-rcnn/data/VOCdevkit/Caltech/ImageSets/Main/train.txt
        image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',
                                      self._image_set + '.txt')
        assert os.path.exists(image_set_file), \
                'Path does not exist: {}'.format(image_set_file)
        with open(image_set_file) as f:
            image_index = [x.strip() for x in f.readlines()]
        return image_index

In fact nothing changed here; I only added one comment line to make the path easier to follow.
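The behavior of this function is easy to demonstrate standalone. The snippet below mirrors what _load_image_set_index does, using a throwaway file and made-up indices:

```python
import os
import tempfile

def load_image_set_index(image_set_file):
    # mirrors _load_image_set_index: one image index per line, whitespace stripped
    assert os.path.exists(image_set_file), \
        'Path does not exist: {}'.format(image_set_file)
    with open(image_set_file) as f:
        return [x.strip() for x in f.readlines()]

# build a throwaway train.txt to show the expected file format
path = os.path.join(tempfile.mkdtemp(), 'train.txt')
with open(path, 'w') as f:
    f.write('set00_V000_100\nset00_V000_200\n')
print(load_image_set_index(path))  # ['set00_V000_100', 'set00_V000_200']
```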

Modifying _get_default_path

Simply comment this function out; the modified __init__ no longer calls it.

Modifying _load_pascal_annotation

Original function:
def _load_pascal_annotation(self, index):
        """
        Load image and bounding boxes info from XML file in the PASCAL VOC
        format.
        """
        filename = os.path.join(self._data_path, 'Annotations', index + '.xml')
        tree = ET.parse(filename)
        objs = tree.findall('object')
        if not self.config['use_diff']:
            # Exclude the samples labeled as difficult
            non_diff_objs = [
                obj for obj in objs if int(obj.find('difficult').text) == 0]
            # if len(non_diff_objs) != len(objs):
            #     print 'Removed {} difficult objects'.format(
            #         len(objs) - len(non_diff_objs))
            objs = non_diff_objs
        num_objs = len(objs)

        boxes = np.zeros((num_objs, 4), dtype=np.uint16)
        gt_classes = np.zeros((num_objs), dtype=np.int32)
        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
        # "Seg" area for pascal is just the box area
        seg_areas = np.zeros((num_objs), dtype=np.float32)

        # Load object bounding boxes into a data frame.
        for ix, obj in enumerate(objs):
            bbox = obj.find('bndbox')
            # Make pixel indexes 0-based
            x1 = float(bbox.find('xmin').text) - 1
            y1 = float(bbox.find('ymin').text) - 1
            x2 = float(bbox.find('xmax').text) - 1
            y2 = float(bbox.find('ymax').text) - 1
            cls = self._class_to_ind[obj.find('name').text.lower().strip()]
            boxes[ix, :] = [x1, y1, x2, y2]
            gt_classes[ix] = cls
            overlaps[ix, cls] = 1.0
            seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)

        overlaps = scipy.sparse.csr_matrix(overlaps)

        return {'boxes' : boxes,
                'gt_classes': gt_classes,
                'gt_overlaps' : overlaps,
                'flipped' : False,
                'seg_areas' : seg_areas}
Modified function:
def _load_pascal_annotation(self, index):
        """
        Load image and bounding boxes info from XML file in the PASCAL VOC
        format.
        """
        filename = os.path.join(self._data_path, 'Annotations', index + '.xml')
        tree = ET.parse(filename)
        objs = tree.findall('object')
        if not self.config['use_diff']:
            # Exclude the samples labeled as difficult
            non_diff_objs = [
                obj for obj in objs if int(obj.find('difficult').text) == 0]
            # if len(non_diff_objs) != len(objs):
            #     print 'Removed {} difficult objects'.format(
            #         len(objs) - len(non_diff_objs))
            objs = non_diff_objs
        num_objs = len(objs)

        boxes = np.zeros((num_objs, 4), dtype=np.uint16)
        gt_classes = np.zeros((num_objs), dtype=np.int32)
        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
        # "Seg" area for pascal is just the box area
        seg_areas = np.zeros((num_objs), dtype=np.float32)

        # Load object bounding boxes into a data frame.
        for ix, obj in enumerate(objs):
            bbox = obj.find('bndbox')
            # Make pixel indexes 0-based
            # the '-1' offsets are removed here: some datasets' coordinates start at 0, and subtracting 1 would make them negative and trigger an AssertionError later
            x1 = float(bbox.find('xmin').text)
            y1 = float(bbox.find('ymin').text)
            x2 = float(bbox.find('xmax').text)
            y2 = float(bbox.find('ymax').text)
            cls = self._class_to_ind[obj.find('name').text.lower().strip()]
            boxes[ix, :] = [x1, y1, x2, y2]
            gt_classes[ix] = cls
            overlaps[ix, cls] = 1.0
            seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)

        overlaps = scipy.sparse.csr_matrix(overlaps)

        return {'boxes' : boxes,
                'gt_classes': gt_classes,
                'gt_overlaps' : overlaps,
                'flipped' : False,
                'seg_areas' : seg_areas}
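The effect of dropping the '-1' can be checked standalone. The snippet below parses an in-memory VOC-style annotation (with made-up coordinates) the same way the loop above does:

```python
import numpy as np
import xml.etree.ElementTree as ET

xml_str = """
<annotation>
  <object>
    <name>person</name>
    <bndbox><xmin>0</xmin><ymin>12</ymin><xmax>48</xmax><ymax>96</ymax></bndbox>
  </object>
</annotation>"""

objs = ET.fromstring(xml_str).findall('object')
boxes = np.zeros((len(objs), 4), dtype=np.uint16)
for ix, obj in enumerate(objs):
    bbox = obj.find('bndbox')
    # no '-1' offset: an xmin of 0 would wrap around in the uint16 array otherwise
    boxes[ix, :] = [float(bbox.find(tag).text)
                    for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
print(boxes.tolist())  # [[0, 12, 48, 96]]
```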

Modifying the main block

Original code:

if __name__ == '__main__':
    from datasets.pascal_voc import pascal_voc
    d = pascal_voc('trainval', '2007')
    res = d.roidb
    from IPython import embed; embed()

Modified code:

if __name__ == '__main__':
    from datasets.caltech import caltech # import the caltech module
    d = caltech('train', '/home/jk/py-faster-rcnn/data/VOCdevkit') # call the constructor with the image set and the devkit path
    res = d.roidb
    from IPython import embed; embed()

This completes the data-reading interface; the remaining functions in the file are unchanged.

Modifying factory.py

During training, the network calls factory's get method to obtain the corresponding imdb. First, change the import at the top of the file from pascal_voc to caltech.

The author builds paths for several datasets in this file; for our own dataset we only need the root path. Together with that import change, the main modifications are:

  • Comment out the nested for loops that register the voc and coco sets.
  • Define devkit directly.
  • Register our own training and testing imdbs, with names of the form caltech_{}.
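A side note on the registration loop: the `lambda split=split: ...` default-argument idiom in factory.py is what freezes the loop variable at its current value; a plain closure would not. A quick standalone demonstration:

```python
splits = ['train', 'test']

# wrong: every lambda closes over the same loop variable
wrong = {}
for split in splits:
    wrong[split] = (lambda: split)

# right: a default argument captures the current value, as factory.py does
right = {}
for split in splits:
    right[split] = (lambda split=split: split)

print(wrong['train']())  # 'test' -- both entries see the loop's final value
print(right['train']())  # 'train'
```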

Original code:

# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------

"""Factory method for easily getting imdbs by name."""

__sets = {}

from datasets.pascal_voc import pascal_voc
from datasets.coco import coco
import numpy as np

# Set up voc_<year>_<split> using selective search "fast" mode
for year in ['2007', '2012']:
    for split in ['train', 'val', 'trainval', 'test']:
        name = 'voc_{}_{}'.format(year, split)
        __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))

# Set up coco_2014_<split>
for year in ['2014']:
    for split in ['train', 'val', 'minival', 'valminusminival']:
        name = 'coco_{}_{}'.format(year, split)
        __sets[name] = (lambda split=split, year=year: coco(split, year))

# Set up coco_2015_<split>
for year in ['2015']:
    for split in ['test', 'test-dev']:
        name = 'coco_{}_{}'.format(year, split)
        __sets[name] = (lambda split=split, year=year: coco(split, year))

def get_imdb(name):
    """Get an imdb (image database) by name."""
    if not __sets.has_key(name):
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

def list_imdbs():
    """List all registered imdbs."""
    return __sets.keys()

Modified file:

# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------

"""Factory method for easily getting imdbs by name."""

__sets = {}

from datasets.caltech import caltech # import the caltech module
#from datasets.coco import coco
#import numpy as np

devkit = '/home/jk/py-faster-rcnn/data/VOCdevkit'
# Set up voc_<year>_<split> using selective search "fast" mode
#for year in ['2007', '2012']:
#    for split in ['train', 'val', 'trainval', 'test']:
#        name = 'voc_{}_{}'.format(year, split)
#        __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))

# Set up coco_2014_<split>
#for year in ['2014']:
#    for split in ['train', 'val', 'minival', 'valminusminival']:
#        name = 'coco_{}_{}'.format(year, split)
#        __sets[name] = (lambda split=split, year=year: coco(split, year))

# Set up coco_2015_<split>
#for year in ['2015']:
#    for split in ['test', 'test-dev']:
#        name = 'coco_{}_{}'.format(year, split)
#        __sets[name] = (lambda split=split, year=year: coco(split, year))

# Set up caltech_<split>
for split in ['train', 'test']:
    name = 'caltech_{}'.format(split)
    __sets[name] = (lambda imageset=split, devkit=devkit: caltech(imageset, devkit))

def get_imdb(name):
    """Get an imdb (image database) by name."""
    if not __sets.has_key(name):
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

def list_imdbs():
    """List all registered imdbs."""
    return __sets.keys()

Modifying __init__.py

Add from .caltech import caltech at the top of datasets/__init__.py.

Summary

  • On coordinate order once more: boxes must be (left, top, right, bottom), and x1 must be less than x2. Get this wrong and the horizontal-flip augmentation will fail. Coordinates are 0-based; if yours already start at 0, do not subtract 1.
  • Do not make training images too large, or too many object proposals (OPs) are generated and everything slows down; resize samples to roughly 500 to 600 pixels before extracting proposals.
  • If the data content or ordering changes after the pkl cache has been generated, remember to delete the pkl files under data/cache/.
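The first bullet lends itself to a quick sanity check you can run over your parsed boxes before training (the function and names here are illustrative, not part of py-faster-rcnn):

```python
def find_bad_boxes(boxes, width, height):
    # flags boxes violating 0 <= x1 < x2 < width and 0 <= y1 < y2 < height
    bad = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        if not (0 <= x1 < x2 < width and 0 <= y1 < y2 < height):
            bad.append(i)
    return bad

boxes = [(10, 10, 50, 80), (60, 5, 40, 90)]  # second box has x1 > x2
print(find_bad_boxes(boxes, width=640, height=480))  # [1]
```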

