YOLOv3: computing mAP on the COCO and VOC datasets

Computing mAP on the VOC dataset

1. First, download the VOC dataset

wget https://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
wget https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
wget https://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
tar xf VOCtrainval_11-May-2012.tar
tar xf VOCtrainval_06-Nov-2007.tar
tar xf VOCtest_06-Nov-2007.tar

This unpacks into a VOCdevkit folder in the current directory.
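For reference, the parts of the VOCdevkit layout that the later steps rely on look roughly like this:

VOCdevkit/
  VOC2007/
    Annotations/      # per-image XML labels
    ImageSets/Main/   # train/val/test image-id lists
    JPEGImages/       # the .jpg images
  VOC2012/            # same structure; only trainval is downloaded here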

2. Convert the VOC XML annotations to darknet's txt label format

Use the voc_label.py script from darknet's scripts directory:

import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join

sets=[('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]

classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]


def convert(size, box):
    # convert a VOC box (xmin, xmax, ymin, ymax) in pixels into the normalized
    # (x_center, y_center, width, height) format that darknet expects
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree=ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()

for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")

3. After running the script, listing the current directory should show the following files:

ls
2007_test.txt   VOCdevkit
2007_train.txt  voc_label.py
2007_val.txt    VOCtest_06-Nov-2007.tar
2012_train.txt  VOCtrainval_06-Nov-2007.tar
2012_val.txt    VOCtrainval_11-May-2012.tar

4. Merge the 2007 and 2012 lists

cat 2007_train.txt 2007_val.txt 2012_*.txt > train.txt

VOC2007 trainval plus VOC2012 trainval form the training set (train.txt); the VOC2007 test split (2007_test.txt) serves as the test set.
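As a quick sanity check (optional, and assuming the lists were generated exactly as above), the merged training list should contain 16551 images and the VOC2007 test list 4952:

wc -l train.txt 2007_test.txt
  16551 train.txt
   4952 2007_test.txt
  21503 total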

5. Edit the voc.data file

classes= 20
train  = <path-to-voc>/train.txt
valid  = <path-to-voc>/2007_test.txt
names = data/voc.names
backup = backup

6. Run validation on the test set

./darknet detector valid cfg/voc.data cfg/yolov3-voc.cfg backup/yolov3-voc_final.weights -out "" -gpu 0 -thresh .5

This writes one txt file per class (20 files in total) under darknet/results, containing the detection results on the test set.
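Each per-class file holds one detection per line in the plain VOC submission format that darknet writes: the image id, the confidence score, and the box corners in pixels (xmin ymin xmax ymax). An illustrative line from person.txt (the numbers are made up):

000012 0.871234 48.3 240.1 195.6 371.2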

7. Compute per-class AP and mAP

Adjust the file paths, then run the script below; it prints the AP for each class and the overall mAP. voc_eval here is the standard PASCAL VOC evaluation function (voc_eval.py, e.g. from the py-faster-rcnn repository's lib/datasets/ directory) placed next to this script. Running it creates an annots.pkl annotation cache in the current directory; if you need to re-run the script to recompute mAP later, delete annots.pkl first.

#coding=utf-8
from voc_eval import voc_eval

import os

current_path = os.getcwd()
results_path = current_path + "/results"
sub_files = os.listdir(results_path)

aps = []
for sub_file in sub_files:
    if not sub_file.endswith(".txt"):   # skip anything that is not a per-class result file
        continue
    class_name = sub_file.split(".txt")[0]
    # detpath and annopath are format strings; voc_eval fills in {} itself
    rec, prec, ap = voc_eval('/.../darknet-master/results/{}.txt',
                             '/.../darknet-master/vocdata/VOCdevkit/VOC2007/Annotations/{}.xml',
                             '/.../darknet-master/vocdata/VOCdevkit/VOC2007/ImageSets/Main/test.txt',
                             class_name, '.')
    print("{} :\t {}".format(class_name, ap))
    aps.append(ap)

print("***************************")
print("mAP :\t {}".format(sum(aps) / len(aps)))

To compute the AP for a single class:

# encoding: utf-8
from voc_eval import voc_eval 
rec,prec,ap=voc_eval('/.../darknet-master/results/{}.txt', '/.../VOCdevkit/VOC2007/Annotations/{}.xml', '/.../VOCdevkit/VOC2007/ImageSets/Main/test.txt', 'person', '.')
print('rec',rec)
print('prec',prec)
print('ap',ap)
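The standard voc_eval.py also takes optional ovthresh and use_07_metric arguments (defaults 0.5 and False). Setting use_07_metric=True switches to the older 11-point interpolated AP of the original VOC devkit, which gives slightly different numbers. A minimal sketch, assuming that standard signature:

# compute the person AP with an IoU threshold of 0.5 and the 11-point VOC2007 metric
rec, prec, ap = voc_eval('/.../darknet-master/results/{}.txt',
                         '/.../VOCdevkit/VOC2007/Annotations/{}.xml',
                         '/.../VOCdevkit/VOC2007/ImageSets/Main/test.txt',
                         'person', '.',
                         ovthresh=0.5, use_07_metric=True)
print('ap', ap)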

References:

https://blog.csdn.net/amusi1994/article/details/81564504

https://pjreddie.com/darknet/yolo/

Computing mAP on the COCO dataset

1. First, download the COCO dataset

Open a terminal in the darknet-master directory and run:

cp scripts/get_coco_dataset.sh data
cd data
bash get_coco_dataset.sh

A coco folder containing all the data (roughly 29 GB) is created under darknet-master/data.

2. Edit the cfg/coco.data file

classes= 80
train  = data/coco/trainvalno5k.txt
valid  = data/coco/5k.txt
names = data/coco.names
backup = backup

3. Run validation to generate the JSON results file

./darknet detector valid cfg/coco.data cfg/yolov3.cfg yolov3.weights

This runs detection on the 5000 validation images and writes coco_results.json into the results folder.
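coco_results.json uses the standard COCO detection-results format: a JSON list with one entry per detection, holding the image id, category id, a [x, y, width, height] box in pixels, and a score. An illustrative entry (values made up):

[{"image_id": 139, "category_id": 1, "bbox": [412.8, 157.6, 53.1, 138.0], "score": 0.91}, ...]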

4. Run the following Python script to compute mAP

#-*- coding:utf-8 -*-
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import numpy as np
import skimage.io as io
import pylab, json

pylab.rcParams['figure.figsize'] = (10.0, 8.0)

def get_img_id(file_name):
    # collect the unique image_ids that appear in the detection results file
    annos = json.load(open(file_name, 'r'))
    return list({anno['image_id'] for anno in annos})

if __name__ == '__main__':
    annType = ['segm', 'bbox', 'keypoints']  # iouType can be 'segm', 'bbox' or 'keypoints'
    annType = annType[1]                     # 'bbox' for detection
    cocoGt_file = '/.../data/coco2014/annotations/instances_val2014.json'
    cocoGt = COCO(cocoGt_file)               # load the ground-truth annotations
    cocoDt_file = '/.../darknet/results/coco_results.json'
    imgIds = get_img_id(cocoDt_file)
    print(len(imgIds))
    cocoDt = cocoGt.loadRes(cocoDt_file)     # load the detection results
    imgIds = sorted(imgIds)                  # sort the image ids that have detections
    imgIds = imgIds[0:5000]                  # evaluate on at most 5000 images
    cocoEval = COCOeval(cocoGt, cocoDt, annType)
    cocoEval.params.imgIds = imgIds          # restrict evaluation to these images
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()

Change the paths in the script to match your own data locations.

Error: ImportError: No module named pycocotools.coco
Fix: the package is not installed; run pip install pycocotools.

Error: ImportError: No module named skimage.io
Fix: pip install scikit-image

5. Results (608x608 input)

$ python calcocomap.py    
loading annotations into memory...
Done (t=3.49s)
creating index...
index created!
4991
Loading and preparing results...
DONE (t=2.42s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=29.24s).
Accumulating evaluation results...
DONE (t=3.07s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.334
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.585
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.345
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.194
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.365
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.439
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.291
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.446
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.470
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.304
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.502
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.593

 

References:

https://blog.csdn.net/qq_33074561/article/details/81944897

https://blog.csdn.net/xidaoliang/article/details/88397280

https://blog.csdn.net/qq_33074561/article/details/81980494
