Part 1: Real-time object detection with an existing model
Step 1: Install the darknet framework following the official site: https://pjreddie.com/darknet/
Step 2:
Edit the Makefile. Since I am not using a GPU, I set GPU=0. Install OpenCV 3.2.0 (following my earlier OpenCV installation steps) and set OPENCV=1; without OpenCV installed, opening the camera will fail with an error.
GPU=0
CUDNN=0
OPENCV=1
OPENMP=0
DEBUG=0
After adjusting the Makefile for your machine's configuration, recompile:
cd darknet
make
Run the demo (in the second command, -c 1 selects camera index 1):
./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -c 1
Part 2: Train on your own image set. A useful reference: https://karbo.online/dl/yolo_starter/
Step 1: Download the required training data as described on the official site.
The scripts/ directory contains a file named voc_label.py, shown below; copy it into the darknet directory.
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join

sets = [('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]

classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()
for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")
The script reads the image IDs from the txt files under ImageSets/Main:

image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()

A Main txt file contains one image ID per line:
000012
000017
000023
000026
000032
000033
000034
000035
000036
The paths of the training images are written into 2007_train.txt by list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id)):
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000012.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000017.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000023.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000026.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000032.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000033.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000034.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000035.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000036.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000042.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000044.jpg
/home/utryjc/darknet/VOCdevkit/VOC2007/JPEGImages/000047.jpg
Each XML file records the image's annotation information; a detailed explanation of the fields can be found at https://arleyzhang.github.io/articles/1dc20586/. The script opens it with:

in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
<annotation>
    <folder>VOC2007</folder>
    <filename>000012.jpg</filename>
    <source>
        <database>The VOC2007 Database</database>
        <annotation>PASCAL VOC2007</annotation>
        <image>flickr</image>
        <flickrid>207539885</flickrid>
    </source>
    <owner>
        <flickrid>KevBow</flickrid>
        <name>?</name>
    </owner>
    <size>
        <width>500</width>
        <height>333</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>car</name>
        <pose>Rear</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>156</xmin>
            <ymin>97</ymin>
            <xmax>351</xmax>
            <ymax>270</ymax>
        </bndbox>
    </object>
</annotation>
The converted label is written by out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w'); for this sample image, out_file contains:
6 0.505 0.548048048048 0.39 0.51951951952
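The label line above can be reproduced by applying the convert function from voc_label.py to the bndbox values in the sample XML (class index 6 corresponds to "car" in the classes list). A minimal standalone check:

```python
# Standalone copy of voc_label.py's convert(): maps a VOC pixel box
# (xmin, xmax, ymin, ymax) to normalized YOLO (x_center, y_center, width, height).
def convert(size, box):
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1   # box center x (the -1 follows the original script)
    y = (box[2] + box[3]) / 2.0 - 1   # box center y
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

# Values from the sample 000012.jpg annotation: a 500x333 image with a car box.
bb = convert((500, 333), (156.0, 351.0, 97.0, 270.0))
print(6, bb)
```

The result matches the label line: x_center = (156+351)/2 - 1 = 252.5, divided by the width 500 gives 0.505, and similarly for the other three fields.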
Step 2: Prepare the pretrained weights file:
wget https://pjreddie.com/media/files/darknet53.conv.74
Step 3: Edit the configuration file cfg/voc.data:
classes= 20
train  = <path-to-voc>/train.txt
valid  = <path-to-voc>/2007_test.txt
names = data/voc.names
backup = backup
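The .data file is just one `key = value` pair per line, so it is easy to sanity-check from Python. A small sketch of a parser (the function name parse_data_cfg is my own, not part of darknet), exercised here on a temporary file with the voc.data contents shown above:

```python
import os
import tempfile

def parse_data_cfg(path):
    """Parse a darknet-style .data file into a dict of key -> value strings."""
    options = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            key, _, value = line.partition('=')
            options[key.strip()] = value.strip()
    return options

# Write the voc.data contents to a temp file and parse it back.
cfg_text = "classes= 20\ntrain = train.txt\nvalid = 2007_test.txt\nnames = data/voc.names\nbackup = backup\n"
with tempfile.NamedTemporaryFile('w', suffix='.data', delete=False) as tmp:
    tmp.write(cfg_text)
    path = tmp.name
opts = parse_data_cfg(path)
os.remove(path)
print(opts)
```

If `classes` here does not match the class count in the .cfg network file and the number of lines in voc.names, training will misbehave, so this kind of check is worth doing before a long run.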
The voc.names file lists one class name per line:
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor
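The class index in the first column of each label file is simply a position in this list, in the same order as the `classes` list in voc_label.py; for example, index 6 from the label line shown earlier maps back to "car". A minimal decoding sketch:

```python
# Class list in the same order as voc.names (and as `classes` in voc_label.py).
names = ["aeroplane", "bicycle", "bird", "boat", "bottle",
         "bus", "car", "cat", "chair", "cow",
         "diningtable", "dog", "horse", "motorbike", "person",
         "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

# Decode the label line shown earlier: class id followed by the
# normalized (x_center, y_center, width, height) box.
label_line = "6 0.505 0.548048048048 0.39 0.51951951952"
fields = label_line.split()
cls_name = names[int(fields[0])]
box = [float(v) for v in fields[1:]]
print(cls_name, box)  # car [...]
```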
Step 4: Train the model
Download the pretrained convolutional weights (the same file as in Step 2, if you haven't already): wget https://pjreddie.com/media/files/darknet53.conv.74
./darknet detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74
I'm not sure exactly why, but probably because my laptop has no GPU, training here was extremely slow, so I didn't wait for it to finish!
Reference: https://blog.csdn.net/lilai619/article/details/79695109