Nvidia Jetson Nano | TensorFlow-GPU | TensorFlow Object Detection API | MobileNet-SSD | training on your own dataset

References:

      https://www.cnblogs.com/leviatan/p/10740105.html

      https://www.cnblogs.com/gezhuangzhuang/p/10613468.html

For how to install tensorflow-gpu, see my earlier post:

      https://blog.csdn.net/ourkix/article/details/103577082

 

Contents

Download the files

Install dependencies (if you followed the earlier post, these should already be installed)

Install the object_detection package

Set PYTHONPATH

Verify that object_detection installed successfully

Training your own dataset

1. Inside your VOC-format data folder, create train_test_split.py to split the XML annotations into train, test, and validation sets stored under the Annotations folder: trainval is 80% of the data, test is 20%, and train is 80% of trainval. Code below:

2. Convert the XML files to CSV with xml_to_csv.py, and put the generated CSV files in object_detection/data/

3. Generate the TFRecord files: create generate_tfrecord.py in the research directory

 

Training

1. In the object_detection/data folder, create the label-map file (labelmap.pbtxt), with one id per object class you want to detect. Code below:

2. Configure the pipeline config: find object_detection/samples/config/ssd_mobilenet_v1_coco.config and copy it into the data folder. The modified file is shown below:

3. Download the pretrained model (if you use my uploaded files, it is already in object_detection/ssd_model/ssd_mobilenet)

4. Start training (train.py may be directly under object_detection, or under object_detection/legacy)

5. After training, run the export_inference_graph.py script to freeze the trained model into a TensorFlow .pb model; trained_checkpoint_prefix must be set to model.ckpt-[step], where step equals the number of training iterations

6. Test the model (create seahorse_ssd_detect.py in the object_detection directory)


 

 

Download the files

Download: https://github.com/tensorflow/models

You can also use my upload, which contains the dataset, the pretrained model, and test images. It is fairly large and split into multiple volumes; download them all and extract them together.

Download: https://download.csdn.net/download/ourkix/12068490

Download: https://download.csdn.net/download/ourkix/12068504

 

After downloading you get a models-master.zip file. Extract it and move the result into the following folder (press Ctrl+H in the file manager to show hidden files):

/home/nvidia/.local/lib/python3.6/site-packages/tensorflow

then rename it to models.
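From a terminal, something like this does the same (a sketch; the zip name and site-packages path are as above):

unzip models-master.zip
mv models-master /home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models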

 

If you use my upload instead, extraction gives a models folder with another models folder inside it; copy the inner models folder into the

/home/nvidia/.local/lib/python3.6/site-packages/tensorflow

folder.

Install dependencies (if you followed the earlier post, these should already be installed)

python3 -m pip install pillow --user
python3 -m pip install lxml --user
python3 -m pip install matplotlib --user
python3 -m pip install pandas --user

 

Now check whether protobuf is installed:

protoc --version

If you see

libprotoc 3.0.0

then it is installed.

If it is not installed:

sudo apt-get install -y python3-protobuf
# or use pip
python3 -m pip install protobuf --user

Enter the models/research/ directory and compile the protobuf files (this step may error out about a missing pandas package; just install it):

cd /home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research
protoc object_detection/protos/*.proto --python_out=.

Install the object_detection package

python3 setup.py build
python3 setup.py install
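A quick sanity check that the package is importable (a sketch, run from the research directory):

python3 -c "import object_detection; print('object_detection OK')"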

Set PYTHONPATH

Edit the .bashrc file:

sudo gedit ~/.bashrc

and append at the end:

export PYTHONPATH=$PYTHONPATH:/home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research
export PYTHONPATH=$PYTHONPATH:/home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research/slim

Save, then reload the environment:

source ~/.bashrc

Verify that object_detection installed successfully

cd /home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research
python3 object_detection/builders/model_builder_test.py

Now run a test object-detection script. The object_detection directory contains object_detection_tutorial.ipynb; instead of jupyter-notebook we use plain Python here, which is more convenient.

Create a new file, object-detection-tutorial.py:

touch object-detection-tutorial.py

Edit it and add:

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import matplotlib

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')

from utils import label_map_util

from utils import visualization_utils as vis_util


global output_num
global output_img_dic

matplotlib.use('TkAgg')

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

print(PATH_TO_LABELS)


# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

output_num = 1
output_img_dic = 'output_images'  # was r'\output_images', a Windows-style path that misbehaves on Linux










opener = urllib.request.URLopener()
print("--\n")
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
print("--\n")
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
  file_name = os.path.basename(file.name)
  if 'frozen_inference_graph.pb' in file_name:
    tar_file.extract(file, os.getcwd())

print("--\n")


detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.compat.v1.GraphDef()
  with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

print("--\n")

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

print("--\n")

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)






def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.compat.v1.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.compat.v1.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.compat.v1.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.compat.v1.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: np.expand_dims(image, 0)})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.uint8)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict




for image_path in TEST_IMAGE_PATHS:
  image = Image.open(image_path)
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = load_image_into_numpy_array(image)
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)
  # Actual detection.
  output_dict = run_inference_for_single_image(image_np, detection_graph)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
  plt.figure(figsize=IMAGE_SIZE)
  print(1,image_np)
  plt.imshow(image_np)
  plt.show()
  
  if not os.path.exists(output_img_dic):
      os.mkdir(output_img_dic)
  output_img_path = os.path.join(output_img_dic,str(output_num)+".png")
  plt.savefig(output_img_path)

Save and run:

python3 object-detection-tutorial.py

Wait for it to finish; it runs slowly on the Nano and also has to download the model, so give it 2-3 minutes.

 

Training your own dataset

    Generate the TFRecord files

The VOC dataset directory structure looks like this:

I created an ssd_model directory under object_detection and put VOCdevkit inside it. I provide the whole models folder (including the pretrained model, the seahorse dataset, and the test data), so you can follow my layout.

|--VOCdevkit

           |--VOC2007

                    |--Annotations

                    |--ImageSets

                              |--Layout

                              |--Main

                              |--Segmentation

                    |--JPEGImages

1. Inside your VOC-format data folder, create train_test_split.py to split the XML annotations into train, test, and validation sets stored under the Annotations folder: trainval is 80% of the data, test is 20%, and train is 80% of trainval. Code below:

import os  
import random  
import time  
import shutil

xmlfilepath=r'./Annotations'  
saveBasePath=r"./Annotations"

trainval_percent=0.8  
train_percent=0.8  
total_xml = os.listdir(xmlfilepath)  
num=len(total_xml)  
list=range(num)  
tv=int(num*trainval_percent)  
tr=int(tv*train_percent)  
trainval= random.sample(list,tv)  
train=random.sample(trainval,tr)  
print("train and val size",tv)  
print("train size",tr) 

start = time.time()

test_num=0  
val_num=0  
train_num=0  

for i in list:  
    name=total_xml[i]
    if i in trainval:  #train and val set 
        if i in train: 
            directory="train"  
            train_num += 1  
            xml_path = os.path.join(os.getcwd(), 'Annotations/{}'.format(directory))  
            if(not os.path.exists(xml_path)):  
                os.mkdir(xml_path)  
            filePath=os.path.join(xmlfilepath,name)  
            newfile=os.path.join(saveBasePath,os.path.join(directory,name))  
            shutil.copyfile(filePath, newfile)
        else:
            directory="validation"  
            xml_path = os.path.join(os.getcwd(), 'Annotations/{}'.format(directory))  
            if(not os.path.exists(xml_path)):  
                os.mkdir(xml_path)  
            val_num += 1  
            filePath=os.path.join(xmlfilepath,name)   
            newfile=os.path.join(saveBasePath,os.path.join(directory,name))  
            shutil.copyfile(filePath, newfile)

    else:
        directory="test"  
        xml_path = os.path.join(os.getcwd(), 'Annotations/{}'.format(directory))  
        if(not os.path.exists(xml_path)):  
            os.mkdir(xml_path)  
        test_num += 1  
        filePath=os.path.join(xmlfilepath,name)  
        newfile=os.path.join(saveBasePath,os.path.join(directory,name))  
        shutil.copyfile(filePath, newfile)

end = time.time()  
seconds=end-start  
print("train total : "+str(train_num))  
print("validation total : "+str(val_num))  
print("test total : "+str(test_num))  
total_num=train_num+val_num+test_num  
print("total number : "+str(total_num))  
print( "Time taken : {0} seconds".format(seconds))

2. Convert the XML files to CSV with xml_to_csv.py, and put the generated CSV files in object_detection/data/

import os  
import glob  
import pandas as pd  
import xml.etree.ElementTree as ET 

def xml_to_csv(path):  
    xml_list = []  
    for xml_file in glob.glob(path + '/*.xml'):  
        tree = ET.parse(xml_file)  
        root = tree.getroot()
        
        print(root.find('filename').text)  
        for member in root.findall('object'): 
            value = (root.find('filename').text,  
                int(root.find('size')[0].text),   #width  
                int(root.find('size')[1].text),   #height  
                member[0].text,  
                int(member[4][0].text),  
                int(float(member[4][1].text)),  
                int(member[4][2].text),  
                int(member[4][3].text)  
                )  
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)  
    return xml_df      

def main():  
    for directory in ['train','test','validation']:  
        xml_path = os.path.join(os.getcwd(), 'Annotations/{}'.format(directory))  

        xml_df = xml_to_csv(xml_path)  
        # xml_df.to_csv('whsyxt.csv', index=None)  
        xml_df.to_csv('/home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research/object_detection/data/seahorse_{}_labels.csv'.format(directory), index=None)  
        print('Successfully converted xml to csv.')

main()  
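This script likewise resolves Annotations/{train,test,validation} against the current directory, so run it from the same VOC2007 folder; the CSVs land in object_detection/data/ through the absolute path hardcoded above:

python3 xml_to_csv.py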

3. Generate the TFRecord files: create generate_tfrecord.py in the research directory

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

#Usage:
  # From tensorflow/models/
  # Create train data:
  #python generate_tfrecord.py --csv_input=data/tv_vehicle_labels.csv  --output_path=train.record
  # Create test data:
  #python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=test.record



import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

os.chdir('/home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research/')

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS


# TO-DO replace this with label map
def class_text_to_int(row_label):
    # all of your classes
    if row_label == 'seahorse':
        return 1
    else:
        return None

def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(os.getcwd(), 'object_detection/ssd_model/VOCdevkit/VOC2007/JPEGImages/')
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    num = 0
    for group in grouped:
        num += 1
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())
        if (num % 100 == 0):    # print progress every 100 conversions
            print(num)

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':
    tf.app.run()

The key part is row_label: add every class you annotated, and each row_label string must exactly match the name used when labeling in labelImg. Likewise, path is the path to your images.
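For instance, with more than one class, class_text_to_int grows one branch per label; 'starfish' here is a hypothetical second class, and the ids must line up with labelmap.pbtxt:

def class_text_to_int(row_label):
    if row_label == 'seahorse':
        return 1
    elif row_label == 'starfish':   # hypothetical second class
        return 2
    else:
        return None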

Before generating, edit the CSV files and change 'jpeg' to 'jpg' in all three; this was a filename-extension quirk of my images, and skipping the change causes an error.
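The edit can be done by hand or with a one-liner along these lines (a sketch; run from the research directory and double-check the CSVs afterwards):

sed -i 's/\.jpeg/\.jpg/g' object_detection/data/seahorse_*_labels.csv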

cd /home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research

python3 generate_tfrecord.py --csv_input=object_detection/data/seahorse_train_labels.csv --output_path=object_detection/data/seahorse_train.tfrecord

generate_tfrecord.py needs to live in the research directory, i.e. the parent of object_detection, because the script imports object_detection.utils; running the command from inside object_detection fails with "No module named object_detection".

Similarly, the following commands convert the validation and test sets to TFRecord format:

python3 generate_tfrecord.py --csv_input=object_detection/data/seahorse_validation_labels.csv --output_path=object_detection/data/seahorse_validation.tfrecord
python3 generate_tfrecord.py --csv_input=object_detection/data/seahorse_test_labels.csv --output_path=object_detection/data/seahorse_test.tfrecord

 

Training

1. In the object_detection/data folder, create the label-map file (labelmap.pbtxt), with one id per object class you want to detect. Code below:

item {
  id: 1    # ids are numbered starting from 1
  name: 'seahorse'
}
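To detect several kinds of targets, add one item per class with consecutively numbered ids; for example (the second class is hypothetical):

item {
  id: 1
  name: 'seahorse'
}
item {
  id: 2
  name: 'starfish'   # hypothetical second class
}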

2. Configure the pipeline config: find object_detection/samples/config/ssd_mobilenet_v1_coco.config and copy it into the data folder. The modified file is shown below:

# SSD with Mobilenet v1 configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  ssd {
# Modified: total number of classes (should match the number of ids in labelmap.pbtxt)
    num_classes: 2
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
# Modified: batch size. On the Nano, 4 stutters and runs out of memory (OOM) under the desktop GUI, while 2 just about runs; it fares better with the GUI disabled
  batch_size: 2
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
# Modified: initial learning rate
          initial_learning_rate: 0.0001
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
# Modified: pretrained model checkpoint
  fine_tune_checkpoint: "ssd_model/ssd_mobilenet/model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
# Modified: total number of training steps
  num_steps: 5000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
# Modified: training data (the seahorse_train.tfrecord generated earlier)
    input_path: "data/seahorse_train.tfrecord"
  }
# Modified: labelmap path
  label_map_path: "data/labelmap.pbtxt"
}

eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
# Modified: validation data
    input_path: "data/seahorse_validation.tfrecord"
  }
# Modified: labelmap path
  label_map_path: "data/labelmap.pbtxt"
  shuffle: false
  num_readers: 1
}

3. Download the pretrained model (if you use my uploaded files, it is already in object_detection/ssd_model/ssd_mobilenet)

Download ssd_mobilenet into the ssd_model/ directory, extract it, and rename it ssd_mobilenet:

ssd_mobilenet: http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz
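It can be fetched with wget, for instance (a sketch; starting from the research directory so the paths match the rest of this post):

cd object_detection/ssd_model
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz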

tar zxvf ssd_mobilenet_v1_coco_11_06_2017.tar.gz
mv ssd_mobilenet_v1_coco_11_06_2017 ssd_mobilenet

In ssd_mobilenet_v1_coco.config, set fine_tune_checkpoint to a path of the following form (already done above):

fine_tune_checkpoint: "ssd_model/ssd_mobilenet/model.ckpt"

 

Disable the GUI only when you train (depends on your platform; disable it if training will not run otherwise). PS: my Nano can just about train with the GUI on.

# Ubuntu: disable the graphical user interface
sudo systemctl set-default multi-user.target
sudo reboot
 
# Ubuntu: re-enable the graphical user interface
sudo systemctl set-default graphical.target

4. Start training (train.py may be directly under object_detection, or under object_detection/legacy)

python3 legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=data/ssd_mobilenet_v1_coco.config

5. After training completes, run the export_inference_graph.py script to freeze the trained model into a TensorFlow .pb model; trained_checkpoint_prefix must be set to model.ckpt-[step], where step equals the number of training iterations

python3 ./object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path ./object_detection/ssd_model/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix ./object_detection/training/model.ckpt-5000 --output_directory ./object_detection/ssd_model/model/

The resulting .pb model is written to the object_detection/ssd_model/model/ directory.
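If the export succeeded, the directory should contain roughly the following (the usual output layout of export_inference_graph.py):

ls object_detection/ssd_model/model/
# checkpoint  frozen_inference_graph.pb  model.ckpt.*  pipeline.config  saved_model/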

6. Test the model (create seahorse_ssd_detect.py in the object_detection directory)

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
import matplotlib
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

import cv2

if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')



from utils import label_map_util

from utils import visualization_utils as vis_util


global output_num
global output_img_dic

matplotlib.use('TkAgg')



# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH =  'ssd_model/model/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'labelmap.pbtxt')

print(PATH_TO_LABELS)


# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(3, 7) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

output_num = 1
output_img_dic = 'output_images'  # was r'\output_images', a Windows-style path that misbehaves on Linux












detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.compat.v1.GraphDef()
  with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

print("--\n")

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

print("--\n")

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)






def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.compat.v1.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.compat.v1.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.compat.v1.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.compat.v1.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: np.expand_dims(image, 0)})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.uint8)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict






def detect(imgfile):
    #origimg = cv2.imread(imgfile)
    image = Image.open(imgfile)

    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    plt.figure(figsize=IMAGE_SIZE)
    print(1,image_np) 

    cv2.imshow("SSD", image_np)
 
    k = cv2.waitKey(0) & 0xff
        #Exit if ESC pressed
    if k == 27 : return False
    return True

test_dir = "/home/nvidia/.local/lib/python3.6/site-packages/tensorflow/models/research/object_detection/seahorseImages"

for f in os.listdir(test_dir):
    if detect(test_dir + "/" + f) == False:
       break

  
#  if not os.path.exists(output_img_dic):
#      os.mkdir(output_img_dic)
#  output_img_path = os.path.join(output_img_dic,str(output_num)+".png")
#  plt.savefig(output_img_path)

Test it (any key shows the next image, ESC quits):

python3 seahorse_ssd_detect.py

 
