Training DeepLab v3 on your own images

1. DeepLab source code: https://github.com/tensorflow/models, under research/deeplab

(1) Download the source code

git clone https://github.com/tensorflow/models.git

(2) From the research/ directory, add the slim path to PYTHONPATH

export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

(Add the export line to ~/.bashrc if you want it to persist, then reload the file:)

source ~/.bashrc

(3) Test the setup

python deeplab/model_test.py

If the tests finish with an "OK" message (screenshot omitted), the environment is configured correctly.

2. Prepare the dataset (reference: https://blog.csdn.net/malvas/article/details/90776327)

(1) labelme

Download: https://github.com/wkentaro/labelme

(2) Annotate the objects to be segmented; labelme saves each annotation as a JSON file

(3) Convert the JSON files to VOC format

Go to labelme-master/examples/semantic_segmentation.

Delete the sample data in the data_annotated folder and put your own images and JSON files there, then edit labels.txt to list your own classes (see the example below).
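For reference, labels.txt is just a plain text file with one class name per line; in labelme's semantic_segmentation example the first two entries are __ignore__ and _background_. A hedged example for a dataset with three foreground classes (the class names below are placeholders):

__ignore__
_background_
class1
class2
class3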

Then run:

python labelme2voc.py data_annotated data_dataset_voc --labels labels.txt

This generates a data_dataset_voc folder with the converted data (the screenshot of its contents is omitted here).

Convert to grayscale masks

In the downloaded repository, go to models/research/deeplab/datasets and run:

# from models/research/deeplab/datasets
python remove_gt_colormap.py \
  --original_gt_folder="/path/SegmentationClassPNG" \
  --output_dir="/path/SegmentationClassRaw"
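As a quick sanity check (a minimal sketch, not part of the original tutorial), you can verify that the converted masks contain only small class indices, e.g. 0 for background, 1..3 for your classes, and 255 for ignore regions:

# Sanity check: the raw masks should only contain class indices (and 255 for ignore).
import os
import numpy as np
from PIL import Image

mask_dir = "/path/SegmentationClassRaw"  # the --output_dir used above
for name in sorted(os.listdir(mask_dir))[:5]:  # inspect the first few masks
    mask = np.array(Image.open(os.path.join(mask_dir, name)))
    print(name, mask.shape, np.unique(mask))  # expect something like [0 1 2 3] (plus 255)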

Convert to tfrecord

# from /root/models/research/deeplab/datasets/
python ./build_voc2012_data.py \
  --image_folder="/root/data/image" \
  --semantic_segmentation_folder="/root/data/mask" \
  --list_folder="/root/data/index" \
  --image_format="jpg" \
  --output_dir="/root/data/tfrecord"
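Note that the --list_folder directory above must already contain split files such as train.txt and val.txt, each listing image basenames (no extension), one per line. A minimal sketch to generate them beforehand, assuming the paths from the command above and an 80/20 train/val split:

# Generate train.txt / val.txt index files for build_voc2012_data.py.
import os
import random

image_dir = "/root/data/image"   # same as --image_folder above
index_dir = "/root/data/index"   # same as --list_folder above
os.makedirs(index_dir, exist_ok=True)

names = sorted(os.path.splitext(f)[0] for f in os.listdir(image_dir) if f.endswith(".jpg"))
random.seed(0)
random.shuffle(names)
split = int(len(names) * 0.8)  # 80% train, 20% val

for split_name, subset in [("train", names[:split]), ("val", names[split:])]:
    with open(os.path.join(index_dir, split_name + ".txt"), "w") as f:
        f.write("\n".join(subset) + "\n")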

3. Training

(1) Modify the training files

In models-master/research/deeplab/deprecated/segmentation_dataset.py, around line 110, add the following block:

_MYDATA_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 1500,  # number of training images
        'val': 300,  # number of validation images
    },
    num_classes=4,
    ignore_label=255,
)

Then, around line 113, register the dataset:

_DATASETS_INFORMATION = {
    'cityscapes': _CITYSCAPES_INFORMATION,
    'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
    'ade20k': _ADE20K_INFORMATION,
    'mydata': _MYDATA_INFORMATION,  # add your own dataset
}
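The split sizes in splits_to_sizes above must match the number of entries in the index files created earlier. A quick check (a small sketch, assuming the /root/data/index path used above):

# Count entries in the index files so splits_to_sizes matches reality.
for split in ("train", "val"):
    with open("/root/data/index/%s.txt" % split) as f:
        print(split, sum(1 for line in f if line.strip()))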

In models-master/research/deeplab/utils/train_utils.py, around line 210, change the exclude list so the logits layer is not restored from the checkpoint:

  # Variables that will not be restored.
  exclude_list = ['global_step','logits']
  if not initialize_last_layer:
    exclude_list.extend(last_layers)

In models-master/research/deeplab/train.py, around line 157, change the flag defaults:

flags.DEFINE_boolean('initialize_last_layer', False,
                     'Initialize the last layer.')

flags.DEFINE_boolean('last_layers_contain_logits_only', True,
                     'Only consider logits as last layers or not.')

Training command

# from /root/models/research/
python deeplab/train.py \
    --logtostderr \
    --num_clones=1 \
    --training_number_of_steps=100000 \
    --train_split="train" \
    --model_variant="xception_71" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --train_crop_size=513,513 \
    --train_batch_size=3 \
    --dataset="mydata" \
    --fine_tune_batch_norm=True \
    --tf_initial_checkpoint='./deeplab/backbone/xception_71/model.ckpt' \
    --train_logdir='./deeplab/train/' \
    --dataset_dir='./deeplab/data/tfrecord/'

Here model_variant selects the pretrained backbone; checkpoints can be downloaded from the model zoo: https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

If training starts correctly, the log shows the per-step loss:

...
INFO:tensorflow:global step 98250: loss = 1.9128 (0.731 sec/step)
INFO:tensorflow:global step 98260: loss = 3.2374 (0.740 sec/step)
INFO:tensorflow:global step 98270: loss = 1.3137 (0.736 sec/step)
INFO:tensorflow:global step 98280: loss = 3.3541 (0.732 sec/step)
INFO:tensorflow:global step 98290: loss = 1.1512 (0.740 sec/step)
INFO:tensorflow:global step 98300: loss = 1.8416 (0.735 sec/step)
INFO:tensorflow:global step 98310: loss = 1.5447 (0.753 sec/step)
...

4. Testing

(1) Command

python deeplab/vis.py \
    --logtostderr \
    --vis_split="val" \
    --model_variant="xception_71" \
    --atrous_rates=6 \
    --atrous_rates=12 \
    --atrous_rates=18 \
    --output_stride=16 \
    --decoder_output_stride=4 \
    --vis_crop_size=512,640 \
    --dataset="mydata" \
    --colormap_type="pascal" \
    --checkpoint_dir='./deeplab/train/' \
    --vis_logdir='./deeplab/vis/' \
    --dataset_dir='./deeplab/data/tfrecord/'

Note: set vis_crop_size to the actual size of your images; DeepLab's crop-size flags are documented as [height, width], so height comes first.

Dataset download:

5. Export the model as a .pb file (reference: http://www.pythonheidong.com/blog/article/10596/#deeplab_pdfrozen_to_pd)

python deeplab/export_model.py \
  --logtostderr \
  --checkpoint_path="./deeplab/train/model.ckpt-30000" \
  --export_path="./model.pb" \
  --model_variant="xception_71" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --num_classes=3 \
  --crop_size=512,640 \
  --inference_scales=1.0

The crop size here should be the actual size of the images you will run prediction on. The following script loads the exported .pb graph and runs prediction on a folder of images:

import tensorflow as tf
import numpy as np
import cv2 as cv
import os
from keras.preprocessing.image import load_img, img_to_array
from matplotlib import pyplot as plt


img_path = "..."  # folder containing the original images
graph_path = ".../.../.pb"  # path to the exported .pb model
pre_path = "..."  # folder where the predicted masks will be saved

graph = tf.Graph()
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
graph_def = None
with tf.gfile.FastGFile(graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

if graph_def is None:
    raise RuntimeError('Cannot find inference graph in tar archive.')

with graph.as_default():
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=graph)

for filename in os.listdir(img_path):
    prename = filename[0:-4] + ".png"   # save each prediction as a .png mask
    file_path = img_path + "/" + filename
    save_path = pre_path + '/' + prename
    img = load_img(file_path)
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0).astype(np.uint8)

    result = sess.run(
        OUTPUT_TENSOR_NAME,
        feed_dict={INPUT_TENSOR_NAME: img})

    cv.imwrite(save_path, result.transpose((1, 2, 0)).astype(np.uint8))  # (1, H, W) -> (H, W, 1) class-index mask

The predicted masks look almost completely black because the pixel values are just small class indices; multiplying them by 100 makes them visible:

# -*- coding: utf-8 -*-
# @Time    : 2019/11/20 14:22
# @Author  : Don
# @File    : read.py
# @Software: PyCharm
import cv2
img=cv2.imread("2.png")
img=img*100
cv2.namedWindow('contours', 0)
cv2.imshow("contours", img)

cv2.waitKey(0)
cv2.destroyAllWindows()
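Alternatively, a minimal sketch that colors each class index with a matplotlib colormap instead of scaling the values (reusing the placeholder file name "2.png" from above):

# Color the class-index mask with a colormap instead of multiplying it.
import cv2
from matplotlib import pyplot as plt

mask = cv2.imread("2.png", cv2.IMREAD_GRAYSCALE)  # values are class indices 0..num_classes-1
plt.imshow(mask, cmap="nipy_spectral")
plt.colorbar()
plt.show()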