The accuracy of a trained Mask-RCNN model can be measured numerically by computing its mAP value. The code below averages the AP over 10 validation images; increase this number for a more reliable estimate.
1. Computing the mAP value
P: precision;
R: recall.
PR curve: the two-dimensional curve with precision on the vertical axis and recall on the horizontal axis.
AP: Average Precision, i.e. the area under the PR curve for a single evaluation sample.
mAP: Mean Average Precision, i.e. the AP averaged over multiple validation samples.
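As a concrete illustration of the definitions above, AP can be computed from a score-sorted list of detections with plain NumPy. This is a minimal sketch of the VOC-style interpolation scheme, not the library's actual `utils.compute_ap`; the function name and inputs are illustrative:

```python
import numpy as np

def compute_ap_sketch(matches, num_gt):
    """VOC-style AP from a score-sorted list of predictions.

    matches : sequence of 1/0, where 1 means the i-th highest-scoring
              prediction matched a ground-truth box (IoU >= 0.5).
    num_gt  : total number of ground-truth boxes.
    """
    matches = np.asarray(matches, dtype=float)
    tp = np.cumsum(matches)                              # true positives so far
    precision = tp / np.arange(1, len(matches) + 1)      # TP / predictions so far
    recall = tp / num_gt                                 # TP / all ground truth
    # Pad the curves, then take the running maximum of precision from the
    # right so the PR curve is monotonically decreasing before integrating.
    precision = np.concatenate([[0.0], precision, [0.0]])
    recall = np.concatenate([[0.0], recall, [1.0]])
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Integrate precision over the recall steps (area under the PR curve).
    idx = np.where(recall[1:] != recall[:-1])[0] + 1
    ap = np.sum((recall[idx] - recall[idx - 1]) * precision[idx])
    return precision, recall, ap
```

For example, with `matches=[1, 1, 0, 1]` and 3 ground-truth boxes, the recall steps contribute (1/3)·1 + (1/3)·1 + (1/3)·0.75 ≈ 0.917. mAP is then simply the mean of such AP values over the validation images.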
2. Mask-RCNN code for computing mAP
When computing mAP, Mask-RCNN locates the trained .h5 weights file in the logs directory and loads it for testing. Append the following code to the end of train_shapes.ipynb:
train_shapes.ipynb download: https://download.csdn.net/download/yql_617540298/10546011
(This notebook is modified from the original Mask-RCNN train_shapes.ipynb so that it can train on your own dataset.)
Problems you may run into when training Mask-RCNN on your own dataset are covered in an earlier post:
https://blog.csdn.net/yql_617540298/article/details/81078405
# mAP
# Compute VOC-style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
class InferenceConfig(ShapesConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
                          config=inference_config,
                          model_dir=MODEL_DIR)

# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()

# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)

# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
    modellib.load_image_gt(dataset_val, inference_config,
                           image_id, use_mini_mask=False)

log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)

# Show the ground truth, then the model's predictions, side by side
visualize.display_instances1(original_image, gt_bbox, gt_mask, gt_class_id,
                             dataset_train.class_names, figsize=(8, 8))

results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances1(original_image, r['rois'], r['masks'], r['class_ids'],
                             dataset_val.class_names, r['scores'], ax=get_ax())

# Compute VOC-style mAP over 10 validation images
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
    # Load image and ground truth data
    image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        modellib.load_image_gt(dataset_val, inference_config,
                               image_id, use_mini_mask=False)
    molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
    # Run object detection
    results = model.detect([image], verbose=0)
    r = results[0]
    # Compute AP
    AP, precisions, recalls, overlaps =\
        utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
                         r["rois"], r["class_ids"], r["scores"], r['masks'])
    APs.append(AP)

print("mAP: ", np.mean(APs))
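The `utils.compute_ap` call above matches predictions to ground truth at an IoU threshold of 0.5. The box IoU underlying that matching can be sketched in plain Python as follows (an illustrative helper, not the library's own overlap function):

```python
def box_iou(box_a, box_b):
    """IoU of two boxes in (y1, x1, y2, x2) order, as used by Mask-RCNN."""
    # Intersection rectangle (empty if the boxes do not overlap)
    y1 = max(box_a[0], box_b[0])
    x1 = max(box_a[1], box_b[1])
    y2 = min(box_a[2], box_b[2])
    x2 = min(box_a[3], box_b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    # Union = sum of areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction whose best `box_iou` against any unmatched ground-truth box of the same class reaches 0.5 counts as a true positive; everything else counts against precision.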