(12) Object Detection with YOLOv3 Using OpenCV's dnn Module

0. Notes:

Tested OpenCV version: 3.4.5

CPU: Intel 4th-generation i5 (4200U)

1. Introduction to YOLO:

YOLO explained in detail (Zhihu)

2. Download the YOLOv3 configuration files:

wget https://github.com/pjreddie/darknet/blob/master/data/coco.names?raw=true -O ./coco.names
wget https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg?raw=true -O ./yolov3.cfg
wget https://pjreddie.com/media/files/yolov3.weights

3. C++ code:

#include <iostream>
#include <fstream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/dnn/shape_utils.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(cv::Mat& frame, std::vector<cv::Mat>& outs);

// Get the names of the output layers
std::vector<cv::String> getOutputsNames(const cv::dnn::Net& net);

// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, cv::Mat& frame);

// Initialize the parameters
float confThreshold = 0.5; // Confidence threshold
float nmsThreshold = 0.4;  // Non-maximum suppression threshold
int inpWidth = 416;        // Width of network's input image
int inpHeight = 416;       // Height of network's input image

std::vector<std::string> classes;

int main(int argc, char** argv)
{
    // Load names of classes
    std::string classesFile = "/home/alan/Desktop/yolov3/coco.names";
    std::ifstream classNamesFile(classesFile.c_str());
    if (classNamesFile.is_open())
    {
        std::string className = "";
        while (std::getline(classNamesFile, className))
            classes.push_back(className);
    }
    else{
        std::cout<<"can not open classNamesFile"<<std::endl;
    }

    // Give the configuration and weight files for the model
    cv::String modelConfiguration = "/home/alan/Desktop/yolov3/yolov3.cfg";
    cv::String modelWeights = "/home/alan/Desktop/yolov3/yolov3.weights";

    // Load the network
    cv::dnn::Net net = cv::dnn::readNetFromDarknet(modelConfiguration, modelWeights);
    std::cout<<"Read Darknet..."<<std::endl;
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    // Process frames.
    std::cout <<"Processing..."<<std::endl;
    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cv::waitKey(10) != 27)
    {
        // Get a frame from the camera
        cap >> frame;
        if (frame.empty()) break; // stop when the camera returns no frame

        // Show the raw frame
        cv::imshow("frame", frame);

        // Create a 4D blob from a frame.
        cv::Mat blob;
        cv::dnn::blobFromImage(frame, blob, 1/255.0, cv::Size(inpWidth, inpHeight), cv::Scalar(0,0,0), true, false);

        //Sets the input to the network
        net.setInput(blob);

        // Runs the forward pass to get output of the output layers
        std::vector<cv::Mat> outs;
        net.forward(outs, getOutputsNames(net));

        // Remove the bounding boxes with low confidence
        postprocess(frame, outs);

        // Put efficiency information. The function getPerfProfile returns the
        // overall time for inference(t) and the timings for each of the layers(in layersTimes)
        std::vector<double> layersTimes;
        double freq = cv::getTickFrequency() / 1000;
        double t = net.getPerfProfile(layersTimes) / freq;
        std::string label = cv::format("Inference time for a frame : %.2f ms", t);
        cv::putText(frame, label, cv::Point(0, 15), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 255));

        // Write the frame with the detection boxes
        cv::Mat detectedFrame;
        frame.convertTo(detectedFrame, CV_8U);
        //show detectedFrame
        cv::imshow("detectedFrame",detectedFrame);
    }
    cap.release();
    std::cout<<"Esc..."<<std::endl;
    return 0;
}

// Get the names of the output layers
std::vector<cv::String> getOutputsNames(const cv::dnn::Net& net)
{
    static std::vector<cv::String> names;
    if (names.empty())
    {
        //Get the indices of the output layers, i.e. the layers with unconnected outputs
        std::vector<int> outLayers = net.getUnconnectedOutLayers();

        //get the names of all the layers in the network
        std::vector<cv::String> layersNames = net.getLayerNames();

        // Get the names of the output layers in names
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
            names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}

// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(cv::Mat& frame, std::vector<cv::Mat>& outs)
{
    std::vector<int> classIds;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;

    for (size_t i = 0; i < outs.size(); ++i)
    {
        // Scan through all the bounding boxes output from the network and keep only the
        // ones with high confidence scores. Assign the box's class label as the class
        // with the highest score for the box.
        float* data = (float*)outs[i].data;
        for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
        {
            cv::Mat scores = outs[i].row(j).colRange(5, outs[i].cols);
            cv::Point classIdPoint;
            double confidence;
            // Get the value and location of the maximum score
            cv::minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);

            if (confidence > confThreshold)
            {
                int centerX = (int)(data[0] * frame.cols);
                int centerY = (int)(data[1] * frame.rows);
                int width = (int)(data[2] * frame.cols);
                int height = (int)(data[3] * frame.rows);
                int left = centerX - width / 2;
                int top = centerY - height / 2;

                classIds.push_back(classIdPoint.x);
                confidences.push_back((float)confidence);
                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }
    }

    // Perform non maximum suppression to eliminate redundant overlapping boxes with
    // lower confidences
    std::vector<int> indices;
    cv::dnn::NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);
    for (size_t i = 0; i < indices.size(); ++i)
    {
        int idx = indices[i];
        cv::Rect box = boxes[idx];
        drawPred(classIds[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
    }
}

// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, cv::Mat& frame)
{
    //Draw a rectangle displaying the bounding box
    cv::rectangle(frame, cv::Point(left, top), cv::Point(right, bottom), cv::Scalar(0, 0, 255));

    //Get the label for the class name and its confidence
    std::string label = cv::format("%.2f", conf);
    if (!classes.empty())
    {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ":" + label;
    }
    else
    {
        std::cout<<"classes is empty..."<<std::endl;
    }

    //Display the label at the top of the bounding box
    int baseLine;
    cv::Size labelSize = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    top = std::max(top, labelSize.height);
    cv::putText(frame, label, cv::Point(left, top), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255,255,255));
}
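
To build and run the demo, a command along these lines should work (a sketch, assuming OpenCV 3.4.5 is installed and visible to pkg-config; adjust the source file name and paths to your setup):

g++ main.cpp -o yolov3_demo `pkg-config --cflags --libs opencv`
./yolov3_demo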

4. Analysis:

(1) The cv::dnn::blobFromImage function:

Function prototype:

  CV_EXPORTS_W Mat blobFromImage(InputArray image, double scalefactor=1.0, const Size& size = Size(),
                                   const Scalar& mean = Scalar(), bool swapRB=true, bool crop=true);

Parameters:

The first parameter, image, is the input image; an ordinary OpenCV cv::Mat works.

The second parameter, scalefactor, is important: if the network was trained on inputs normalized to [0, 1], this should be 1/255.0 (about 0.0039), as in the code above; otherwise use 1.0.

The third parameter, size, should match the input image size the network was trained with.

The fourth parameter, mean, is mainly relevant for Caffe models, which often subtract the mean of the training data; TensorFlow models generally do not use a mean file.

The fifth parameter, swapRB, controls whether the first and last image channels are swapped (i.e., BGR to RGB).

The sixth parameter, crop, applies when the input image size differs from size: if true, the image is resized preserving the aspect ratio and then center-cropped; if false, it is resized directly to size without preserving the aspect ratio.
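
As a concrete illustration, the call in the code above maps onto these parameters as follows (a sketch; frame is assumed to be a BGR cv::Mat captured from the camera):

cv::Mat blob;
cv::dnn::blobFromImage(frame, blob,
                       1/255.0,             // scalefactor: normalize pixels to [0, 1]
                       cv::Size(416, 416),  // size: the network's input size
                       cv::Scalar(0, 0, 0), // mean: YOLOv3 subtracts no mean
                       true,                // swapRB: swap channels (BGR -> RGB)
                       false);              // crop: plain resize, no center crop
// The result is a 4D blob of shape [1, 3, 416, 416] (NCHW).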

(2) The cv::dnn::Net::forward function:

forward has four overloads, each with a different role; a short usage sketch follows the list:

  • First:
Mat cv::dnn::Net::forward(const String & outputName = String())

This overload takes just a layer name and returns a cv::Mat: the output of the first layer with that name. With the default empty name, it returns the output of the whole network (the last layer).

  • Second:
void cv::dnn::Net::forward(OutputArrayOfArrays outputBlobs,
                           const String & outputName = String())

This overload returns void and delivers its results as blobs through outputBlobs. outputName is again a layer name, but here outputBlobs receives all outputs of the layers with that name rather than just the first; if the name occurs several times, several outputs are provided.

  • Third:
void cv::dnn::Net::forward(OutputArrayOfArrays outputBlobs,
                           const std::vector<String> & outBlobNames)

This overload returns void as well. outBlobNames is a vector of layer names, so several layers can be requested at once; the first output of each named layer is placed into outputBlobs.

  • Fourth:
void cv::dnn::Net::forward(std::vector<std::vector<Mat>> & outputBlobs,
                           const std::vector<String> & outBlobNames)

This is the most general overload. It returns void, takes a vector of layer names in outBlobNames, and writes all outputs of every named layer into outputBlobs, which is of type std::vector<std::vector<Mat>>.
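
A minimal usage sketch of two of these overloads (assuming net is the cv::dnn::Net loaded above; the single layer name below is illustrative, not taken from the source):

// First overload: one Mat, the output of the named layer
// (with an empty name, the output of the last layer).
cv::Mat single = net.forward("yolo_82"); // illustrative layer name

// Third overload: the one this demo uses; one Mat per requested
// output layer, here the three YOLO detection layers.
std::vector<cv::Mat> outs;
net.forward(outs, getOutputsNames(net));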

(3) Parsing the output of the forward pass:

Each Mat in outs is an N x 85 matrix, one per YOLO output layer.

Row layout: the first 4 elements encode the box position on the image (normalized center x, center y, width and height); the 5th element is the objectness confidence, with values in [0, 1], which is compared against the threshold to decide whether to keep the box; the remaining 80 elements are scores for the 80 COCO classes, and the largest one determines the predicted class.

Where does N come from? YOLOv3 predicts at three scales: for a 416x416 input the grids are 13x13, 26x26 and 52x52, with 3 anchor boxes per grid cell, so the three output matrices have 13x13x3 = 507, 26x26x3 = 2028 and 52x52x3 = 8112 rows. Each row is one candidate box.
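
Condensed, the per-row decoding in postprocess() looks like this (a sketch: out is one of the N x 85 CV_32F matrices in outs, r is a row index, and frame is the current camera frame):

const float* data = out.ptr<float>(r);
int centerX = (int)(data[0] * frame.cols); // normalized center x -> pixels
int centerY = (int)(data[1] * frame.rows); // normalized center y -> pixels
int width   = (int)(data[2] * frame.cols); // normalized width    -> pixels
int height  = (int)(data[3] * frame.rows); // normalized height   -> pixels
float objectness = data[4];                // compared against confThreshold
// data[5] .. data[84]: scores for the 80 COCO classes;
// the index of the largest score is the predicted class id.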

5. Results:

References:

Deep Learning based Object Detection using YOLOv3 with OpenCV ( Python / C++ )

YOLOv3 C++ demo on OpenCV 4.0.0 / OpenCV 3.4.2
