Environment requirements: CUDA 9.0 (keep this consistent with the CUDA version used to compile Caffe), OpenCV 3.1 (note: I tried 3.4 and 4.0 myself and both failed; 3.4 compiles, but resize produces wrong results), Ubuntu 16.04.
The post-processing code used here is provided by https://github.com/wzj5133329/retinaface_caffe.
1. First, prepare a working, already-compiled Caffe. I used the version from https://github.com/eric612/MobileNet-YOLO, but any Caffe fork that runs correctly will do; https://github.com/weiliu89/caffe is recommended. Compile Caffe first (there are plenty of tutorials online covering this).
2. Install OpenCV 3.1.0 from the source release; installation guides are easy to find online.
3. Download and extract the retinaface_caffe code. It can be placed anywhere; it does not need to sit inside the Caffe directory.
4. Modify the Makefile as follows:
.PHONY: all test clean deps tags
CXX=g++
CXXFLAGS += -g -Wall -O -std=c++11
Original: OPENCVLIBS = `pkg-config opencv --cflags --libs` (i.e., the OpenCV flags are taken from whatever pkg-config finds in the environment)
Change to (explicit paths):
OPENCVLIBS = -L/home/sh00245/opencv/opencv3.1.0/lib -lopencv_cudabgsegm -lopencv_cudaobjdetect -lopencv_cudastereo -lopencv_shape -lopencv_stitching -lopencv_cudafeatures2d -lopencv_superres -lopencv_cudacodec -lopencv_videostab -lopencv_cudaoptflow -lopencv_cudalegacy -lopencv_calib3d -lopencv_features2d -lopencv_objdetect -lopencv_highgui -lopencv_videoio -lopencv_photo -lopencv_imgcodecs -lopencv_cudawarping -lopencv_cudaimgproc -lopencv_cudafilters -lopencv_video -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_cudaarithm -lopencv_core -lopencv_cudev
Original: DEPS_INCLUDE_PATH= $(DLIB_PATH) -I /usr/local/cuda-10.0/include/ -I /home/asd/Project/MobileNet-YOLO-master2/include
Change to (add the OpenCV header path and the Caffe header paths):
DEPS_INCLUDE_PATH= $(DLIB_PATH) -I /home/sh00245/opencv/opencv3.1.0/include/opencv -I /home/sh00245/opencv/opencv3.1.0/include -I /opt/cuda/cuda-9.0_cudnn75/include/ -I /home/sh00245/py2env/MobileNet-YOLO-app/include -I /home/sh00245/py2env/MobileNet-YOLO-app/build/include
TARGET = retinaface
Original: LIBS= -lboost_system -lcaffe -lglog -lprotobuf -lcudart -lgflags
Change to (add the CUDA and Caffe library paths):
LIBS= -lboost_system -lcaffe -lglog -lprotobuf -lcudart -lgflags -L /home/sh00245/py2env/MobileNet-YOLO-app/build/lib -L /opt/cuda/cuda-9.0_cudnn75/lib64
OBJS := $(patsubst %.cpp,%.o,$(wildcard *.cpp))
$(TARGET): $(OBJS)
$(CXX) -o $@ $^ $(LIBS) $(OPENCVLIBS) $(DEPS_LIB_PATH)
%.o:%.cpp
$(CXX) -c $(CXXFLAGS) $< $(DEPS_INCLUDE_PATH)
clean:
rm -f *.o $(TARGET)
5. Modify main.cpp:
#include "detect.h"

using namespace std;

int main(int argc, char** argv) {
    const string proto = "mnet.prototxt";
    const string model = "mnet.caffemodel";
    const float confidence = 0.5;      // detection score threshold
    const float nms_threshold = 0.4;   // IoU threshold for NMS
    const string cpu_mode = "gpu";     // run on GPU
    Detector detector(proto, model, confidence, nms_threshold, cpu_mode);

    cv::VideoCapture video_frame;
    //video_frame.open(0);
    cv::Mat img;
    //while(video_frame.read(img))
    //cv::Mat input = img.clone();
    img = cv::imread("demo.jpg");
    cout << "channels" << img.channels() << endl;
    //cv::resize(img,img,cv::Size(1920,1080));
    cout << "img10" << img.rows << endl;

    uint64_t time1 = current_timestamp();
    std::vector<Anchor> result = detector.Detect(img);
    for (size_t i = 0; i < result.size(); i++) {
        // finalbox's width/height fields are used here as the bottom-right corner
        cv::rectangle(img,
                      cv::Point((int)result[i].finalbox.x, (int)result[i].finalbox.y),
                      cv::Point((int)result[i].finalbox.width, (int)result[i].finalbox.height),
                      cv::Scalar(0, 255, 255), 2, 8, 0);
        // draw the five facial landmarks
        for (int j = 0; j < 5; j++)
            cv::circle(img, result[i].pts[j], 10, cv::Scalar(0, 255, 0), -1);
    }
    uint64_t time2 = current_timestamp();
    cout << "time:" << time2 - time1 << endl;

    cv::namedWindow("show", CV_WINDOW_NORMAL);
    cv::resizeWindow("show", 960, 540);
    cv::imshow("show", img);
    cv::waitKey(0);
    cv::imwrite("result.jpg", img);
    return 0;
}
Remove the original infinite while(true) loop.
6. cd into the retinaface_caffe directory and run make.
7. After compilation finishes, run ./retinaface from the retinaface_caffe directory.
If the steps above are followed exactly, the project should compile without problems.
Errors I ran into previously:
a. ‘cv::Rect2f’ has not been declared
error: ‘Rect2f’ in namespace ‘cv’ does not name a type
This is an OpenCV version problem: versions below 3.0 do not define cv::Rect2f.
b. Segmentation fault (core dumped) when running the successfully compiled binary
With OpenCV 3.4 the build succeeds, but running it crashes with this error. The cause is that the resize in detect.cpp does not produce correct output. If you remove that resize and instead resize the image externally to the input size defined in the prototxt before running, the crash goes away, but the results are still wrong. Switching to OpenCV 3.1 resolved the problem.