Classification of Stereo Matching Algorithms
1. Stereo matching algorithms implemented in OpenCV fall into two classes: global matching and local matching. Global matching is more accurate, but its drawback is slow computation and poor real-time performance. Local matching splits into region-based and feature-based matching; local matching is fast enough for real-time use but less accurate than global matching, and feature-based matching yields only a sparse disparity map, which must be densified by interpolation to reconstruct the full disparity image;
2. Global matching: OpenCV implements SGBM (semi-global block matching); the GC (graph cuts) algorithm is not provided in OpenCV 3.0 and later;
3. Region-based local matching: OpenCV implements BM (block matching);
4. Feature-based local matching: OpenCV provides 10 feature detection/matching methods:
1. FAST -- FastFeatureDetector;
2. STAR -- StarFeatureDetector;
3. SIFT -- Scale-Invariant Feature Transform;
4. SURF -- Speeded-Up Robust Features;
5. ORB
6. MSER
7. GFTT
8. HARRIS
9. Dense
10. SimpleBlob
SURF Feature Point Detection: How It Works
1. SURF (Speeded-Up Robust Features), literally "an accelerated, robust feature algorithm", is a faster variant of the Scale-Invariant Feature Transform (SIFT): it computes faster and is more stable across multiple images. SURF's key idea is the use of Haar-wavelet responses together with the integral image, which greatly reduces running time;
2. SURF algorithm outline:
① build the Hessian matrix and construct a Gaussian-pyramid scale space;
② use non-maximum suppression to locate candidate feature points;
③ refine the extrema (3D linear interpolation yields sub-pixel feature points);
④ assign each feature point a dominant orientation:
SIFT assigns the dominant orientation by building a gradient-orientation histogram over the feature point's neighborhood and taking the bin with the largest value, plus every direction whose bin exceeds 80% of that maximum, as the dominant orientation(s);
SURF does not build a gradient histogram; instead it computes Haar-wavelet responses over the feature point's neighborhood and takes the direction of the sector with the largest summed response as the dominant orientation;
⑤ build the SURF feature descriptor (Descriptor):
SIFT takes a 16×16 neighborhood around the feature point, divides it into 4×4 sub-regions, and accumulates an 8-bin gradient histogram in each, giving a 4×4×8 = 128-dimensional vector as the SIFT descriptor;
SURF takes a 20s×20s window around the feature point (s is the scale at which the point was detected), divides it into 16 sub-regions, and computes 4 Haar-wavelet statistics in each, so each descriptor is a 16×4 = 64-dimensional vector. At half the length of a SIFT descriptor, feature matching is much faster;
3. In OpenCV 3.0 and later, SURF, SurfFeatureDetector, and SurfDescriptorExtractor are the same type (typedef aliases within the class hierarchy);
4. Commonly used SurfFeatureDetector methods: detect() (find feature points) and compute() (compute the feature points' descriptors);
Drawing Keypoints and the KeyPoint Class
1. The drawKeypoints() function: its fifth parameter, flags, selects how keypoints are rendered (note the different effects of each value)
2. The KeyPoint class (its feature information is described by a descriptor):
A keypoint is an extension of the corner concept: it encodes information from a small local patch of the image so that the keypoint is highly distinguishable. The keypoint's descriptive information is summarized in the form of a descriptor, whose dimensionality is usually much lower than that of the pixel patch that formed the keypoint. (from Learning OpenCV 3)
_size : diameter of the feature point's neighborhood
_angle : feature point orientation in [0, 360) (-1 means unused)
_response : keypoint strength;
_octave : the image-pyramid octave in which the keypoint was found;
_class_id : an id usable for clustering;
3. SURF and SIFT belong to OpenCV's nonfree modules, so the opencv_contrib extra modules must be installed; (when building with CMake, make sure the target platform is set to x64!)
SURF Feature Point Detection: demo1
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d; // bring SURF into scope

int main(int argc, char** argv)
{
    Mat src = imread("00.jpg", IMREAD_COLOR);
    if (src.empty())
    {
        cout << "could not load the image..." << endl;
        return -1;
    }
    imshow("input", src);

    int minHessian = 1000;
    Ptr<SURF> detector = SURF::create(minHessian);
    vector<KeyPoint> keypoint_1;
    detector->detect(src, keypoint_1); // detect SURF keypoints

    // draw the keypoints
    Mat img_keypoint_1;
    drawKeypoints(src, keypoint_1, img_keypoint_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT);

    // show the keypoints
    imshow("detect", img_keypoint_1);
    waitKey(0);
    return 0;
}
SURF Descriptor Computation and Feature Matching (Brute-Force: BFMatcher)
1. SURF assigns every detected feature a location and a scale. The scale value defines the size of the window around the feature point: whatever the object's scale, the window contains the same visual information, and this information is what represents the feature point and makes it distinctive;
2. In OpenCV, describing and matching SURF feature points mainly involves the drawMatches() function and the BFMatcher class;
3. drawMatches(): draws the matched points between two images
matchesMask: a mask determining which matches are drawn; if the mask is empty, all matches are drawn;
4. Commonly used BFMatcher methods:
train() :
Trains the descriptor matcher (for example, builds a FLANN index). train() is run before every matching call. Some descriptor matchers (e.g., BFMatcher) have an empty implementation of this method; others (e.g., FlannBasedMatcher, which trains a flann::Index) really do train their internal structures.
match() : compares the query descriptors against the trained descriptors and returns the single "best match" for each query keypoint;
knnMatch() : k-nearest-neighbor matching;
radiusMatch() : returns all matches within a given distance of the query descriptor;
5. Feature point matching steps:
① compute feature points with the SURF operator;
② compute the feature points' descriptors with the SURF operator;
③ brute-force match the two images' descriptor vectors with BFMatcher's match();
SURF Brute-Force Matching: demo2
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;

int main(int argc, char** argv)
{
    Mat src1 = imread("1.jpg");
    Mat src2 = imread("2.jpg");
    if (src1.empty() || src2.empty())
    {
        cout << "could not load the images..." << endl;
        return -1;
    }

    Ptr<SURF> surfdect = SURF::create(1500);

    // detect SURF keypoints
    vector<KeyPoint> keypoint_src1;
    vector<KeyPoint> keypoint_src2;
    surfdect->detect(src1, keypoint_src1);
    surfdect->detect(src2, keypoint_src2);

    // compute descriptors (feature vectors)
    Mat descriptor_src1;
    Mat descriptor_src2;
    surfdect->compute(src1, keypoint_src1, descriptor_src1);
    surfdect->compute(src2, keypoint_src2, descriptor_src2);

    // match the two images' descriptors
    vector<DMatch> dMatch;
    Ptr<BFMatcher> bfMatch = BFMatcher::create(NORM_L2, false);
    bfMatch->match(descriptor_src1, descriptor_src2, dMatch);

    // draw the matches
    Mat imgMatches;
    drawMatches(src1, keypoint_src1, src2, keypoint_src2, dMatch, imgMatches);
    imshow("BF Match", imgMatches);
    waitKey(0);
    return 0;
}
Brute-force matching result: (many false matches)
FLANN: the Fast Library for Approximate Nearest Neighbors
1. FLANN: Fast Library for Approximate Nearest Neighbors;
2. FlannBasedMatcher inherits from DescriptorMatcher; matching uses the match() method;
3. match(): matches the query descriptor set against the trained descriptor set;
4. The DMatch class stores a descriptor match result through four main attributes: queryIdx, trainIdx, imgIdx, and distance
FLANN Matching: demo3
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;

int main(int argc, char** argv)
{
    Mat src1 = imread("1.jpg");
    Mat src2 = imread("2.jpg");
    if (src1.empty() || src2.empty())
    {
        cout << "could not load the images..." << endl;
        return -1;
    }

    // detect SURF keypoints
    Ptr<SURF> surfdect = SURF::create(1000);
    vector<KeyPoint> keypoint_src1;
    vector<KeyPoint> keypoint_src2;
    surfdect->detect(src1, keypoint_src1);
    surfdect->detect(src2, keypoint_src2);

    // compute descriptors (feature vectors)
    Mat descriptor_src1;
    Mat descriptor_src2;
    surfdect->compute(src1, keypoint_src1, descriptor_src1);
    surfdect->compute(src2, keypoint_src2, descriptor_src2);

    // match the descriptor vectors with FLANN
    vector<DMatch> dMatch;
    Ptr<FlannBasedMatcher> flannMatcher = FlannBasedMatcher::create();
    flannMatcher->match(descriptor_src1, descriptor_src2, dMatch);

    // find the minimum and maximum distance between matched keypoints
    double max_dist = 0, min_dist = 100;
    for (int i = 0; i < descriptor_src1.rows; i++)
    {
        double dist = dMatch[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    printf("Max dist : %f \n", max_dist);
    printf("Min dist : %f \n", min_dist);

    // keep only matches with dist < 2 * min_dist
    vector<DMatch> good_match;
    for (int i = 0; i < descriptor_src1.rows; i++)
    {
        if (dMatch[i].distance < 2 * min_dist)
        {
            good_match.push_back(dMatch[i]);
        }
    }

    // draw the surviving matches
    Mat imgMatch;
    drawMatches(src1, keypoint_src1, src2, keypoint_src2, good_match, imgMatch,
                Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    // print the surviving match pairs
    for (int i = 0; i < (int)good_match.size(); i++)
    {
        printf("> good match [%d] keypoint 1: %d --- keypoint 2: %d \n",
               i, good_match[i].queryIdx, good_match[i].trainIdx);
    }

    // show the result
    imshow("FLANN Match", imgMatch);
    waitKey(0);
    return 0;
}
FLANN + SURF: demo4
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;

int main(int argc, char** argv)
{
    Mat trainImg = imread("3.jpg");
    if (trainImg.empty())
    {
        cout << "could not load the image..." << endl;
        return -1;
    }
    Mat trainImg_gray;
    imshow("input", trainImg);
    cvtColor(trainImg, trainImg_gray, COLOR_BGR2GRAY);

    // detect SURF keypoints and extract the train image's descriptors
    vector<KeyPoint> train_keypoints;
    Mat trainDescriptor;
    Ptr<SURF> surfDetector = SURF::create(1000);
    surfDetector->detect(trainImg_gray, train_keypoints);              // find keypoints
    surfDetector->compute(trainImg_gray, train_keypoints, trainDescriptor); // compute descriptors

    // create and train a FLANN-based descriptor matcher
    FlannBasedMatcher flannMatch;
    vector<Mat> train_desc_collection(1, trainDescriptor);
    flannMatch.add(train_desc_collection);
    flannMatch.train();

    // open the camera
    VideoCapture capture(0);
    while (char(waitKey(1)) != 'q')
    {
        int64 time0 = getTickCount();
        Mat testImage, testImage_gray;
        capture.read(testImage);
        if (testImage.empty())
            continue;
        cvtColor(testImage, testImage_gray, COLOR_BGR2GRAY);

        // detect keypoints and compute descriptors on the test frame
        vector<KeyPoint> test_keypoint;
        Mat test_descriptor;
        surfDetector->detect(testImage_gray, test_keypoint);
        surfDetector->compute(testImage_gray, test_keypoint, test_descriptor);

        // match test descriptors against the trained ones (2 nearest neighbors)
        vector<vector<DMatch>> matches;
        flannMatch.knnMatch(test_descriptor, matches, 2);

        // keep good matches with Lowe's ratio test
        vector<DMatch> goodMatches;
        for (unsigned int i = 0; i < matches.size(); i++)
        {
            if (matches[i][0].distance < 0.6 * matches[i][1].distance)
            {
                goodMatches.push_back(matches[i][0]);
            }
        }

        // draw and show the matches
        Mat dstImage;
        drawMatches(testImage, test_keypoint, trainImg, train_keypoints, goodMatches, dstImage);
        imshow("match", dstImage);

        // print the frame rate
        cout << "FPS: " << getTickFrequency() / (getTickCount() - time0) << endl;
    }
    return 0;
}
The frame rate is low; the algorithm's efficiency leaves room for improvement:
BF + SIFT Feature Matching: demo5
1. In theory, SURF is about 3 times faster than SIFT;
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SIFT lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;

int main(int argc, char** argv)
{
    Mat trainImg = imread("3.jpg");
    if (trainImg.empty())
    {
        cout << "could not load the image..." << endl;
        return -1;
    }
    Mat trainImg_gray;
    imshow("input", trainImg);
    cvtColor(trainImg, trainImg_gray, COLOR_BGR2GRAY);

    // detect SIFT keypoints and extract the train image's descriptors
    vector<KeyPoint> train_keypoints;
    Mat trainDescriptor;
    Ptr<SIFT> siftDetector = SIFT::create(); // create the SIFT object through a smart pointer
    siftDetector->detect(trainImg_gray, train_keypoints);              // find keypoints
    siftDetector->compute(trainImg_gray, train_keypoints, trainDescriptor); // compute descriptors

    // create and train a brute-force descriptor matcher
    BFMatcher bfMatcher;
    vector<Mat> train_desc_collection(1, trainDescriptor);
    bfMatcher.add(train_desc_collection);
    bfMatcher.train();

    // open the camera
    VideoCapture capture(0);
    while (char(waitKey(1)) != 'q')
    {
        int64 time0 = getTickCount();
        Mat testImage, testImage_gray;
        capture >> testImage;
        if (testImage.empty())
            continue;
        cvtColor(testImage, testImage_gray, COLOR_BGR2GRAY);

        // detect keypoints and compute descriptors on the test frame
        vector<KeyPoint> test_keypoint;
        Mat test_descriptor;
        siftDetector->detect(testImage_gray, test_keypoint);
        siftDetector->compute(testImage_gray, test_keypoint, test_descriptor);

        // match test descriptors against the trained ones (2 nearest neighbors)
        vector<vector<DMatch>> matches;
        bfMatcher.knnMatch(test_descriptor, matches, 2);

        // keep good matches with Lowe's ratio test
        vector<DMatch> goodMatches;
        for (unsigned int i = 0; i < matches.size(); i++)
        {
            if (matches[i][0].distance < 0.6 * matches[i][1].distance)
            {
                goodMatches.push_back(matches[i][0]);
            }
        }

        // draw and show the matches
        Mat dstImage;
        drawMatches(testImage, test_keypoint, trainImg, train_keypoints, goodMatches, dstImage);
        imshow("match", dstImage);

        // print the frame rate
        cout << "FPS: " << getTickFrequency() / (getTickCount() - time0) << endl;
    }
    return 0;
}
On a laptop the computation is noticeably slower, and SIFT stutters visibly;
Finding a Known Object: demo6
1. On top of FLANN feature matching, a homography can further be used to locate a known object: concretely, use findHomography() to compute the homography matrix and perspectiveTransform() to map the object's corner points;
2. findHomography(): computes the perspective transform H between the source and destination images;
srcPoints : a CV_32FC2 matrix or vector<Point2f>
dstPoints : a CV_32FC2 matrix or vector<Point2f>
3. perspectiveTransform(): applies a perspective matrix transform to a vector of points;
m : a 3×3 or 4×4 floating-point matrix;
4. demo6
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // SURF lives in xfeatures2d
#include <iostream>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;

int main(int argc, char** argv)
{
    Mat src1 = imread("1.jpg");
    Mat src2 = imread("2.jpg");
    if (!src1.data || !src2.data)
    {
        cout << "load the image failed..." << endl;
        return -1;
    }

    // detect SURF keypoints
    Ptr<SURF> surfDetector = SURF::create(1000);
    vector<KeyPoint> keypoint_src1;
    vector<KeyPoint> keypoint_src2;
    surfDetector->detect(src1, keypoint_src1);
    surfDetector->detect(src2, keypoint_src2);

    // compute descriptors
    Mat descriptor_src1, descriptor_src2;
    surfDetector->compute(src1, keypoint_src1, descriptor_src1);
    surfDetector->compute(src2, keypoint_src2, descriptor_src2);

    // FLANN feature matching
    FlannBasedMatcher flannMatcher;
    vector<DMatch> dMatch;
    flannMatcher.match(descriptor_src1, descriptor_src2, dMatch);

    // minimum and maximum match distance
    double max_dist = 0, min_dist = 100;
    for (int i = 0; i < (int)dMatch.size(); i++)
    {
        double dist = dMatch[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    printf(">Max dist : %f\n", max_dist);
    printf(">Min dist : %f\n", min_dist);

    // keep match pairs with distance < 3 * min_dist
    vector<DMatch> good_Match;
    for (int i = 0; i < (int)dMatch.size(); i++)
    {
        if (dMatch[i].distance < 3 * min_dist)
        {
            good_Match.push_back(dMatch[i]);
        }
    }

    // draw the matched keypoints
    Mat img_matches;
    drawMatches(src1, keypoint_src1, src2, keypoint_src2, good_Match, img_matches,
                Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    // collect the coordinates of the matched keypoints
    vector<Point2f> obj;
    vector<Point2f> scene;
    for (int i = 0; i < (int)good_Match.size(); i++)
    {
        obj.push_back(keypoint_src1[good_Match[i].queryIdx].pt);
        scene.push_back(keypoint_src2[good_Match[i].trainIdx].pt);
    }

    // compute the perspective transform (homography)
    Mat H = findHomography(obj, scene, RANSAC);

    // corners of the object image
    vector<Point2f> obj_corners(4);
    obj_corners[0] = Point2f(0, 0);
    obj_corners[1] = Point2f((float)src1.cols, 0);
    obj_corners[2] = Point2f((float)src1.cols, (float)src1.rows);
    obj_corners[3] = Point2f(0, (float)src1.rows);
    vector<Point2f> scene_corners(4);

    // map the corners into the scene
    perspectiveTransform(obj_corners, scene_corners, H);

    // draw lines between the mapped corners (shifted right past src1 in the combined canvas)
    line(img_matches, scene_corners[0] + Point2f(static_cast<float>(src1.cols), 0),
         scene_corners[1] + Point2f(static_cast<float>(src1.cols), 0),
         Scalar(255, 0, 123), 4);
    line(img_matches, scene_corners[1] + Point2f(static_cast<float>(src1.cols), 0),
         scene_corners[2] + Point2f(static_cast<float>(src1.cols), 0),
         Scalar(255, 0, 123), 4);
    line(img_matches, scene_corners[2] + Point2f(static_cast<float>(src1.cols), 0),
         scene_corners[3] + Point2f(static_cast<float>(src1.cols), 0),
         Scalar(255, 0, 123), 4);
    line(img_matches, scene_corners[3] + Point2f(static_cast<float>(src1.cols), 0),
         scene_corners[0] + Point2f(static_cast<float>(src1.cols), 0),
         Scalar(255, 0, 123), 4);

    // show the final result
    imshow("Good Matches & Object detection", img_matches);
    waitKey(0);
    return 0;
}
A Brief Introduction to ORB Feature Extraction
1. ORB (Oriented FAST and Rotated BRIEF) is an improved version of the BRIEF algorithm, proposed in 2011 in the paper "ORB: an efficient alternative to SIFT or SURF". By many accounts, ORB's overall performance in benchmarks compares favorably with other feature extraction algorithms.
2. BRIEF (Binary Robust Independent Elementary Features): the main idea is to pick several random point pairs near a feature point, combine the comparisons of each pair's gray values into a binary string, and use that binary string as the feature point's descriptor.
3. BRIEF's advantage is speed; its drawbacks are no rotation invariance, sensitivity to noise, and no scale invariance;
4. Reported measurements put ORB at roughly 100 times the speed of SIFT and 10 times that of SURF
5. ORB inherits from Feature2D:
typedef ORB OrbFeatureDetector;
typedef ORB OrbDescriptorExtractor;

class CV_EXPORTS_W ORB : public Feature2D
{
};
ORB + FLANN-LSH (Locality-Sensitive Hashing) Feature Point Matching: demo7
#include <opencv2/opencv.hpp> // ORB ships with the core features2d module
#include <iostream>
using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    Mat src = imread("3.jpg");
    if (!src.data)
    {
        cout << "could not load the image..." << endl;
        return -1;
    }
    Mat src_gray;
    cvtColor(src, src_gray, COLOR_BGR2GRAY);

    // detect ORB keypoints and compute descriptors of the train image
    Ptr<ORB> orbDetector = ORB::create();
    vector<KeyPoint> keyPoint_src;
    Mat descriptor_src;
    orbDetector->detect(src_gray, keyPoint_src);                  // find keypoints
    orbDetector->compute(src_gray, keyPoint_src, descriptor_src); // compute descriptors

    // FLANN index with locality-sensitive hashing and Hamming distance
    flann::Index flannIndex(descriptor_src, flann::LshIndexParams(12, 20, 2), cvflann::FLANN_DIST_HAMMING);

    // open the camera and set the capture size
    VideoCapture cap(0);
    cap.set(CAP_PROP_FRAME_WIDTH, 360);
    cap.set(CAP_PROP_FRAME_HEIGHT, 900);
    while (1)
    {
        double time0 = static_cast<double>(getTickCount()); // start time
        Mat captureImage, captureImage_gray;
        cap >> captureImage;
        if (captureImage.empty())
            continue;
        cvtColor(captureImage, captureImage_gray, COLOR_BGR2GRAY);

        // find ORB keypoints and compute descriptors on the frame
        vector<KeyPoint> keypoint_cap;
        Mat descriptor_cap;
        orbDetector->detect(captureImage_gray, keypoint_cap);
        orbDetector->compute(captureImage_gray, keypoint_cap, descriptor_cap);

        // query the 2 nearest neighbors for every frame descriptor
        Mat matchIndex(descriptor_cap.rows, 2, CV_32SC1);
        Mat matchDistance(descriptor_cap.rows, 2, CV_32FC1);
        flannIndex.knnSearch(descriptor_cap, matchIndex, matchDistance, 2, flann::SearchParams());

        // keep the best matches with Lowe's ratio test
        vector<DMatch> goodMatches;
        for (int i = 0; i < matchDistance.rows; i++)
        {
            if (matchDistance.at<float>(i, 0) < 0.6 * matchDistance.at<float>(i, 1))
            {
                DMatch dmatches(i, matchIndex.at<int>(i, 0), matchDistance.at<float>(i, 0));
                goodMatches.push_back(dmatches);
            }
        }

        // draw and show the matches (query image first, since queryIdx indexes the frame)
        Mat resultImage;
        drawMatches(captureImage, keypoint_cap, src, keyPoint_src, goodMatches, resultImage);
        imshow("match", resultImage);

        // print the frame rate
        cout << "> FPS: " << getTickFrequency() / (getTickCount() - time0) << endl;
        if (char(waitKey(1)) == 27) break;
    }
    return 0;
}