[OpenCV Docs] Feature Matching

Source: OpenCV official documentation

Feature Matching
OpenCV-Python Tutorials > Feature Detection and Description
https://docs.opencv.org/trunk/dc/dc3/tutorial_py_matcher.html


Goal
In this chapter
* We will see how to match features in one image with others.
* We will use the Brute-Force matcher and FLANN matcher in OpenCV.

Basics of Brute-Force Matcher
Brute-Force matcher is simple. It takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance calculation. The closest one is returned.
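
Conceptually this is just a nearest-neighbour search over descriptor vectors. Below is a minimal numpy sketch of that idea (an illustration only, not OpenCV's actual implementation), assuming two small sets of float descriptors:

import numpy as np

# Toy descriptor sets: 4 "query" and 6 "train" descriptors of dimension 8.
rng = np.random.default_rng(0)
des1 = rng.random((4, 8), dtype=np.float32)
des2 = rng.random((6, 8), dtype=np.float32)

# Pairwise L2 distances: dists[i, j] = ||des1[i] - des2[j]||.
dists = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)

# For each query descriptor, the closest train descriptor is returned.
best = dists.argmin(axis=1)
for i, j in enumerate(best):
    print("query %d -> train %d (distance %.3f)" % (i, j, dists[i, j]))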

For BF matcher, first we have to create the BFMatcher object using cv.BFMatcher(). It takes two optional params. The first one is normType, which specifies the distance measurement to be used. By default it is cv.NORM_L2, which is good for SIFT, SURF, etc. (cv.NORM_L1 is also available). For binary-string-based descriptors like ORB, BRIEF, and BRISK, cv.NORM_HAMMING should be used, which uses Hamming distance as the measurement. If ORB is used with WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.
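
For example, a small sketch of matcher construction for each descriptor family:

import cv2 as cv

bf_float  = cv.BFMatcher(cv.NORM_L2)       # float descriptors: SIFT, SURF
bf_binary = cv.BFMatcher(cv.NORM_HAMMING)  # binary descriptors: ORB, BRIEF, BRISK
bf_wta    = cv.BFMatcher(cv.NORM_HAMMING2) # ORB created with WTA_K = 3 or 4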

The second param is a boolean variable, crossCheck, which is false by default. If it is true, the matcher returns only those matches (i,j) such that the i-th descriptor in set A has the j-th descriptor in set B as its best match and vice versa. That is, the two features in both sets should match each other. This provides consistent results and is a good alternative to the ratio test proposed by D. Lowe in the SIFT paper.
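
In other words, cross-checking keeps a pair only when it is a mutual best match. A rough numpy sketch of that logic, reusing the dists matrix from the sketch above (again just an illustration of the idea):

# best12[i]: index of the best train match for query i.
# best21[j]: index of the best query match for train j.
best12 = dists.argmin(axis=1)
best21 = dists.argmin(axis=0)

# Keep (i, j) only when the two directions agree.
mutual = [(i, j) for i, j in enumerate(best12) if best21[j] == i]
print(mutual)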

Once the matcher is created, two important methods are BFMatcher.match() and BFMatcher.knnMatch(). The first one returns the best match. The second returns the k best matches, where k is specified by the user. It is useful when we need to do additional work on the matches.
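
A quick sketch of the difference between the two return shapes (assuming des1 and des2 are descriptor arrays such as those computed in the examples below):

bf = cv.BFMatcher(cv.NORM_HAMMING)

# match(): one DMatch per query descriptor -- the single best match.
matches = bf.match(des1, des2)

# knnMatch(): a list of (up to) k DMatch objects per query descriptor.
knn_matches = bf.knnMatch(des1, des2, k=2)
print(len(matches), len(knn_matches), len(knn_matches[0]))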

Just as we used cv.drawKeypoints() to draw keypoints, cv.drawMatches() helps us draw the matches. It stacks the two images horizontally and draws lines from the first image to the second showing the best matches. There is also cv.drawMatchesKnn, which draws all the k best matches. If k=2, it will draw two match lines for each keypoint, so we have to pass a mask if we want to draw matches selectively.

Let’s see one example each for SIFT and ORB (the two use different distance measurements).

Brute-Force Matching with ORB Descriptors

Here we will see a simple example of how to match features between two images. In this case, I have a queryImage and a trainImage. We will try to find the queryImage in the trainImage using feature matching. (The images are /samples/data/box.png and /samples/data/box_in_scene.png.)

We are using ORB descriptors to match features. So let’s start with loading images, finding descriptors, etc.

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
img1 = cv.imread('box.png',cv.IMREAD_GRAYSCALE)          # queryImage
img2 = cv.imread('box_in_scene.png',cv.IMREAD_GRAYSCALE) # trainImage

#--- Initiate ORB detector --------------------------------------------------
orb = cv.ORB_create()

#--- find the keypoints and descriptors with ORB-----------------------------

# Alternatively, detect keypoints and compute descriptors in two steps:
#   kp = orb.detect(img1, None)
#   kp, des = orb.compute(img1, kp)

kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

Next we create a BFMatcher object with distance measurement cv.NORM_HAMMING (since we are using ORB), with crossCheck switched on for better results. Then we use the Matcher.match() method to get the best matches between the two images. We sort them in ascending order of their distances so that the best matches (with low distance) come to the front. Then we draw only the first 10 matches (just for the sake of visibility; you can increase that as you like).

#--- create BFMatcher object ------------------------------------------------
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)

#--- Match descriptors (des1 and des2 were computed above) -------------------
matches = bf.match(des1,des2)

#--- Sort them in the order of their distance. --------------------------------
matches = sorted(matches, key = lambda x:x.distance)

#--- Draw first 10 matches. ---------------------------------------------------
img3 = cv.drawMatches(img1,kp1,img2,kp2,matches[:10],None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

plt.imshow(img3)
plt.savefig("test.png")
plt.show()

Below is the result I got:

[Image: the first 10 ORB brute-force matches between box.png and box_in_scene.png]
What is this Matcher Object?

The result of the matches = bf.match(des1,des2) line is a list of DMatch objects. A DMatch object has the following attributes:

* DMatch.distance - Distance between descriptors. The lower, the better it is.
* DMatch.trainIdx - Index of the descriptor in train descriptors.
* DMatch.queryIdx - Index of the descriptor in query descriptors.
* DMatch.imgIdx - Index of the train image.
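
As a small sketch, these attributes can be inspected directly, e.g. on the best (lowest-distance) match from the ORB example above:

m = matches[0]                                # matches were sorted by distance
print("distance:", m.distance)
print("query keypoint:", kp1[m.queryIdx].pt)  # location in img1 (queryImage)
print("train keypoint:", kp2[m.trainIdx].pt)  # location in img2 (trainImage)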

Brute-Force Matching with SIFT Descriptors and Ratio Test

This time, we will use BFMatcher.knnMatch() to get the k best matches. In this example, we will take k=2 so that we can apply the ratio test explained by D. Lowe in his paper.

Installing the dependency:
Uninstall the newer opencv-contrib-python 4.2.0.34: pip uninstall opencv-contrib-python
Then install: pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16

(opencv_contrib) C:\Users\Administrator>pip uninstall opencv-contrib-python
Found existing installation: opencv-contrib-python 4.2.0.34
Uninstalling opencv-contrib-python-4.2.0.34:
  Would remove:
    e:\e01_pythondevelop\e01_07_anaconda3_2020.02\install_20200622\envs\opencv_contrib\lib\site-packages\cv2\*
    e:\e01_pythondevelop\e01_07_anaconda3_2020.02\install_20200622\envs\opencv_contrib\lib\site-packages\opencv_contrib_python-4.2.0.34.dist-info\*
Proceed (y/n)? y
  Successfully uninstalled opencv-contrib-python-4.2.0.34

(opencv_contrib) C:\Users\Administrator>pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting opencv-contrib-python==3.4.2.16
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/a2/1c/778cb8a5f4026d49e299d34a98791599f7485553c29889385c43158b6f43/opencv_contrib_python-3.4.2.16-cp37-cp37m-win_amd64.whl (39.6 MB)
     |████████████████████████████████| 39.6 MB 204 kB/s
Requirement already satisfied: numpy>=1.14.5 in e:\e01_pythondevelop\e01_07_anaconda3_2020.02\install_20200622\envs\opencv_contrib\lib\site-packages (from opencv-contrib-python==3.4.2.16) (1.19.0)
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-3.4.2.16

Note: replace cv.SIFT_create() with cv.xfeatures2d.SIFT_create().

#Brute-Force Matching with SIFT Descriptors and Ratio Test
#pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16
#CPU: Intel Core i5-7500 @ 3.4GHz, quad-core
#RAM: 24GB
import numpy as np
import cv2 as cv
print(cv.__version__)#3.4.2
import matplotlib.pyplot as plt
import time
T1=time.perf_counter()
img1 = cv.imread('box.png',cv.IMREAD_GRAYSCALE)          # queryImage
img2 = cv.imread('box_in_scene.png',cv.IMREAD_GRAYSCALE) # trainImage
#--- Initiate SIFT detector -------------------------------------------------------------
sift = cv.xfeatures2d.SIFT_create()  # replaces the tutorial's sift = cv.SIFT_create()

#--- find the keypoints and descriptors with SIFT ---------------------------------------
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

#--- BFMatcher with default params -------------------------------------------------------
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2,k=2)

#--- Apply ratio test --------------------------------------------------------------------
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])

# cv.drawMatchesKnn expects list of lists as matches.
img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

T2=time.perf_counter()
print("消耗的時間是:%.2f ms"%((T2-T1)*1000))#91.07 ms


plt.imshow(img3)
plt.savefig("test2.png")
plt.show()

See the result below:
[Image: SIFT matches that pass the ratio test]
FLANN based Matcher

FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest-neighbor search in large datasets and for high-dimensional features. It works faster than BFMatcher for large datasets. We will see the second example with the FLANN based matcher.

For the FLANN based matcher, we need to pass two dictionaries which specify the algorithm to be used, its related parameters, etc. The first one is IndexParams. For various algorithms, the information to be passed is explained in the FLANN docs. As a summary, for algorithms like SIFT, SURF, etc. you can pass the following:

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)

While using ORB, you can pass the following. The commented values are recommended per the docs, but in some cases they didn’t provide the required results; other values worked fine:

FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,       # 12
                    key_size = 12,          # 20
                    multi_probe_level = 1)  # 2

The second dictionary is the SearchParams. It specifies the number of times the trees in the index should be recursively traversed. Higher values give better precision, but also take more time. If you want to change the value, pass search_params = dict(checks=100).

With this information, we are good to go.

#FLANN based Matcher
#pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16
import numpy as np
import cv2 as cv
print(cv.__version__)#3.4.2
import matplotlib.pyplot as plt
import time
T1=time.perf_counter()

img1 = cv.imread('box.png',cv.IMREAD_GRAYSCALE)          # queryImage
img2 = cv.imread('box_in_scene.png',cv.IMREAD_GRAYSCALE) # trainImage

#--- Initiate SIFT detector -------------------------------------------------------------
sift = cv.xfeatures2d.SIFT_create()  # replaces the tutorial's sift = cv.SIFT_create()

#--- find the keypoints and descriptors with SIFT ---------------------------------------
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

#--- FLANN parameters -------------------------------------------------------------------
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)   # or pass empty dictionary
flann = cv.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)

#--- Need to draw only good matches, so create a mask -----------------------------------
matchesMask = [[0,0] for i in range(len(matches))]

#--- ratio test as per Lowe's paper -----------------------------------------------------
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]

draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = cv.DrawMatchesFlags_DEFAULT)

img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
T2=time.perf_counter()
print("消耗的時間是:%.2f ms"%((T2-T1)*1000))#119.03 ms


plt.imshow(img3)
plt.savefig("test3.png")
plt.show()

See the result below:
[Image: FLANN-based matches; good matches drawn in green, keypoints in red]
