InputArray is a proxy (interface) class: it can accept many container types, such as Mat, Mat_<T>, Matx<T, m, n>, vector<T>, vector<vector<T>>, vector<Mat>, and so on.
Because it is an internal implementation class in OpenCV, its interface may change between versions, so keep a few points in mind:
1. When an OpenCV function declares a parameter of type InputArray, you can pass a Mat, Matx, vector, etc. directly.
2. To pass "nothing" for an optional argument, use cv::noArray() (or a default-constructed cv::Mat()).
3. The class is designed purely for parameter passing; do not declare your own class members or local/global variables of this type.
4. When designing your own functions, you can also declare parameters as InputArray or OutputArray to get the same flexibility. Inside the function, call _InputArray::getMat() to build a Mat header for the argument; this operates directly on the caller's memory region without copying the data.
An example from the OpenCV documentation:
void myAffineTransform(InputArray _src, OutputArray _dst, InputArray _m)
{
    // get Mat headers for input arrays. This is O(1) operation,
    // unless _src and/or _m are matrix expressions.
    Mat src = _src.getMat(), m = _m.getMat();
    CV_Assert( src.type() == CV_32FC2 && m.type() == CV_32F && m.size() == Size(3, 2) );
    // [re]create the output array so that it has the proper size and type.
    // In case of Mat it calls Mat::create, in case of STL vector it calls vector::resize.
    _dst.create(src.size(), src.type());
    Mat dst = _dst.getMat();
    for( int i = 0; i < src.rows; i++ )
        for( int j = 0; j < src.cols; j++ )
        {
            Point2f pt = src.at<Point2f>(i, j);
            dst.at<Point2f>(i, j) = Point2f(m.at<float>(0, 0)*pt.x +
                                            m.at<float>(0, 1)*pt.y +
                                            m.at<float>(0, 2),
                                            m.at<float>(1, 0)*pt.x +
                                            m.at<float>(1, 1)*pt.y +
                                            m.at<float>(1, 2));
        }
}
In the example above, src = _src.getMat() builds a Mat header (src) over the caller's data, so the input can be read and inspected directly through src.
Likewise, once the output's memory has been allocated, dst = _dst.getMat() copies only the matrix header, and the rest of the work is done on dst.
When the function finishes, no copy-back step is needed: writes to dst go straight into _dst's underlying Mat memory.
OutputArray is a class derived from InputArray, and the same caveats apply when using it. The one extra rule is that _OutputArray::create() must be called to allocate the matrix before _OutputArray::getMat() is used. You can also call _OutputArray::needed() to check whether the output matrix actually has to be computed: when the caller passed cv::noArray() for that parameter, it does not.
The ORB feature extractor in the ORB SLAM project uses the same pattern:
void ORBextractor::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                               OutputArray _descriptors)
{
    if(_image.empty())
        return;

    // Build a Mat header (image) for the input argument _image; no data copy
    Mat image = _image.getMat();
    assert(image.type() == CV_8UC1 );

    // Pre-compute the scale pyramid
    ComputePyramid(image);

    vector < vector<KeyPoint> > allKeypoints;
    ComputeKeyPointsOctTree(allKeypoints);
    //ComputeKeyPointsOld(allKeypoints);

    Mat descriptors;

    // Count the total number of keypoints across all pyramid levels
    int nkeypoints = 0;
    for (int level = 0; level < nlevels; ++level)
        nkeypoints += (int)allKeypoints[level].size();
    if( nkeypoints == 0 )
        _descriptors.release();
    else
    {
        // Allocate memory for _descriptors, then give the Mat 'descriptors'
        // a header onto that same memory region
        _descriptors.create(nkeypoints, 32, CV_8U);
        descriptors = _descriptors.getMat();
    }

    _keypoints.clear();
    _keypoints.reserve(nkeypoints);

    // Note: KEYPOINT 1 and KEYPOINT 2 print the same value; KEYPOINT 3 is
    // KEYPOINT 1 multiplied by this level's scale factor
    int offset = 0;
    for (int level = 0; level < nlevels; ++level)
    {
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypointsLevel = (int)keypoints.size();
        if(nkeypointsLevel==0)
            continue;
        //std::cout<<"KEYPOINT 1: " << keypoints[0].pt<<std::endl;

        // preprocess the resized image
        Mat workingMat = mvImagePyramid[level].clone();
        GaussianBlur(workingMat, workingMat, Size(7, 7), 2, 2, BORDER_REFLECT_101);

        // Compute the descriptors
        Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
        computeDescriptors(workingMat, keypoints, desc, pattern);
        //std::cout<<"KEYPOINT 2: " << keypoints[0].pt<<std::endl;

        offset += nkeypointsLevel;

        // Scale keypoint coordinates
        if (level != 0)
        {
            float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale; // scale the keypoint coordinates up to level 0
        }
        //std::cout<<"KEYPOINT 3: " << keypoints[0].pt<<std::endl;

        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    }
}
Reference: https://blog.csdn.net/yang_xian521/article/details/7755101