A Detailed Look at the solvePnP Function in OpenCV 2.4.9 and OpenCV 3.0+ (with Code and a Worked Example)

Recently I needed pose estimation for an algorithm I am working on. Pose estimation is used very widely. The core problem it solves is: given a set of 2D-3D point correspondences and the camera intrinsics, recover the camera center in the world coordinate system and the camera orientation (rotation matrix). I did a fair amount of research for this, read many of the mainstream papers, and tried several related libraries, mainly OpenMVG, OpenGV, and OpenCV. Although all three integrate EPnP, UPnP, P3P, and other algorithms, in practice they differ quite a bit. This post summarizes OpenCV's solvePnP and walks through a series of experiments with it.
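For reference, the model being inverted is the standard pinhole projection: a world point (X, Y, Z) and its pixel observation (u, v) are related by

s * [u, v, 1]^T = K * (R * [X, Y, Z]^T + t)

where K is the intrinsic matrix, R and t are the rotation and translation that solvePnP estimates, and s is the projective depth. Once R and t are known, the camera center in world coordinates is C = -R^(-1) * t (equivalently -R^T * t, since R is orthonormal); this is the quantity the experiments below check.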

I will not go over the theory behind PnP in detail here; instead, here are a few links for further reading:

First, the official OpenCV documentation: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp

Next, a very helpful article explaining PnP: http://blog.csdn.net/cocoaqin/article/details/77841261

And the original EPnP paper in English: https://pdfs.semanticscholar.org/6ed0/083ff42ac966a6f37710e0b5555b98fd7565.pdf

Note that you may need to go through Google to open the paper link.

Let's go straight to the experiment. First, the raw data:

333.965515 110.721138 45.893448 164.000000  1909.000000
327.870117 110.772079 45.835598 2578.000000 1970.000000
327.908630 115.816376 43.036915 2858.000000 3087.000000
333.731628 115.862755 43.031918 95.000000 3051.000000
330.019196 110.630211 45.871151 1727.000000 1942.000000
330.043457 115.823494 43.027847 1855.000000 3077.000000
331.949371 115.881104 43.020267 943.000000 3072.000000
331.943970 110.598534 45.909332 962.000000 1926.000000

Yes, the fields are space-separated. The first three columns are the x, y, z coordinates, and the last two are the image pixel coordinates.
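If you want to load such a file on its own, a minimal sketch could look like the following (ReadCorrespondences is just an illustrative helper, not part of the test program shown later, which reads the file inline):

//Minimal sketch: read space-separated "x y z u v" rows into the point vectors cv::solvePnP expects.
#include <fstream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>

void ReadCorrespondences(const std::string& path,
	std::vector<cv::Point3f>& objectPts,
	std::vector<cv::Point2f>& imagePts)
{
	std::ifstream reader(path);
	cv::Point3f P;
	cv::Point2f p;
	while (reader >> P.x >> P.y >> P.z >> p.x >> p.y)//one correspondence per line
	{
		objectPts.push_back(P);
		imagePts.push_back(p);
	}
}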


Next, let's take a look at what the solvePnP source code requires.

Besides the function definition itself, there is a generous block of parameter documentation above it. OpenCV is professional, as expected.

Now let's look at the OpenCV implementation that gets called:

bool solvePnP( InputArray _opoints, InputArray _ipoints,
               InputArray _cameraMatrix, InputArray _distCoeffs,
               OutputArray _rvec, OutputArray _tvec, bool useExtrinsicGuess, int flags )
{
    CV_INSTRUMENT_REGION()

    Mat opoints = _opoints.getMat(), ipoints = _ipoints.getMat();
    int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
    CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );

    Mat rvec, tvec;
    if( flags != SOLVEPNP_ITERATIVE )
        useExtrinsicGuess = false;

    if( useExtrinsicGuess )
    {
        int rtype = _rvec.type(), ttype = _tvec.type();
        Size rsize = _rvec.size(), tsize = _tvec.size();
        CV_Assert( (rtype == CV_32F || rtype == CV_64F) &&
                   (ttype == CV_32F || ttype == CV_64F) );
        CV_Assert( (rsize == Size(1, 3) || rsize == Size(3, 1)) &&
                   (tsize == Size(1, 3) || tsize == Size(3, 1)) );
    }
    else
    {
        _rvec.create(3, 1, CV_64F);
        _tvec.create(3, 1, CV_64F);
    }
    rvec = _rvec.getMat();
    tvec = _tvec.getMat();

    Mat cameraMatrix0 = _cameraMatrix.getMat();
    Mat distCoeffs0 = _distCoeffs.getMat();
    Mat cameraMatrix = Mat_<double>(cameraMatrix0);
    Mat distCoeffs = Mat_<double>(distCoeffs0);
    bool result = false;

    if (flags == SOLVEPNP_EPNP || flags == SOLVEPNP_DLS || flags == SOLVEPNP_UPNP)
    {
        Mat undistortedPoints;
        undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
        epnp PnP(cameraMatrix, opoints, undistortedPoints);

        Mat R;
        PnP.compute_pose(R, tvec);
        Rodrigues(R, rvec);
        result = true;
    }
    else if (flags == SOLVEPNP_P3P)
    {
        CV_Assert( npoints == 4);
        Mat undistortedPoints;
        undistortPoints(ipoints, undistortedPoints, cameraMatrix, distCoeffs);
        p3p P3Psolver(cameraMatrix);

        Mat R;
        result = P3Psolver.solve(R, tvec, opoints, undistortedPoints);
        if (result)
            Rodrigues(R, rvec);
    }
    else if (flags == SOLVEPNP_ITERATIVE)
    {
        CvMat c_objectPoints = opoints, c_imagePoints = ipoints;
        CvMat c_cameraMatrix = cameraMatrix, c_distCoeffs = distCoeffs;
        CvMat c_rvec = rvec, c_tvec = tvec;
        cvFindExtrinsicCameraParams2(&c_objectPoints, &c_imagePoints, &c_cameraMatrix,
                                     c_distCoeffs.rows*c_distCoeffs.cols ? &c_distCoeffs : 0,
                                     &c_rvec, &c_tvec, useExtrinsicGuess );
        result = true;
    }
    else
        CV_Error(CV_StsBadArg, "The flags argument must be one of SOLVEPNP_ITERATIVE, SOLVEPNP_P3P, SOLVEPNP_EPNP or SOLVEPNP_DLS");
    return result;
}
Notice that EPnP, UPnP, and DLS all fall into the same branch and call the same EPnP solver. That feels rather half-hearted; by rights they should be distinct implementations. Perhaps this will change in a later release.

Also, the P3P solver only accepts exactly 4 control point pairs as input.
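So a call using the P3P flag has to look roughly like the sketch below (SolveWithP3P is an illustrative wrapper, not an OpenCV function; the caller must pass exactly 4 correspondences):

//Minimal sketch: solvePnP with SOLVEPNP_P3P and exactly 4 correspondences.
#include <vector>
#include <opencv2/calib3d.hpp>

bool SolveWithP3P(const std::vector<cv::Point3f>& objectPts,//exactly 4 known 3D control points
	const std::vector<cv::Point2f>& imagePts,//their 4 pixel observations
	const cv::Mat& K,//3x3 intrinsic matrix
	cv::Mat& rvec, cv::Mat& tvec)
{
	//any other point count trips the CV_Assert(npoints == 4) seen in the source above
	return cv::solvePnP(objectPts, imagePts, K, cv::noArray(), rvec, tvec, false, cv::SOLVEPNP_P3P);
}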

OK, with that settled, let's prepare the image and start calling the function.

First we construct the K matrix; fx and fy are the focal length expressed in pixels, so the focal length must be known in advance or be recoverable from the image's EXIF data. The short sketch below shows how a focal length in millimetres is converted to pixels; after that comes the core code, starting with the texture-mapping function.
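A minimal sketch of that conversion, assuming square pixels and that the quoted sensor width refers to the sensor's longer side (FocalInPixels is an illustrative helper; the test program below does the same computation inline with a 35.9 mm sensor and a 100 mm focal length):

//Sketch: focal length in mm -> focal length in pixels.
#include <algorithm>

double FocalInPixels(double focalMM, double sensorWidthMM, double imgW, double imgH)
{
	double mmPerPixel = sensorWidthMM / std::max(imgW, imgH);//physical size of one pixel, using the longer side
	return focalMM / mmPerPixel;//this value goes into K(0,0) and K(1,1)
}

The intrinsic matrix K then carries this value in both focal slots, with the principal point assumed to sit at the image center.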

void MappingCloud(VertexPositionColor*&CloudPtr, int PointNum,cv::Mat IMG, cv::Mat K, cv::Mat nin_R, cv::Mat T, bool ifout , std::string OutColorCloudFileName)
{
	cout << "Coloring Cloud..." << endl;
	std::ofstream CloudOuter;
	if (ifout == true)
		CloudOuter.open(OutColorCloudFileName);
	double reProjection = 0;
	for (int i = 0; i < PointNum; i++)
	{
		//project the 3D cloud point onto the image (pixel position)
		cv::Mat SingleP(cv::Matx31d(//this must be Matx31d; the float version (Matx31f) triggers a type-mismatch error against the double K, R and T
			CloudPtr[i].x,
			CloudPtr[i].y,
			CloudPtr[i].z));
		cv::Mat change = K*(nin_R*SingleP + T);
		change /= change.at<double>(2, 0);
		//cout << change.t() << endl;
		int xPixel = cvRound(change.at<double>(0, 0));
		int yPixel = cvRound(change.at<double>(1, 0));
		//fetch the color at the projected pixel
		uchar* data = IMG.ptr<uchar>(yPixel);
		int Blue = data[xPixel * 3 + 0]; //first channel of the pixel at row yPixel, column xPixel: Blue
		int Green = data[xPixel * 3 + 1]; // Green
		int Red = data[xPixel * 3 + 2]; // Red
		//assign the color to the cloud point
		CloudPtr[i].R = Red;
		CloudPtr[i].G = Green;
		CloudPtr[i].B = Blue;
		if (ifout == true)
			CloudOuter << CloudPtr[i].x << " " << CloudPtr[i].y << " " << CloudPtr[i].z << " " << CloudPtr[i].R << " " << CloudPtr[i].G << " " << CloudPtr[i].B << endl;
	}
	if (ifout == true)
		CloudOuter.close();
	cout << "done!" << endl;
}
std::string intToStdtring(int Num)
{
	std::stringstream strStream;
	strStream << Num;
	std::string s = std::string(strStream.str());
	return s;
}
void OpencvPnPTest()//test OpenCV's PnP solvers
{
	std::ifstream reader(".\\TextureMappingData\\楊芳8點座標.txt");
	if (!reader)
	{
		std::cout << "打開錯誤!";
		system("pause");
		exit(0);
	}
	string imgName = std::string(".\\TextureMappingData\\IMG_0828.JPG");
	cv::Mat IMG = cv::imread(".\\TextureMappingData\\IMG_0828.JPG");
	double IMGW = IMG.cols;
	double IMGH = IMG.rows;
	double RealWidth = 35.9;
	double CCDWidth = RealWidth / (IMGW >= IMGH ? IMGW : IMGH);//be sure to use the longer side
	double f = 100.0;
	double fpixel = f / CCDWidth;
	cv::Mat K_intrinsic(cv::Matx33d(
		fpixel, 0, IMGH / 2.0,
		0, fpixel, IMGW / 2.0,
		0, 0, 1));
	int UsePointCont = 5;
	vector<cv::Point3f> ConP3DVec;
	vector<cv::Point2f> ConP2DVec;
	cv::Point3f ConP3D;
	cv::Point2f ConP2D;
	cout << "AllPoints: " << endl;
	while (reader >> ConP3D.x)
	{
		reader >> ConP3D.y;
		reader >> ConP3D.z;
		reader >> ConP2D.x;
		reader >> ConP2D.y;
		ConP3DVec.push_back(ConP3D);
		ConP2DVec.push_back(ConP2D);
		cout << setprecision(10)
			<< ConP3D.x << " " << ConP3D.y << " " << ConP3D.z << " "
			<< ConP2D.x << " " << ConP2D.y << endl;
		if (ConP3DVec.size() == UsePointCont)
			break;
	}
	reader.close();
	cout << "BaseInformation: " << endl;
	cout << "imgName: " << imgName<< endl;
	cout << "width: " << IMGW << endl;
	cout << "height: " << IMGH << endl;
	cout << "UsePointCont: " << UsePointCont << endl;
	cout << "RealWidth: " << RealWidth << endl;
	cout << "camerapixel: " << CCDWidth << endl;
	cout << "f: " << f << endl;
	cout << "fpixel: " << fpixel << endl;
	cout << "KMatrix: " << K_intrinsic << endl;

	//Report on OpenCV's built-in solvers for the Houmuwu Ding dataset:
	//With 8 points: EPnP performs well, DLS performs well; EPnP and UPnP call the same function interface.
	//With 4 points: P3P performs well, and EPnP can perform well, but note that P3P really takes only 4 input points, the last of which serves as a check.
	//EPnP with 4 points is hit-or-miss, depending on which control points are chosen; going up to 5 points improves the accuracy markedly.
	//EPnP with 4 points produced a completely wrong texture mapping, even more distorted than a space-resection result.
	//DLS with 4 points produced the same mapping as EPnP, and the result is badly distorted.
	//ITERATIVE with 4 points produced a very good mapping, even better than P3P, because P3P effectively uses only three of the points in the computation.
	//In other words, with a small number of control point pairs (4), only P3P and the iterative solver give correct results.
	//After adding one more control point pair (5 in total), EPnP's robustness improves sharply and it gives a correct result; DLS likewise becomes correct.
	//With 5 control point pairs P3P can no longer be used, because it accepts exactly 4 points, no more and no fewer.
	//Likewise, the 5-point ITERATIVE result is correct.
	//----------Conclusion: EPnP with more than 4 points is the safest choice. The matrices it returns follow the computer-vision convention.
	//int flag = cv::SOLVEPNP_EPNP;std::string OutCloudPath = std::string(".\\Output\\EPNP_") + intToStdtring(UsePointCont) + std::string("Point_HMwuding.txt");
	//int flag = cv::SOLVEPNP_DLS;std::string OutCloudPath = std::string(".\\Output\\DLS_") + intToStdtring(UsePointCont) + std::string("Point_HMwuding.txt");  
	//int flag = cv::SOLVEPNP_P3P;std::string OutCloudPath = std::string(".\\Output\\P3P_") + intToStdtring(UsePointCont) + std::string("Point_HMwuding.txt");
	int flag = cv::SOLVEPNP_ITERATIVE; std::string OutCloudPath = std::string(".\\Output\\ITERATIVE_") + intToStdtring(UsePointCont) + std::string("Point_HMwuding.txt");
	cout << endl << "Soving Pnp Using Method" << flag<<"..."<< endl;
	cv::Mat Rod_r ,TransMatrix ,RotationR;
	bool success = cv::solvePnP(ConP3DVec, ConP2DVec, K_intrinsic, cv::noArray(), Rod_r, TransMatrix,false, flag);
	Rodrigues(Rod_r, RotationR);//convert the rotation vector into a rotation matrix (Rodrigues)
	cout << "r:" << endl << Rod_r << endl;
	cout << "R:" << endl << RotationR << endl;
	cout << "T:" << endl << TransMatrix << endl;
	cout << "C(Camera center:):" << endl << -RotationR.inv()*TransMatrix << endl;//這個C果然是相機中心,十分準確
	Comput_Reprojection(ConP3DVec, ConP2DVec, K_intrinsic, RotationR, TransMatrix);

	//=====Next, read the point cloud and do the texture mapping
	cout << "Reading Cloud..." << endl;
	std::string CloudPath = std::string(".\\PointCloud\\HMwuding.txt");
	ScarletCloudIO CloudReader(CloudPath);
	CloudReader.Read();
	CloudReader.PrintStationStatus();
	VertexPositionColor *CloudPtr = CloudReader.PtCloud().PointCloud;
	int CloudNum = CloudReader.PtCloud().PointNum;
	MappingCloud(CloudPtr,CloudNum, IMG, K_intrinsic,//cloud, point count, image, intrinsics
		RotationR, TransMatrix,true, OutCloudPath);//rotation, translation (not the camera center), output path for the colored cloud

	//possible follow-up: study the influence of rotation, translation, and scale
}
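One detail worth pulling out of the test code: solvePnP returns the world-to-camera transform (rvec, tvec), so the camera center in world coordinates is C = -R^(-1) * T, which is what the program prints after the Rodrigues conversion. A minimal sanity check, assuming RotationR and TransMatrix are the matrices computed above:

//Sketch: the camera center must map back to the camera-frame origin, so R*C + T should be (numerically) zero.
cv::Mat C = -RotationR.inv() * TransMatrix;//3x1 camera center in world coordinates
cv::Mat residual = RotationR * C + TransMatrix;//expected to be the zero vector up to round-off
std::cout << "R*C + T = " << residual.t() << std::endl;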

The experiment report and the parameters I adjusted are all spelled out in the comments above. Below are the texture-mapping results; the naming convention is method_numberOfControlPoints.

First, the original:



------------------DLS_4Point_HMwuding: (wrong)



------------------DLS_5Point_HMwuding: (correct)



-----------------------EPNP_4Point_HMwuding.txt (wrong)



-----------------------EPNP_5Point_HMwuding.txt (correct)



---------------------ITERATIVE_4Point_HMwuding.txt (correct)



-----------------P3P_4Point_HMwuding.txt (correct)


That is about all there is to it.
