Using Kinect v2 in Visual Studio

Preface: a write-up of some old notes.

1. Downloading and testing the Kinect v2 SDK

KinectSDK-v2.0_1409-Setup official download page
Kinect for Windows SDK 2.0 official documentation
Kinect v2 full teardown
Kinect data extraction and coordinate transformation

1.1 Installation

The SDK installer is straightforward: double-click it, then reboot when prompted. If installation fails as shown below, reset your Windows system (if the installer still fails after the reset, the reset itself did not complete successfully; run it again. This happened to me once, and a second reset fixed it).
image.png
Note: when I first hit this error I searched online; most answers said to uninstall the SDK completely and download it again. That worked for some people, but after trying it I still got the same error. I then remembered that I had earlier uninstalled a lot of software I didn't think I needed, and I suspect I removed a built-in Microsoft driver along with it, which is what broke the installer.

1.2 Testing

Plug the Kinect v2 into a USB 3.0 port (the blue port on your PC) and the camera should respond, because Windows 10 automatically installs the main Kinect v2 driver. Open SDK Browser and run the camera's self-test program; if you see the screen below, the Kinect v2 is connected.
image.png
Note: the exclamation mark on the USB row indicates a poor connection, but it does not affect the rest of the camera tests, so it can be ignored.

2. Downloading and installing VS2013

2.1 Download

VS2013 download page
image.png
VS2013 official documentation
Note: I only downloaded the Ultimate edition circled in the screenshot plus a product key (for activation). If the link does not open, it is probably because I used my university's mirror for convenience and speed; you can also download it from the WeChat account “軟件安裝管家” (a very good account, highly recommended).

2.2 Installation

Follow the tutorial from the WeChat account “軟件安裝管家”.

3. Installing and configuring OpenCV

OpenCV 3.0 download page
image.png
image.png
image.png

3.1 Configuring environment variables

Control Panel → System → Advanced system settings → Environment Variables → New system variable
image.png
Control Panel → System → Advanced system settings → Environment Variables → Path
image.png

  • x86 and x64 correspond to 32-bit and 64-bit VS projects; vc10, vc11, and vc12 correspond to the compiler versions used by VS2010, VS2012, and VS2013 respectively.
  • After setting the environment variables, reboot so they take effect immediately; this avoids a whole class of path-related problems.

3.2 Configuring OpenCV in VS

First put the OpenCV 3.0 property sheet content into a text file (as shown below):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ImportGroup Label="PropertySheets" />
  <PropertyGroup Label="UserMacros" />
  <PropertyGroup>
    <IncludePath>$(OPENCV)\include;$(IncludePath)</IncludePath>
    <LibraryPath Condition="'$(Platform)'=='Win32'">$(OPENCV)\x86\vc12\lib;$(OPENCV)\x86\vc12\staticlib;$(LibraryPath)</LibraryPath>
    <LibraryPath Condition="'$(Platform)'=='X64'">$(OPENCV)\x64\vc12\lib;$(OPENCV)\x64\vc12\staticlib;$(LibraryPath)</LibraryPath>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <Link Condition="'$(Configuration)'=='Debug'">
      <AdditionalDependencies>opencv_ts300d.lib;opencv_world300d.lib;IlmImfd.lib;libjasperd.lib;libjpegd.lib;libpngd.lib;libtiffd.lib;libwebpd.lib;opencv_calib3d300d.lib;opencv_core300d.lib;opencv_features2d300d.lib;opencv_flann300d.lib;opencv_highgui300d.lib;opencv_imgcodecs300d.lib;opencv_imgproc300d.lib;opencv_ml300d.lib;opencv_objdetect300d.lib;opencv_photo300d.lib;opencv_shape300d.lib;opencv_stitching300d.lib;opencv_superres300d.lib;opencv_ts300d.lib;opencv_video300d.lib;opencv_videoio300d.lib;opencv_videostab300d.lib;zlibd.lib;%(AdditionalDependencies)
      </AdditionalDependencies>
    </Link>
    <Link Condition="'$(Configuration)'=='Release'">
      <AdditionalDependencies>opencv_ts300.lib;opencv_world300.lib;IlmImf.lib;ippicvmt.lib;libjasper.lib;libjpeg.lib;libpng.lib;libtiff.lib;libwebp.lib;opencv_calib3d300.lib;opencv_core300.lib;opencv_features2d300.lib;opencv_flann300.lib;opencv_highgui300.lib;opencv_imgcodecs300.lib;opencv_imgproc300.lib;opencv_ml300.lib;opencv_objdetect300.lib;opencv_photo300.lib;opencv_shape300.lib;opencv_stitching300.lib;opencv_superres300.lib;opencv_ts300.lib;opencv_video300.lib;opencv_videoio300.lib;opencv_videostab300.lib;zlib.lib;%(AdditionalDependencies)
      </AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>
  <ItemGroup />
</Project>

image.png

To turn the text file into a property sheet: many online guides say you can simply rename the file with a .props extension, but that did not work for me. After renaming, the file properties still showed it as a txt file, which was quite frustrating. I wasted a fair amount of time on this before finding the right method, so here is a screenshot of my process to save you the detour: when you use Save As, make sure to set the save type to “All Files” (I marked it with a star in the screenshot).

3.3 Creating a VS project to test the configuration

Open VS2013 → File → New → Project → Visual C++ → Win32 Console Application → name the project opencv_test → OK → Next → check Empty project → Finish
image.png
image.png
image.png
Search for “Property Manager” in the search box → right-click opencv_test → Add Existing Property Sheet → browse to the property sheet and open it
image.png
Search for “Solution Explorer” (Ctrl+Alt+L) → right-click Source Files → Add → New Item → Visual C++ → C++ File → name the source file test (the extension is added automatically)
image.png
Add the following code to the program:

#include <opencv2\opencv.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main()
{
	Mat img = imread("baobao.jpg");
	if (img.empty())
	{
		cout << "Failed to load image" << endl;
		return -1;
	}
	imshow("baby:", img);
	waitKey();

	return 0;
}

Open the folder with the same name as the project → add the baobao image
image.png
image.png
Back in the program, press F5 to start debugging, and you should see the picture
image.png

4. Configuring Kinect v2 in VS (together with OpenCV)

4.1 Creating the project

Same as for OpenCV above.

4.2 Configuring Kinect v2

Search for “Property Manager” → right-click Debug|Win32 and choose “Add New Project Property Sheet” → name it “kinect_project.props” and save it in the main folder you use for VS projects (so that future Kinect v2 projects can simply use “Add Existing Property Sheet” and point to this kinect_project.props)
image.png
Double-click kinect_project.props → VC++ Directories → add Kinect's inc to Include Directories → add Kinect's x86 lib to Library Directories → add "Kinect20.lib" to Linker → Input
image.png
image.png
image.png

4.3 Add the following code to the program

#include <Kinect.h>		// Kinect header
#include <iostream>
#include <opencv2\highgui.hpp>			// OpenCV header
using   namespace   std;
using   namespace   cv;
int main(void)
{
	IKinectSensor   * mySensor = nullptr;
	GetDefaultKinectSensor(&mySensor);  // get the sensor
	mySensor->Open();           // open the sensor
	IDepthFrameSource   * mySource = nullptr;   // depth data source
	mySensor->get_DepthFrameSource(&mySource);
	int height = 0, width = 0;
	IFrameDescription   * myDescription = nullptr;  // get the depth stream resolution
	mySource->get_FrameDescription(&myDescription);
	myDescription->get_Height(&height);
	myDescription->get_Width(&width);
	myDescription->Release();
	IDepthFrameReader   * myReader = nullptr;
	mySource->OpenReader(&myReader);    // open a reader for the depth data
	IDepthFrame * myFrame = nullptr;
	Mat temp(height, width, CV_16UC1);    // create the image matrices
	Mat img(height, width, CV_8UC1);
	while (1)
	{
		if (myReader->AcquireLatestFrame(&myFrame) == S_OK) // try to grab the latest depth frame through the reader and check whether it succeeded
		{
			myFrame->CopyFrameDataToArray(height * width, (UINT16 *)temp.data); // copy the data into the 16-bit matrix first
			temp.convertTo(img, CV_8UC1, 255.0 / 4500);   // then convert 16-bit to 8-bit
			imshow("TEST", img);
			myFrame->Release();
		}
		if (waitKey(30) == VK_ESCAPE)
			break;
	}
	myReader->Release();        // release unused variables and close the sensor
	mySource->Release();
	mySensor->Close();
	mySensor->Release();
	return  0;
}

Back in the program, press F5 to start debugging; you should see the depth image:
image.png
Alternatively, the following program captures the Kinect color, depth, and infrared images at once:

#include <Kinect.h> 
#include <iostream>
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
using namespace cv;
using namespace std;
// Safely release a COM interface pointer
template<class Interface>
inline void SafeRelease(Interface *& pInterfaceToRelease)
{
	if (pInterfaceToRelease != NULL)
	{
		pInterfaceToRelease->Release();
		pInterfaceToRelease = NULL;
	}
}
int main()
{
	// get the Kinect device
	IKinectSensor* m_pKinectSensor;
	HRESULT hr;
	hr = GetDefaultKinectSensor(&m_pKinectSensor);
	if (FAILED(hr))
	{
		return hr;
	}
	IMultiSourceFrameReader* m_pMultiFrameReader = NULL;
	if (m_pKinectSensor)
	{
		hr = m_pKinectSensor->Open();
		if (SUCCEEDED(hr))
		{
			// open a multi-source frame reader for color, infrared, and depth
			hr = m_pKinectSensor->OpenMultiSourceFrameReader(
				FrameSourceTypes::FrameSourceTypes_Color |
				FrameSourceTypes::FrameSourceTypes_Infrared |
				FrameSourceTypes::FrameSourceTypes_Depth,
				&m_pMultiFrameReader);
		}
	}
	if (!m_pKinectSensor || FAILED(hr))
	{
		return E_FAIL;
	}
	// the three frames and their references
	IDepthFrameReference* m_pDepthFrameReference = NULL;
	IColorFrameReference* m_pColorFrameReference = NULL;
	IInfraredFrameReference* m_pInfraredFrameReference = NULL;
	IInfraredFrame* m_pInfraredFrame = NULL;
	IDepthFrame* m_pDepthFrame = NULL;
	IColorFrame* m_pColorFrame = NULL;
	// image buffers for the three streams
	Mat i_rgb(1080, 1920, CV_8UC4);      // note: this must be a 4-channel image; Kinect color data can only be delivered as BGRA
	Mat i_depth(424, 512, CV_8UC1);
	Mat i_ir(424, 512, CV_16UC1);
	UINT16 *depthData = new UINT16[424 * 512];
	IMultiSourceFrame* m_pMultiFrame = nullptr;
	while (true)
	{
		// acquire the latest multi-source frame
		hr = m_pMultiFrameReader->AcquireLatestFrame(&m_pMultiFrame);
		if (FAILED(hr) || !m_pMultiFrame)
		{
			//cout << "!!!" << endl;
			continue;
		}
		// split the color, depth, and infrared data out of the multi-source frame
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_ColorFrameReference(&m_pColorFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pColorFrameReference->AcquireFrame(&m_pColorFrame);
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_DepthFrameReference(&m_pDepthFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pDepthFrameReference->AcquireFrame(&m_pDepthFrame);
		if (SUCCEEDED(hr))
			hr = m_pMultiFrame->get_InfraredFrameReference(&m_pInfraredFrameReference);
		if (SUCCEEDED(hr))
			hr = m_pInfraredFrameReference->AcquireFrame(&m_pInfraredFrame);
		// copy the color data into its image
		UINT nColorBufferSize = 1920 * 1080 * 4;
		if (SUCCEEDED(hr))
			hr = m_pColorFrame->CopyConvertedFrameDataToArray(nColorBufferSize, reinterpret_cast<BYTE*>(i_rgb.data), ColorImageFormat::ColorImageFormat_Bgra);
		// copy the depth data into its image
		if (SUCCEEDED(hr))
		{
			hr = m_pDepthFrame->CopyFrameDataToArray(424 * 512, depthData);
			for (int i = 0; i < 512 * 424; i++)
			{
				// 0-255 depth image; keep only the low 8 bits of the depth value so the display is clearly visible
				BYTE intensity = static_cast<BYTE>(depthData[i] % 256);
				reinterpret_cast<BYTE*>(i_depth.data)[i] = intensity;
			}
			// the raw data is actually 16-bit unsigned int
			//hr = m_pDepthFrame->CopyFrameDataToArray(424 * 512, reinterpret_cast<UINT16*>(i_depth.data));
		}
		// copy the infrared data into its image
		if (SUCCEEDED(hr))
		{
			hr = m_pInfraredFrame->CopyFrameDataToArray(424 * 512, reinterpret_cast<UINT16*>(i_ir.data));
		}
		// display
		imshow("rgb", i_rgb);
		if (waitKey(1) == VK_ESCAPE)
			break;
		imshow("depth", i_depth);
		if (waitKey(1) == VK_ESCAPE)
			break;
		imshow("ir", i_ir);
		if (waitKey(1) == VK_ESCAPE)
			break;
		// release resources
		SafeRelease(m_pColorFrame);
		SafeRelease(m_pDepthFrame);
		SafeRelease(m_pInfraredFrame);
		SafeRelease(m_pColorFrameReference);
		SafeRelease(m_pDepthFrameReference);
		SafeRelease(m_pInfraredFrameReference);
		SafeRelease(m_pMultiFrame);
	}
	// close the windows and the device
	cv::destroyAllWindows();
	m_pKinectSensor->Close();
	std::system("pause");
	return 0;
}

5. Configuring PCL in VS

Point Cloud Library official site
Reference blog 1
Reference blog 2
Reference blog 3
Some notes on finding and using PCL learning resources

5.1 Download and installation

VS creates 32-bit projects by default, so download the 32-bit PCL library. The steps match the reference blog, except for the install path.
image.png
image.png
image.png
image.png
image.png
image.png
image.png
image.png
image.png

5.2 Configuring environment variables

After installing PCL, the following entries are configured automatically:
image.png
image.png
image.png

Next, similarly to the OpenCV configuration, add the following to Path:

%PCL_ROOT%\3rdParty\FLANN\bin
%PCL_ROOT%\3rdParty\Qhull\bin
%PCL_ROOT%\3rdParty\VTK\bin
E:\ruanjianbao\PCL\ProgramFile\PCL1.8.0\OpenNI2\Tools

image.png
Reboot after adding these (do not forget the reboot, otherwise you will hit the missing-dll problem I describe later).

5.3 Configuring PCL in VS

Create a new project pcl_test, as before
image.png
image.png
A trick for collecting all the static library file names under E:\ruanjianbao\PCL\ProgramFile\PCL 1.8.0\3rdParty\VTK\lib into a text file 0.txt:

  • Win+R
  • cmd → Enter
  • cd /d E:\ruanjianbao\PCL1.8\ProgramFile\PCL 1.8.0\3rdParty\VTK\lib → Enter
  • dir /b *.lib >0.txt → Enter
    image.png
    Add all of the following dependencies:
pcl_common_debug.lib
pcl_features_debug.lib
pcl_filters_debug.lib
pcl_io_debug.lib
pcl_io_ply_debug.lib
pcl_kdtree_debug.lib
pcl_keypoints_debug.lib
pcl_ml_debug.lib
pcl_octree_debug.lib
pcl_outofcore_debug.lib
pcl_people_debug.lib
pcl_recognition_debug.lib
pcl_registration_debug.lib
pcl_sample_consensus_debug.lib
pcl_search_debug.lib
pcl_segmentation_debug.lib
pcl_stereo_debug.lib
pcl_surface_debug.lib
pcl_tracking_debug.lib
pcl_visualization_debug.lib
libboost_atomic-vc120-mt-gd-1_59.lib
libboost_chrono-vc120-mt-gd-1_59.lib
libboost_container-vc120-mt-gd-1_59.lib
libboost_context-vc120-mt-gd-1_59.lib
libboost_coroutine-vc120-mt-gd-1_59.lib
libboost_date_time-vc120-mt-gd-1_59.lib
libboost_exception-vc120-mt-gd-1_59.lib
libboost_filesystem-vc120-mt-gd-1_59.lib
libboost_graph-vc120-mt-gd-1_59.lib
libboost_iostreams-vc120-mt-gd-1_59.lib
libboost_locale-vc120-mt-gd-1_59.lib
libboost_log-vc120-mt-gd-1_59.lib
libboost_log_setup-vc120-mt-gd-1_59.lib
libboost_math_c99-vc120-mt-gd-1_59.lib
libboost_math_c99f-vc120-mt-gd-1_59.lib
libboost_math_c99l-vc120-mt-gd-1_59.lib
libboost_math_tr1-vc120-mt-gd-1_59.lib
libboost_math_tr1f-vc120-mt-gd-1_59.lib
libboost_math_tr1l-vc120-mt-gd-1_59.lib
libboost_mpi-vc120-mt-gd-1_59.lib
libboost_prg_exec_monitor-vc120-mt-gd-1_59.lib
libboost_program_options-vc120-mt-gd-1_59.lib
libboost_random-vc120-mt-gd-1_59.lib
libboost_regex-vc120-mt-gd-1_59.lib
libboost_serialization-vc120-mt-gd-1_59.lib
libboost_signals-vc120-mt-gd-1_59.lib
libboost_system-vc120-mt-gd-1_59.lib
libboost_test_exec_monitor-vc120-mt-gd-1_59.lib
libboost_thread-vc120-mt-gd-1_59.lib
libboost_timer-vc120-mt-gd-1_59.lib
libboost_unit_test_framework-vc120-mt-gd-1_59.lib
libboost_wave-vc120-mt-gd-1_59.lib
libboost_wserialization-vc120-mt-gd-1_59.lib
flann-gd.lib
flann_cpp_s-gd.lib
flann_s-gd.lib
qhull-gd.lib
qhullcpp-gd.lib
qhullstatic-gd.lib
qhullstatic_r-gd.lib
qhull_p-gd.lib
qhull_r-gd.lib
vtkalglib-7.0-gd.lib
vtkChartsCore-7.0-gd.lib
vtkCommonColor-7.0-gd.lib
vtkCommonComputationalGeometry-7.0-gd.lib
vtkCommonCore-7.0-gd.lib
vtkCommonDataModel-7.0-gd.lib
vtkCommonExecutionModel-7.0-gd.lib
vtkCommonMath-7.0-gd.lib
vtkCommonMisc-7.0-gd.lib
vtkCommonSystem-7.0-gd.lib
vtkCommonTransforms-7.0-gd.lib
vtkDICOMParser-7.0-gd.lib
vtkDomainsChemistry-7.0-gd.lib
vtkDomainsChemistryOpenGL2-7.0-gd.lib
vtkexoIIc-7.0-gd.lib
vtkexpat-7.0-gd.lib
vtkFiltersAMR-7.0-gd.lib
vtkFiltersCore-7.0-gd.lib
vtkFiltersExtraction-7.0-gd.lib
vtkFiltersFlowPaths-7.0-gd.lib
vtkFiltersGeneral-7.0-gd.lib
vtkFiltersGeneric-7.0-gd.lib
vtkFiltersGeometry-7.0-gd.lib
vtkFiltersHybrid-7.0-gd.lib
vtkFiltersHyperTree-7.0-gd.lib
vtkFiltersImaging-7.0-gd.lib
vtkFiltersModeling-7.0-gd.lib
vtkFiltersParallel-7.0-gd.lib
vtkFiltersParallelImaging-7.0-gd.lib
vtkFiltersProgrammable-7.0-gd.lib
vtkFiltersSelection-7.0-gd.lib
vtkFiltersSMP-7.0-gd.lib
vtkFiltersSources-7.0-gd.lib
vtkFiltersStatistics-7.0-gd.lib
vtkFiltersTexture-7.0-gd.lib
vtkFiltersVerdict-7.0-gd.lib
vtkfreetype-7.0-gd.lib
vtkGeovisCore-7.0-gd.lib
vtkglew-7.0-gd.lib
vtkhdf5-7.0-gd.lib
vtkhdf5_hl-7.0-gd.lib
vtkImagingColor-7.0-gd.lib
vtkImagingCore-7.0-gd.lib
vtkImagingFourier-7.0-gd.lib
vtkImagingGeneral-7.0-gd.lib
vtkImagingHybrid-7.0-gd.lib
vtkImagingMath-7.0-gd.lib
vtkImagingMorphological-7.0-gd.lib
vtkImagingSources-7.0-gd.lib
vtkImagingStatistics-7.0-gd.lib
vtkImagingStencil-7.0-gd.lib
vtkInfovisCore-7.0-gd.lib
vtkInfovisLayout-7.0-gd.lib
vtkInteractionImage-7.0-gd.lib
vtkInteractionStyle-7.0-gd.lib
vtkInteractionWidgets-7.0-gd.lib
vtkIOAMR-7.0-gd.lib
vtkIOCore-7.0-gd.lib
vtkIOEnSight-7.0-gd.lib
vtkIOExodus-7.0-gd.lib
vtkIOExport-7.0-gd.lib
vtkIOGeometry-7.0-gd.lib
vtkIOImage-7.0-gd.lib
vtkIOImport-7.0-gd.lib
vtkIOInfovis-7.0-gd.lib
vtkIOLegacy-7.0-gd.lib
vtkIOLSDyna-7.0-gd.lib
vtkIOMINC-7.0-gd.lib
vtkIOMovie-7.0-gd.lib
vtkIONetCDF-7.0-gd.lib
vtkIOParallel-7.0-gd.lib
vtkIOParallelXML-7.0-gd.lib
vtkIOPLY-7.0-gd.lib
vtkIOSQL-7.0-gd.lib
vtkIOVideo-7.0-gd.lib
vtkIOXML-7.0-gd.lib
vtkIOXMLParser-7.0-gd.lib
vtkjpeg-7.0-gd.lib
vtkjsoncpp-7.0-gd.lib
vtklibxml2-7.0-gd.lib
vtkmetaio-7.0-gd.lib
vtkNetCDF-7.0-gd.lib
vtkNetCDF_cxx-7.0-gd.lib
vtkoggtheora-7.0-gd.lib
vtkParallelCore-7.0-gd.lib
vtkpng-7.0-gd.lib
vtkproj4-7.0-gd.lib
vtkRenderingAnnotation-7.0-gd.lib
vtkRenderingContext2D-7.0-gd.lib
vtkRenderingContextOpenGL2-7.0-gd.lib
vtkRenderingCore-7.0-gd.lib
vtkRenderingFreeType-7.0-gd.lib
vtkRenderingImage-7.0-gd.lib
vtkRenderingLabel-7.0-gd.lib
vtkRenderingLOD-7.0-gd.lib
vtkRenderingOpenGL2-7.0-gd.lib
vtkRenderingQt-7.0-gd.lib
vtkRenderingVolume-7.0-gd.lib
vtkRenderingVolumeOpenGL2-7.0-gd.lib
vtksqlite-7.0-gd.lib
vtksys-7.0-gd.lib
vtktiff-7.0-gd.lib
vtkverdict-7.0-gd.lib
vtkViewsContext2D-7.0-gd.lib
vtkViewsCore-7.0-gd.lib
vtkViewsInfovis-7.0-gd.lib
vtkViewsQt-7.0-gd.lib
vtkzlib-7.0-gd.lib

image.png
Add the following two definitions:

_SCL_SECURE_NO_WARNINGS 
_CRT_SECURE_NO_WARNINGS

image.png
Note: if you skip this step, you will hit the error described later.

5.4 A small PCL test

#include <pcl/visualization/cloud_viewer.h>  
#include <iostream>  
#include <sstream>
#include <pcl/io/io.h>  
#include <pcl/io/pcd_io.h>  

int user_data;

void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 0.5, 1.0);
	pcl::PointXYZ o;
	o.x = 1.0;
	o.y = 0;
	o.z = 0;
	viewer.addSphere(o, 0.25, "sphere", 0);
	std::cout << "i only run once" << std::endl;

}

void viewerPsycho(pcl::visualization::PCLVisualizer& viewer)
{
	static unsigned count = 0;
	std::stringstream ss;
	ss << "Once per viewer loop: " << count++;
	viewer.removeShape("text", 0);
	viewer.addText(ss.str(), 200, 300, "text", 0);

	//FIXME: possible race condition here:  
	user_data++;
}

int main()
{
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	pcl::io::loadPCDFile("my_point_cloud.pcd", *cloud);

	pcl::visualization::CloudViewer viewer("Cloud Viewer");

	//blocks until the cloud is actually rendered  
	viewer.showCloud(cloud);

	//use the following functions to get access to the underlying more advanced/powerful  
	//PCLVisualizer  

	//This will only get called once  
	viewer.runOnVisualizationThreadOnce(viewerOneOff);

	//This will get called once per visualization iteration  
	viewer.runOnVisualizationThread(viewerPsycho);
	while (!viewer.wasStopped())
	{
		//you can also do cool processing here  
		//FIXME: Note that this is running in a separate thread from viewerPsycho  
		//and you should guard against race conditions yourself...  
		user_data++;
	}
	return 0;
}

image.png

5.5 Reading Kinect v2 point clouds in VS with PCL and OpenCV

Reference blog

#include <Kinect.h>
#include <opencv2\opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <pcl/visualization/cloud_viewer.h> 
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

using namespace cv;
using namespace std;
IKinectSensor* pSensor;
ICoordinateMapper *pMapper;
const int iDWidth = 512, iDHeight = 424;  // depth image size
const int iCWidth = 1920, iCHeight = 1080;  // color image size
CameraSpacePoint depth2xyz[iDWidth*iDHeight];
ColorSpacePoint depth2rgb[iCWidth*iCHeight];
void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 1.0, 1.0);  // set the background color
}
// start the Kinect
bool initKinect()
{
	if (FAILED(GetDefaultKinectSensor(&pSensor))) return false;
	if (pSensor)
	{
		pSensor->get_CoordinateMapper(&pMapper);
		pSensor->Open();
		cout << "已打開相機" << endl;
		return true;
	}
	else return false;
}
// acquire a depth frame
Mat DepthData()
{
	IDepthFrameSource* pFrameSource = nullptr;
	pSensor->get_DepthFrameSource(&pFrameSource);
	IDepthFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IDepthFrame* pFrame = nullptr;
	Mat mDepthImg(iDHeight, iDWidth, CV_16UC1);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyFrameDataToArray(iDWidth * iDHeight, reinterpret_cast<UINT16*>(mDepthImg.data));
			cout << "已獲取深度幀" << endl;
			pFrame->Release();
			return mDepthImg;
			break;
		}
	}
}
// acquire a color frame
Mat RGBData()
{
	IColorFrameSource* pFrameSource = nullptr;
	pSensor->get_ColorFrameSource(&pFrameSource);
	IColorFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IColorFrame* pFrame = nullptr;
	Mat mColorImg(iCHeight, iCWidth, CV_8UC4);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyConvertedFrameDataToArray(iCWidth * iCHeight * 4, mColorImg.data, ColorImageFormat_Bgra);
			cout << "已獲取彩色幀" << endl;
			pFrame->Release();
			return mColorImg;
			break;
		}
	}
}
int main()
{
	initKinect();
	pcl::visualization::CloudViewer viewer("Cloud Viewer");
	viewer.runOnVisualizationThreadOnce(viewerOneOff);
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	Mat mColorImg;
	Mat mDepthImg;
	while (cv::waitKey(30) != VK_ESCAPE)
	{
		mColorImg = RGBData();
		mDepthImg = DepthData();
		imshow("RGB", mColorImg);
		pMapper->MapDepthFrameToColorSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2rgb);  // map the depth frame into color space
		pMapper->MapDepthFrameToCameraSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2xyz);  // map the depth frame into 3-D camera space
		//for (int i = 0; i < iDWidth*iDHeight; i++)
		//{
		//	cout << i << ":  " << "X=" << depth2rgb[i].X << ";  Y=" << depth2rgb[i].Y<<endl;
		//}
		float maxX = depth2xyz[0].X, maxY = depth2xyz[0].Y, maxZ = depth2xyz[0].Z;
		float minX = depth2xyz[0].X, minY = depth2xyz[0].Y, minZ = depth2xyz[0].Z;
		for (size_t i = 0; i < iDWidth; i++)
		{
			for (size_t j = 0; j < iDHeight; j++)
			{
				pcl::PointXYZRGBA pointTemp;
				if (depth2xyz[i + j*iDWidth].Z > 0.5&&depth2rgb[i + j*iDWidth].X<1920 && depth2rgb[i + j*iDWidth].X>0 && depth2rgb[i + j*iDWidth].Y<1080 && depth2rgb[i + j*iDWidth].Y>0)
				{
					pointTemp.x = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X > maxX) maxX = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X < minX) minX = -depth2xyz[i + j*iDWidth].X;
					pointTemp.y = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y > maxY) maxY = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y < minY) minY = depth2xyz[i + j*iDWidth].Y;
					pointTemp.z = depth2xyz[i + j*iDWidth].Z;
					if (depth2xyz[i + j*iDWidth].Z != 0.0)
					{
						if (depth2xyz[i + j*iDWidth].Z > maxZ) maxZ = depth2xyz[i + j*iDWidth].Z;
						if (depth2xyz[i + j*iDWidth].Z < minZ) minZ = depth2xyz[i + j*iDWidth].Z;
					}
					pointTemp.b = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[0];
					pointTemp.g = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[1];
					pointTemp.r = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[2];
					pointTemp.a = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[3];
					cloud->push_back(pointTemp);
				}
			}
		}
		viewer.showCloud(cloud);
		mColorImg.release();
		mDepthImg.release();
		pcl::io::savePCDFileASCII("cloud.pcd",*cloud);
		cloud->clear();
		waitKey(10);
	}
	return 0;
}

5.6 Saving Kinect v2 point clouds in VS with PCL and OpenCV

#include <Kinect.h>

#include<vector>

#include <opencv2\opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

//#include <pcl/io/io.h>  
#include <pcl/io/pcd_io.h> 
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/cloud_viewer.h> 
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/io/ply_io.h>

using namespace cv;
using namespace std;

//typedef pcl::PointXYZRGBA PointT;
//typedef pcl::PointCloud< PointT> PointCloud;

IKinectSensor* pSensor;
ICoordinateMapper *pMapper;
const int iDWidth = 512, iDHeight = 424;  // depth image size
const int iCWidth = 1920, iCHeight = 1080;  // color image size
CameraSpacePoint depth2xyz[iDWidth*iDHeight];
ColorSpacePoint depth2rgb[iCWidth*iCHeight];
void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
	viewer.setBackgroundColor(1.0, 1.0, 1.0);  // set the background color
}
// start the Kinect
bool initKinect()
{
	if (FAILED(GetDefaultKinectSensor(&pSensor))) return false;
	if (pSensor)
	{
		pSensor->get_CoordinateMapper(&pMapper);
		pSensor->Open();
		cout << "已打開相機" << endl;
		return true;
	}
	else return false;
}
// acquire a depth frame
Mat DepthData()
{
	IDepthFrameSource* pFrameSource = nullptr;
	pSensor->get_DepthFrameSource(&pFrameSource);
	IDepthFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IDepthFrame* pFrame = nullptr;
	Mat mDepthImg(iDHeight, iDWidth, CV_16UC1);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyFrameDataToArray(iDWidth * iDHeight, reinterpret_cast<UINT16*>(mDepthImg.data));
			cout << "已獲取深度幀" << endl;
			pFrame->Release();
			return mDepthImg;
			break;
		}
	}
}
// acquire a color frame
Mat RGBData()
{
	IColorFrameSource* pFrameSource = nullptr;
	pSensor->get_ColorFrameSource(&pFrameSource);
	IColorFrameReader* pFrameReader = nullptr;
	pFrameSource->OpenReader(&pFrameReader);
	IColorFrame* pFrame = nullptr;
	Mat mColorImg(iCHeight, iCWidth, CV_8UC4);
	while (true)
	{
		if (pFrameReader->AcquireLatestFrame(&pFrame) == S_OK)
		{
			pFrame->CopyConvertedFrameDataToArray(iCWidth * iCHeight * 4, mColorImg.data, ColorImageFormat_Bgra);
			cout << "已獲取彩色幀" << endl;
			pFrame->Release();
			return mColorImg;
			break;
		}
	}
}
int main(int argc, char** argv)
{
	initKinect();
	pcl::visualization::CloudViewer viewer("Cloud Viewer");
	viewer.runOnVisualizationThreadOnce(viewerOneOff);
	pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
	//PointCloud::Ptr cloud(new PointCloud);
	Mat mColorImg;
	Mat mDepthImg;
	while (cv::waitKey(30) != VK_ESCAPE)
	{
		mColorImg = RGBData();
		mDepthImg = DepthData();
		imwrite("color.jpg", mColorImg);
		imshow("RGB", mColorImg);
		imwrite("depth.jpg", mDepthImg);
		imshow("Depth", mDepthImg);
		pMapper->MapDepthFrameToColorSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2rgb);  // map the depth frame into color space
		pMapper->MapDepthFrameToCameraSpace(iDHeight*iDWidth, reinterpret_cast<UINT16*>(mDepthImg.data), iDHeight*iDWidth, depth2xyz);  // map the depth frame into 3-D camera space
		//for (int i = 0; i < iDWidth*iDHeight; i++)
		//{
		//	cout << i << ":  " << "X=" << depth2rgb[i].X << ";  Y=" << depth2rgb[i].Y<<endl;
		//}
		float maxX = depth2xyz[0].X, maxY = depth2xyz[0].Y, maxZ = depth2xyz[0].Z;
		float minX = depth2xyz[0].X, minY = depth2xyz[0].Y, minZ = depth2xyz[0].Z;
		//pcl::PointCloud<pcl::PointXYZ> savecloud;
		//savecloud.width = 512;
		//savecloud.height = 424;
		//savecloud.is_dense = false;
		//savecloud.points.resize(savecloud.width * savecloud.height);
		for (size_t i = 0; i < iDWidth; i++)
		{
			for (size_t j = 0; j < iDHeight; j++)
			{
				pcl::PointXYZRGBA pointTemp;
				//PointT pointTemp;
				if (depth2xyz[i + j*iDWidth].Z > 0.5&&depth2rgb[i + j*iDWidth].X<1920 && depth2rgb[i + j*iDWidth].X>0 && depth2rgb[i + j*iDWidth].Y<1080 && depth2rgb[i + j*iDWidth].Y>0)
				{
					pointTemp.x = -depth2xyz[i + j*iDWidth].X;
					//savecloud[i,j].x = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X > maxX) maxX = -depth2xyz[i + j*iDWidth].X;
					if (depth2xyz[i + j*iDWidth].X < minX) minX = -depth2xyz[i + j*iDWidth].X;
					//savecloud[i, j].y = depth2xyz[i + j*iDWidth].Y;
					pointTemp.y = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y > maxY) maxY = depth2xyz[i + j*iDWidth].Y;
					if (depth2xyz[i + j*iDWidth].Y < minY) minY = depth2xyz[i + j*iDWidth].Y;
					//savecloud[i, j].z = depth2xyz[i + j*iDWidth].Z;
					pointTemp.z = depth2xyz[i + j*iDWidth].Z;
					if (depth2xyz[i + j*iDWidth].Z != 0.0)
					{
						if (depth2xyz[i + j*iDWidth].Z > maxZ) maxZ = depth2xyz[i + j*iDWidth].Z;
						if (depth2xyz[i + j*iDWidth].Z < minZ) minZ = depth2xyz[i + j*iDWidth].Z;
					}
					pointTemp.b = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[0];
					pointTemp.g = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[1];
					pointTemp.r = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[2];
					pointTemp.a = mColorImg.at<cv::Vec4b>(depth2rgb[i + j*iDWidth].Y, depth2rgb[i + j*iDWidth].X)[3];
					cloud->push_back(pointTemp);
				}

			}
			//cloud->height = 1;
			//cloud->width = cloud->points.size();
			//cloud->is_dense = false;
			//pcl::io::savePCDFileASCII("pointcloud.pcd", *cloud);
			//cout << "point cloud size = " << cloud->points.size() << endl;
			//pcl::io::savePCDFileASCII("test_pcd.pcd", savecloud);	
		}
		pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		viewer.showCloud(cloud);
		//std::cerr << "Saved " << savecloud.points.size() << " data points to test_pcd.pcd." << std::endl;
		//for (size_t i = 0; i < savecloud.points.size(); ++i)
			//std::cerr << "    " << savecloud.points[i].x << " " << savecloud.points[i].y << " " << savecloud.points[i].z << std::endl;
		//std::cerr << "Saved " << savecloud.points.size() << " data points to test_pcd.pcd." << std::endl;
		//pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		mColorImg.release();
		mDepthImg.release();	
		//pcl::io::savePCDFileASCII("cloud.pcd", *cloud);
		//pcl::PLYWriter writer;
		//writer.write("cloud.ply", *cloud);
		cloud->clear();
		waitKey(3000);
	}
	return 0;
}

Open the project directory to find the saved RGB, depth, and point-cloud files
image.png

6. Problems encountered while saving point clouds with VS2013 and PCL

Problem 1

Cannot open an existing project or create a new one:
“no editor option definition export found for the given option name”
  • Solution:
    • Close VS
    • Clear the cache under your user directory on the C drive; mine is at:
      C:\Users\YQ\AppData\Local\Microsoft\VisualStudio\12.0\ComponentModelCache
    • Reopen VS
      Note:
      Visual Studio 2012 uses the 11.0 folder;
      Visual Studio 2013 uses the 12.0 folder.

Problem 2
image.png

  • Solution:
    • Double-click the warning
    • File → Advanced Save Options → change the Encoding field to the setting shown below → click OK
      image.png

Problem 3
image.png

  • Solution:
    • Find the Boost version you need on the Boost website and download it (mine is 1.59)
    • Unpack it
    • Open Visual Studio Tools in VS and choose the x86 command prompt (because I always use 32-bit VS projects)
    • Go to the unpacked directory → enter the boost_1_59_0 directory (cd /d E:\ruanjianbao\PCL\ProgramFile\boost_1_59_0) → run bootstrap.bat (start bootstrap.bat)
      image.png
    • bootstrap.bat generates b2.exe and bjam.exe → run the command below with b2.exe to build the 32-bit static libs
b2 stage --toolset=msvc-12.0 architecture=x86 --stagedir=".\lib\vc12_x86" link=static runtime-link=static threading=multi debug release
  • Reconfigure PCL in VS with the same steps as before, but replace everything Boost-related with the lib and include directories produced by the freshly built Boost
    Tip:
    A quick way to cd into a specific folder from cmd:
    navigate to the folder you need → click it → hold Shift and right-click; a “Copy as path” option appears, and you can paste the path wherever you need it.

Problem 4
image.png

  • Solution: pending
    Note:
    Multi-threaded Debug DLL (/MDd): MD_DynamicDebug
    Multi-threaded DLL (/MD): MD_DynamicRelease
    Multi-threaded (/MT): MD_StaticRelease
    Multi-threaded (/MTd): MD_StaticDebug

Problem 5
image.png

  • Solution:
    Add the following definition where the screenshot shows
_SCL_SECURE_NO_WARNINGS

image.png

Problem 6
image.png
image.png
image.png
image.png

  • Cause:
    The root cause lies in the environment variables: Windows searches only the directories listed in Path for the .dll files a program needs, so if a required .dll is not covered by Path, you get the “missing xxx.dll” error. In my case the variables had been configured correctly, but I had not rebooted afterwards, so the changes had not yet taken effect.
  • Solution:
    Reboot the machine; the problem goes away.

Problem 7
image.png

  • Cause:
    ① Value “0” does not match value “2”: the Debug build linked against Release libraries.
    ② Value “2” does not match value “0”: the Release build linked against Debug libraries.
  • Solution:
    Case 1: Project → Properties → Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions, add: _ITERATOR_DEBUG_LEVEL=0
    Case 2: Project → Properties → Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions, add: _ITERATOR_DEBUG_LEVEL=2

Problem 8
image.png
Add: /NODEFAULTLIB:msvcrt.lib
image.png
