https://www.cnblogs.com/li-yao7758258/p/6476046.html
(1) Converting a point cloud to a range image, and visualizing the result
First, the essential difference between a point cloud and a range image:
1. A range image (also called a depth image) is an image whose pixel values are the distances (depths) from the image sensor to the corresponding points in the scene. Acquisition methods include LiDAR depth imaging, computer stereo vision, coordinate measuring machines, moiré fringes, and structured light.
2. Point cloud: when a laser beam strikes an object's surface, the reflected laser carries information such as direction and distance. If the beam is swept along some scanning trajectory, the reflected laser points are recorded as it scans; because the scanning is extremely fine, a very large number of laser points is obtained, which together form a laser point cloud. Common point-cloud file formats include *.las, *.pcd, and *.txt.
A range image can be converted into point-cloud data by a coordinate transformation; conversely, organized point-cloud data carrying the necessary information can be converted back into a range image.
A RangeImage is an organized range image of a 3D scene, captured from one particular sensor viewpoint, that carries basic information such as the focal length (angular resolution).
The pixel values of a range image represent the distance, or depth, from the sensor to the object.
The RangeImage class derives from PointCloud; its main job is to produce the range image of a 3D scene observed from a specific viewpoint. The inheritance relationship is pcl::RangeImage : public pcl::PointCloud<pcl::PointWithRange>.
So we know that an organized cloud with the necessary information can be converted back into a range image. We can therefore either create an organized, regular point cloud directly (a plane, for example) or visualize the range image of a point cloud captured with a Kinect. Let us first analyze how the program converts a point cloud into a range image (the comments in the code are my own understanding, and are fairly detailed):
#include <iostream>
#include <boost/thread/thread.hpp>
#include <pcl/common/common_headers.h>
#include <pcl/range_image/range_image.h>               // range image support
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/range_image_visualizer.h>  // range image visualization
#include <pcl/visualization/pcl_visualizer.h>          // PCL visualization
#include <pcl/console/parse.h>

typedef pcl::PointXYZ PointType;

// Parameters
float angular_resolution_x = 0.5f,  // angular resolution of the simulated depth sensor,
                                    // i.e. the angle covered by one pixel of the range image
      angular_resolution_y = angular_resolution_x;
pcl::RangeImage::CoordinateFrame coordinate_frame = pcl::RangeImage::CAMERA_FRAME;  // coordinate frame convention for the range image
bool live_update = false;

// Command-line help
void printUsage (const char* progName)
{
  std::cout << "\n\nUsage: "<<progName<<" [options] <scene.pcd>\n\n"
            << "Options:\n"
            << "-------------------------------------------\n"
            << "-rx <float>  angular resolution in degrees (default "<<angular_resolution_x<<")\n"
            << "-ry <float>  angular resolution in degrees (default "<<angular_resolution_y<<")\n"
            << "-c <int>     coordinate frame (default "<< (int)coordinate_frame<<")\n"
            << "-l           live update - update the range image according to the selected view in the 3D viewer.\n"
            << "-h           this help\n"
            << "\n\n";
}

void setViewerPose (pcl::visualization::PCLVisualizer& viewer, const Eigen::Affine3f& viewer_pose)
{
  Eigen::Vector3f pos_vector = viewer_pose * Eigen::Vector3f(0, 0, 0);
  Eigen::Vector3f look_at_vector = viewer_pose.rotation () * Eigen::Vector3f(0, 0, 1) + pos_vector;
  Eigen::Vector3f up_vector = viewer_pose.rotation () * Eigen::Vector3f(0, -1, 0);
  viewer.setCameraPosition (pos_vector[0], pos_vector[1], pos_vector[2],
                            look_at_vector[0], look_at_vector[1], look_at_vector[2],
                            up_vector[0], up_vector[1], up_vector[2]);
}

// Main
int main (int argc, char** argv)
{
  // Parse command-line arguments
  if (pcl::console::find_argument (argc, argv, "-h") >= 0)
  {
    printUsage (argv[0]);
    return 0;
  }
  if (pcl::console::find_argument (argc, argv, "-l") >= 0)
  {
    live_update = true;
    std::cout << "Live update is on.\n";
  }
  if (pcl::console::parse (argc, argv, "-rx", angular_resolution_x) >= 0)
    std::cout << "Setting angular resolution in x-direction to "<<angular_resolution_x<<"deg.\n";
  if (pcl::console::parse (argc, argv, "-ry", angular_resolution_y) >= 0)
    std::cout << "Setting angular resolution in y-direction to "<<angular_resolution_y<<"deg.\n";
  int tmp_coordinate_frame;
  if (pcl::console::parse (argc, argv, "-c", tmp_coordinate_frame) >= 0)
  {
    coordinate_frame = pcl::RangeImage::CoordinateFrame (tmp_coordinate_frame);
    std::cout << "Using coordinate frame "<< (int)coordinate_frame<<".\n";
  }
  angular_resolution_x = pcl::deg2rad (angular_resolution_x);
  angular_resolution_y = pcl::deg2rad (angular_resolution_y);

  // Read a point cloud from a PCD file; if no file is given, generate a point cloud
  pcl::PointCloud<PointType>::Ptr point_cloud_ptr (new pcl::PointCloud<PointType>);
  pcl::PointCloud<PointType>& point_cloud = *point_cloud_ptr;
  Eigen::Affine3f scene_sensor_pose (Eigen::Affine3f::Identity ());  // declare the sensor pose as a 4x4 affine transform
  std::vector<int> pcd_filename_indices = pcl::console::parse_file_extension_argument (argc, argv, "pcd");
  if (!pcd_filename_indices.empty ())
  {
    std::string filename = argv[pcd_filename_indices[0]];
    if (pcl::io::loadPCDFile (filename, point_cloud) == -1)
    {
      std::cout << "Was not able to open file \""<<filename<<"\".\n";
      printUsage (argv[0]);
      return 0;
    }
    // Set the sensor pose, i.e. the translation and rotation of the sensor that acquired the cloud
    scene_sensor_pose = Eigen::Affine3f (Eigen::Translation3f (point_cloud.sensor_origin_[0],
                                                               point_cloud.sensor_origin_[1],
                                                               point_cloud.sensor_origin_[2])) *
                        Eigen::Affine3f (point_cloud.sensor_orientation_);
  }
  else
  {
    // No point cloud was given, so we generate one ourselves
    std::cout << "\nNo *.pcd file given => Generating example point cloud.\n\n";
    for (float x=-0.5f; x<=0.5f; x+=0.01f)
    {
      for (float y=-0.5f; y<=0.5f; y+=0.01f)
      {
        PointType point;
        point.x = x;  point.y = y;  point.z = 2.0f - y;
        point_cloud.points.push_back (point);
      }
    }
    point_cloud.width = (int) point_cloud.points.size ();
    point_cloud.height = 1;
  }

  // -----Create the range image from the point cloud-----
  // Basic parameters
  float noise_level = 0.0;
  float min_range = 0.0f;
  int border_size = 1;
  boost::shared_ptr<pcl::RangeImage> range_image_ptr(new pcl::RangeImage);
  pcl::RangeImage& range_image = *range_image_ptr;
  /* Parameters of range_image.createFromPointCloud() (all angles are in radians):
     point_cloud            the point cloud from which the range image is created
     angular_resolution_x   angular resolution of the depth sensor in the x-direction
     angular_resolution_y   angular resolution of the depth sensor in the y-direction
     pcl::deg2rad (360.0f)  maximum horizontal sampling angle of the sensor
     pcl::deg2rad (180.0f)  maximum vertical sampling angle
     scene_sensor_pose      pose of the simulated sensor as an affine transform; default is the 4x4 identity
     coordinate_frame       which coordinate frame convention to follow; default CAMERA_FRAME
     noise_level            how strongly neighboring points influence a query point's range value
     min_range              minimum acquisition distance; anything closer is the sensor's blind zone
     border_size            width of the border around the range image; default 0 */
  range_image.createFromPointCloud (point_cloud, angular_resolution_x, angular_resolution_y,
                                    pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                    scene_sensor_pose, coordinate_frame, noise_level, min_range, border_size);

  // Visualize the point cloud
  pcl::visualization::PCLVisualizer viewer ("3D Viewer");
  viewer.setBackgroundColor (1, 1, 1);
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointWithRange> range_image_color_handler (range_image_ptr, 0, 0, 0);
  viewer.addPointCloud (range_image_ptr, range_image_color_handler, "range image");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "range image");
  //viewer.addCoordinateSystem (1.0f, "global");
  //PointCloudColorHandlerCustom<PointType> point_cloud_color_handler (point_cloud_ptr, 150, 150, 150);
  //viewer.addPointCloud (point_cloud_ptr, point_cloud_color_handler, "original point cloud");
  viewer.initCameraParameters ();
  // getTransformationToWorldSystem() returns the transform from the range image
  // coordinate frame (i.e. the sensor frame) to the world frame
  setViewerPose(viewer, range_image.getTransformationToWorldSystem ());  // set the viewpoint

  // Visualize the range image
  pcl::visualization::RangeImageVisualizer range_image_widget ("Range image");
  range_image_widget.showRangeImage (range_image);

  while (!viewer.wasStopped ())
  {
    range_image_widget.spinOnce ();
    viewer.spinOnce ();
    pcl_sleep (0.01);
    if (live_update)
    {
      // The -l option was given, so the range image is recreated from the viewpoint
      // currently selected in the 3D viewer.
      scene_sensor_pose = viewer.getViewerPose();
      range_image.createFromPointCloud (point_cloud, angular_resolution_x, angular_resolution_y,
                                        pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                        scene_sensor_pose, pcl::RangeImage::LASER_FRAME,
                                        noise_level, min_range, border_size);
      range_image_widget.showRangeImage (range_image);
    }
  }
}
The code is explained in detail in the comments; compile it and look at the results.
The result when no input PCD file is given:
The original view of an input point cloud:
The result for that input, and its range image:
(2) How to extract borders from a range image
Borders are extracted from the range image (a border is defined as the position where the foreground crosses over to the background). Three kinds of border points are distinguished. Object borders: the set of outermost visible points still belonging to the object. Shadow borders: the set of points in the background adjacent to an occlusion. Veil points: points interpolated between the occluding object border and the shadow border; they are a typical artifact of 3D range data acquired by LiDAR. These three kinds of points, and the borders of a range image, are shown in the figure:
Code analysis: read a point cloud from disk, create a range image from it, and visualize it. The key point for border extraction is to distinguish between points that are invisible from the current viewpoint and points that should be visible but lie beyond the sensor's acquisition range. The latter can be classified as typical borders, whereas points invisible from the current viewpoint cannot. Therefore, if such measurements exist, providing the data that lie beyond the sensor's range is very important for border extraction.
Create a new file range_image_border_extraction.cpp:
#include <iostream>
#include <boost/thread/thread.hpp>
#include <pcl/range_image/range_image.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/range_image_visualizer.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/console/parse.h>

typedef pcl::PointXYZ PointType;

// --------------------
// -----Parameters-----
// --------------------
float angular_resolution = 0.5f;
pcl::RangeImage::CoordinateFrame coordinate_frame = pcl::RangeImage::CAMERA_FRAME;
bool setUnseenToMaxRange = false;

// --------------
// -----Help-----
// --------------
void printUsage (const char* progName)
{
  std::cout << "\n\nUsage: "<<progName<<" [options] <scene.pcd>\n\n"
            << "Options:\n"
            << "-------------------------------------------\n"
            << "-r <float>   angular resolution in degrees (default "<<angular_resolution<<")\n"
            << "-c <int>     coordinate frame (default "<< (int)coordinate_frame<<")\n"
            << "-m           Treat all unseen points to max range\n"
            << "-h           this help\n"
            << "\n\n";
}

// --------------
// -----Main-----
// --------------
int main (int argc, char** argv)
{
  // --------------------------------------
  // -----Parse Command Line Arguments-----
  // --------------------------------------
  if (pcl::console::find_argument (argc, argv, "-h") >= 0)
  {
    printUsage (argv[0]);
    return 0;
  }
  if (pcl::console::find_argument (argc, argv, "-m") >= 0)
  {
    setUnseenToMaxRange = true;
    std::cout << "Setting unseen values in range image to maximum range readings.\n";
  }
  int tmp_coordinate_frame;
  if (pcl::console::parse (argc, argv, "-c", tmp_coordinate_frame) >= 0)
  {
    coordinate_frame = pcl::RangeImage::CoordinateFrame (tmp_coordinate_frame);
    std::cout << "Using coordinate frame "<< (int)coordinate_frame<<".\n";
  }
  if (pcl::console::parse (argc, argv, "-r", angular_resolution) >= 0)
    std::cout << "Setting angular resolution to "<<angular_resolution<<"deg.\n";
  angular_resolution = pcl::deg2rad (angular_resolution);

  // ------------------------------------------------------------------
  // -----Read pcd file or create example point cloud if not given-----
  // ------------------------------------------------------------------
  pcl::PointCloud<PointType>::Ptr point_cloud_ptr (new pcl::PointCloud<PointType>);
  pcl::PointCloud<PointType>& point_cloud = *point_cloud_ptr;
  pcl::PointCloud<pcl::PointWithViewpoint> far_ranges;
  Eigen::Affine3f scene_sensor_pose (Eigen::Affine3f::Identity ());  // sensor pose
  std::vector<int> pcd_filename_indices = pcl::console::parse_file_extension_argument (argc, argv, "pcd");
  if (!pcd_filename_indices.empty ())
  {
    std::string filename = argv[pcd_filename_indices[0]];
    if (pcl::io::loadPCDFile (filename, point_cloud) == -1)  // open the file
    {
      std::cout << "Was not able to open file \""<<filename<<"\".\n";
      printUsage (argv[0]);
      return 0;
    }
    scene_sensor_pose = Eigen::Affine3f (Eigen::Translation3f (point_cloud.sensor_origin_[0],
                                                               point_cloud.sensor_origin_[1],
                                                               point_cloud.sensor_origin_[2])) *
                        Eigen::Affine3f (point_cloud.sensor_orientation_);  // affine transform of the sensor
    std::string far_ranges_filename = pcl::getFilenameWithoutExtension (filename)+"_far_ranges.pcd";
    if (pcl::io::loadPCDFile (far_ranges_filename.c_str (), far_ranges) == -1)
      std::cout << "Far ranges file \""<<far_ranges_filename<<"\" does not exists.\n";
  }
  else
  {
    std::cout << "\nNo *.pcd file given => Generating example point cloud.\n\n";
    for (float x=-0.5f; x<=0.5f; x+=0.01f)  // fill a rectangular point cloud
    {
      for (float y=-0.5f; y<=0.5f; y+=0.01f)
      {
        PointType point;
        point.x = x;  point.y = y;  point.z = 2.0f - y;
        point_cloud.points.push_back (point);
      }
    }
    point_cloud.width = (int) point_cloud.points.size ();
    point_cloud.height = 1;
  }

  // -----------------------------------------------
  // -----Create RangeImage from the PointCloud-----
  // -----------------------------------------------
  float noise_level = 0.0;  // parameter settings
  float min_range = 0.0f;
  int border_size = 1;
  boost::shared_ptr<pcl::RangeImage> range_image_ptr (new pcl::RangeImage);
  pcl::RangeImage& range_image = *range_image_ptr;
  range_image.createFromPointCloud (point_cloud, angular_resolution,
                                    pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                    scene_sensor_pose, coordinate_frame,
                                    noise_level, min_range, border_size);
  range_image.integrateFarRanges (far_ranges);
  if (setUnseenToMaxRange)
    range_image.setUnseenToMaxRange ();

  // --------------------------------------------
  // -----Open 3D viewer and add point cloud-----
  // --------------------------------------------
  pcl::visualization::PCLVisualizer viewer ("3D Viewer");  // create the viewer
  viewer.setBackgroundColor (1, 1, 1);                     // set the background color
  viewer.addCoordinateSystem (1.0f);                       // add a coordinate system
  pcl::visualization::PointCloudColorHandlerCustom<PointType> point_cloud_color_handler (point_cloud_ptr, 0, 0, 0);
  viewer.addPointCloud (point_cloud_ptr, point_cloud_color_handler, "original point cloud");  // add the point cloud
  //PointCloudColorHandlerCustom<pcl::PointWithRange> range_image_color_handler (range_image_ptr, 150, 150, 150);
  //viewer.addPointCloud (range_image_ptr, range_image_color_handler, "range image");
  //viewer.setPointCloudRenderingProperties (PCL_VISUALIZER_POINT_SIZE, 2, "range image");

  // -------------------------
  // -----Extract borders-----
  // -------------------------
  pcl::RangeImageBorderExtractor border_extractor (&range_image);
  pcl::PointCloud<pcl::BorderDescription> border_descriptions;
  border_extractor.compute (border_descriptions);  // extract the borders and compute the descriptions

  // ----------------------------------
  // -----Show points in 3D viewer-----
  // ----------------------------------
  pcl::PointCloud<pcl::PointWithRange>::Ptr border_points_ptr(new pcl::PointCloud<pcl::PointWithRange>),  // object borders
                                            veil_points_ptr(new pcl::PointCloud<pcl::PointWithRange>),    // veil points
                                            shadow_points_ptr(new pcl::PointCloud<pcl::PointWithRange>);  // shadow borders
  pcl::PointCloud<pcl::PointWithRange>& border_points = *border_points_ptr,
                                      & veil_points = *veil_points_ptr,
                                      & shadow_points = *shadow_points_ptr;
  for (int y=0; y< (int)range_image.height; ++y)
  {
    for (int x=0; x< (int)range_image.width; ++x)
    {
      if (border_descriptions.points[y*range_image.width + x].traits[pcl::BORDER_TRAIT__OBSTACLE_BORDER])
        border_points.points.push_back (range_image.points[y*range_image.width + x]);
      if (border_descriptions.points[y*range_image.width + x].traits[pcl::BORDER_TRAIT__VEIL_POINT])
        veil_points.points.push_back (range_image.points[y*range_image.width + x]);
      if (border_descriptions.points[y*range_image.width + x].traits[pcl::BORDER_TRAIT__SHADOW_BORDER])
        shadow_points.points.push_back (range_image.points[y*range_image.width + x]);
    }
  }
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointWithRange> border_points_color_handler (border_points_ptr, 0, 255, 0);
  viewer.addPointCloud<pcl::PointWithRange> (border_points_ptr, border_points_color_handler, "border points");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 7, "border points");
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointWithRange> veil_points_color_handler (veil_points_ptr, 255, 0, 0);
  viewer.addPointCloud<pcl::PointWithRange> (veil_points_ptr, veil_points_color_handler, "veil points");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 7, "veil points");
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointWithRange> shadow_points_color_handler (shadow_points_ptr, 0, 255, 255);
  viewer.addPointCloud<pcl::PointWithRange> (shadow_points_ptr, shadow_points_color_handler, "shadow points");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 7, "shadow points");

  //-------------------------------------
  // -----Show points on range image-----
  // ------------------------------------
  pcl::visualization::RangeImageVisualizer* range_image_borders_widget = NULL;
  range_image_borders_widget =
    pcl::visualization::RangeImageVisualizer::getRangeImageBordersWidget (range_image,
        -std::numeric_limits<float>::infinity (), std::numeric_limits<float>::infinity (), false,
        border_descriptions, "Range image with borders");

  //--------------------
  // -----Main loop-----
  //--------------------
  while (!viewer.wasStopped ())
  {
    range_image_borders_widget->spinOnce ();
    viewer.spinOnce ();
    pcl_sleep (0.01);
  }
}
Compile and run the result: ./range_image_border_extraction -m
This uses the automatically generated rectangular floating-point point cloud. In the output you can see that the detected border points are displayed as larger green points, while the remaining points are drawn at the default size.
Some readers have asked why running this program on other PCD files prints "Far ranges file \"<far_ranges_filename>\" does not exists." The reason is that a depth sensor's measurement distance is limited by its hardware, so the positions the sensor cannot see have to be defined explicitly. When we run this program on a point cloud we captured ourselves with a Kinect, we simply use the command:
./range_image_border_extraction -m out0.pcd — the -m option is needed precisely to mark the positions the sensor cannot see: "Setting unseen values in range image to maximum range readings."
For example, here is a result of running on our own PCD file with -m set:
References:
PCL RangeImage Lib http://docs.pointclouds.org/trunk/classpcl_1_1_range_image.html#a8b5785b0499f0a70d5c87fceba55992f
PCL PointCloud Lib http://docs.pointclouds.org/trunk/classpcl_1_1_point_cloud.html#a73d13cc1d5faae4e57773b53d01844b7