Installing and Configuring the Intel RealSense D435i and Running ORB-SLAM2 on It



1. Install the Intel RealSense SDK

Method 1: install the prebuilt official packages (see the reference tutorial)

// Register the server's public key
sudo apt-key adv --keyserver keys.gnupg.net --recv-key C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key C8B3A55A6F3EFCDE
// Add the server to the list of repositories
sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial main" -u
// Install the libraries
sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
// Optionally install the developer and debug packages
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
// Reconnect the Intel RealSense depth camera and run: realsense-viewer to verify the installation.

Method 2: download the source code and build it yourself (see the reference tutorial)

// ***************************Prerequisites*****************************
// Update Ubuntu distribution, including getting the latest stable kernel
sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
// Download the complete source tree with git
git clone https://github.com/IntelRealSense/librealsense.git
// Unplug any connected Intel RealSense camera and navigate to the librealsense root directory.
// Install the core packages required to build librealsense binaries and the affected kernel modules
sudo apt-get install git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev 
sudo apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev
// certain librealsense CMAKE flags (e.g. CUDA) require version 3.8+ which is currently not made available via apt manager for Ubuntu LTS.
// Uninstall the bundled older cmake and build a newer one from source. Download the source from https://cmake.org/download/, extract the tar.gz archive, and enter the cmake source root directory:
./bootstrap
make
sudo make install
// Run Intel Realsense permissions script located from librealsense root directory
./scripts/setup_udev_rules.sh
// Build and apply patched kernel modules
./scripts/patch-realsense-ubuntu-lts.sh
// Tracking Module requires hid_sensor_custom kernel module to operate properly.
echo 'hid_sensor_custom' | sudo tee -a /etc/modules


// ***************************Building librealsense2 SDK*****************************
// Navigate to librealsense root directory
mkdir build && cd build
// Builds librealsense along with the demos and tutorials
cmake ../ -DBUILD_EXAMPLES=true
// Recompile and install librealsense binaries
sudo make uninstall && make clean && make -jX && sudo make install
//  Use make -jX for parallel compilation, where X stands for the number of CPU cores available
// The shared object will be installed in /usr/local/lib, header files in /usr/local/include
// The binary demos, tutorials and test files will be copied into /usr/local/bin
// Go into librealsense/build/examples/capture and try it out
./rs-capture 

2. Install the ROS Wrapper for Intel RealSense

See the reference tutorial.

// Step 1: Install the latest Intel RealSense SDK 2.0 (already completed in Part 1 of this article)


// Step 2: Install the ROS distribution, e.g. ROS Kinetic: http://wiki.ros.org/kinetic/Installation/Ubuntu
// Set up your sources list. The first entry is the official repository; users in mainland China can use the USTC (second) or Tsinghua (third) mirror instead (pick one of the three)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo sh -c '. /etc/lsb-release && echo "deb http://mirrors.ustc.edu.cn/ros/ubuntu/ $DISTRIB_CODENAME main" > /etc/apt/sources.list.d/ros-latest.list'
sudo sh -c '. /etc/lsb-release && echo "deb http://mirrors.tuna.tsinghua.edu.cn/ros/ubuntu/ $DISTRIB_CODENAME main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt install python-rosinstall python-rosinstall-generator python-wstool build-essential


// Step 3: Install Intel RealSense ROS from Sources
// Create a catkin workspace
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/
// Clone the latest Intel RealSense ROS into 'catkin_ws/src/'
git clone https://github.com/IntelRealSense/realsense-ros.git
cd realsense-ros/
git checkout `git tag | sort -V | grep -P "^\d+\.\d+\.\d+" | tail -1`
cd ..
catkin_init_workspace
cd ..
catkin_make clean
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
catkin_make install
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
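The `git checkout` line above selects the newest release tag. As a minimal Python sketch of what the `git tag | sort -V | grep -P "^\d+\.\d+\.\d+" | tail -1` pipeline computes (the tag list here is made up for illustration, and the pattern is simplified to exact X.Y.Z tags):

```python
import re

def latest_release(tags):
    # keep only tags that look like X.Y.Z, as grep -P "^\d+\.\d+\.\d+" does
    releases = [t for t in tags if re.match(r"^\d+\.\d+\.\d+$", t)]
    # sort -V orders by numeric components, so compare integer tuples
    return max(releases, key=lambda t: tuple(int(p) for p in t.split(".")))

print(latest_release(["2.2.9", "2.2.10", "2.1.3", "legacy"]))  # -> 2.2.10
```

This shows why plain lexicographic sorting would be wrong: as a string, "2.2.9" sorts after "2.2.10", but as integer tuples (2, 2, 10) > (2, 2, 9).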


// Verify that the RealSense camera works under ROS:
// Connect the camera to the computer via USB
sudo apt-get install ros-kinetic-rgbd-launch 
roslaunch realsense2_camera rs_rgbd.launch
// List the topics published by the camera
rostopic list
// Two ways to inspect the camera intrinsics
rostopic echo /camera/color/camera_info
rostopic echo /camera/aligned_depth_to_color/camera_info
// Open another terminal
rviz
// Nothing shows up at first. In the Displays panel (top left), open the Fixed Frame drop-down menu and select camera_link; note that Global Status turns green
// Click Add in that panel -> By topic -> /points under /depth_registered -> PointCloud2
// Click Add in that panel -> By topic -> /image_raw under /color -> Image

3. Configure and Test ORB_SLAM2 with Public Datasets

See the reference tutorial.

// ***************************Prerequisites*****************************
// For installing Pangolin, OpenCV, and Eigen3, see my other post: https://blog.csdn.net/jiangchuanhu/article/details/89163864

// Building ORB-SLAM2 library and examples
git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2     // Clone the repository
cd ORB_SLAM2 && chmod +x build.sh && ./build.sh

// Test ORB_SLAM2 with publicly available datasets

// 1. Monocular Example --- TUM Dataset
// Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
// Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER

// 2. RGB-D Example --- TUM Dataset
// Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
// We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file executing:
python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
// Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. Change ASSOCIATIONS_FILE to the path to the corresponding associations file.
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE
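The associate.py step above pairs each RGB frame with the depth frame closest in time. A simplified Python sketch of that nearest-timestamp matching (the real TUM script also handles file parsing and time offsets; 0.02 s mirrors its default max_difference):

```python
def associate(rgb_stamps, depth_stamps, max_difference=0.02):
    """Greedily pair rgb and depth timestamps whose difference is within
    max_difference seconds, closest pairs first (simplified associate.py)."""
    # all candidate pairs within tolerance, sorted by time difference
    potential = sorted(
        (abs(a - b), a, b)
        for a in rgb_stamps for b in depth_stamps
        if abs(a - b) < max_difference
    )
    matches, used_rgb, used_depth = [], set(), set()
    for _, a, b in potential:
        # each timestamp may be matched at most once
        if a not in used_rgb and b not in used_depth:
            used_rgb.add(a)
            used_depth.add(b)
            matches.append((a, b))
    return sorted(matches)

print(associate([1.0, 1.05], [1.01, 1.06]))
```

Matching closest pairs first (rather than scanning in order) avoids stealing a depth frame that a later RGB frame matches better.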

4. Run ORB_SLAM2 on the D435i

See the reference tutorial.

// Add the path including Examples/ROS/ORB_SLAM2 to the ROS_PACKAGE_PATH environment variable. 
// Open .bashrc file and add at the end the following line. Replace PATH by the folder where you cloned ORB_SLAM2
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS
// Execute build_ros.sh script
chmod +x build_ros.sh
./build_ros.sh

If ./build_ros.sh fails with a boost-related error, edit Examples/ROS/ORB_SLAM2/CMakeLists.txt to add -lboost_system, then rerun ./build_ros.sh:

set(LIBS 
${OpenCV_LIBS} 
${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${PROJECT_SOURCE_DIR}/../../../Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/../../../Thirdparty/g2o/lib/libg2o.so
${PROJECT_SOURCE_DIR}/../../../lib/libORB_SLAM2.so
-lboost_system              # add this line
)

We can obtain the camera parameters as follows: connect the camera to the computer via USB, then run

roslaunch realsense2_camera rs_rgbd.launch
rostopic echo /camera/color/camera_info

The message structure looks like this:

---
header: 
  seq: 17
  stamp: 
    secs: 1560907148
    nsecs: 588988566
  frame_id: "camera_color_optical_frame"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [615.9417724609375, 0.0, 322.3533630371094, 0.0, 616.0935668945312, 240.44674682617188, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [615.9417724609375, 0.0, 322.3533630371094, 0.0, 0.0, 616.0935668945312, 240.44674682617188, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
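The fx, fy, cx, cy values used in the yaml file below come straight out of the K matrix of this message. K in sensor_msgs/CameraInfo is the 3x3 intrinsic matrix stored row-major, so a small Python helper can pull them out:

```python
def intrinsics_from_K(K):
    """Extract (fx, fy, cx, cy) from a row-major 3x3 camera matrix:
    K = [fx, 0, cx,  0, fy, cy,  0, 0, 1]"""
    fx, cx, fy, cy = K[0], K[2], K[4], K[5]
    return fx, fy, cx, cy

# K as echoed from /camera/color/camera_info above
K = [615.9417724609375, 0.0, 322.3533630371094,
     0.0, 616.0935668945312, 240.44674682617188,
     0.0, 0.0, 1.0]
print(intrinsics_from_K(K))
```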

Based on reference article 1 and reference article 2 found online, we write a D435i.yaml file from the D435i camera parameters. Compared with the yaml files shipped with ORB-SLAM2, the main change is the Camera Parameters section.

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 615.9417724609375
Camera.fy: 616.0935668945312
Camera.cx: 322.3533630371094
Camera.cy: 240.44674682617188

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.p3: 0.0

Camera.width: 640
Camera.height: 480

# Camera frames per second 
Camera.fps: 30.0

# IR projector baseline times fx (aprox.)
# bf = baseline (in meters) * fx; the D435i baseline is 50 mm
Camera.bf: 30.797   

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depth map values factor
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
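As a sanity check on the Camera.bf entry in the config above, bf is the baseline times fx:

```python
fx = 615.9417724609375   # Camera.fx from the camera_info message
baseline_m = 0.05        # D435i IR projector baseline, roughly 50 mm
bf = baseline_m * fx
print(round(bf, 3))      # matches the Camera.bf value in the yaml
```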

The RGB and depth image topics published by the camera node do not match the topic names ORB-SLAM2 subscribes to, so modify the subscriptions in ros_rgbd.cc under ORB_SLAM2/Examples/ROS/ORB_SLAM2/src:

message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/color/image_raw", 1);
message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/aligned_depth_to_color/image_raw", 1);

Finally, in the ORB_SLAM2 working directory:

// Rebuild with build_ros.sh
chmod +x build_ros.sh
./build_ros.sh
// Run ORB_SLAM2
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/RGB-D/D435i.yaml