VINS-Mono Code Walkthrough: Loop Closure Detection, Relocalization, and 4-DOF Pose Graph Optimization

This article covers the loop closure detection, relocalization, and pose graph optimization modules of VINS-Mono, and is the final part of this series.
The key to loop closure detection is reliably recognizing that the camera has returned to a previously visited place. A detected loop immediately creates a constraint between the current frame and an earlier frame, which keeps the accumulated error small. Because loop detection associates the current data with all historical data, it can also be used for relocalization after tracking is lost.
The paper splits this work into two parts: loop detection with relocalization, and 4-DOF pose graph optimization.

The first part uses loop detection to establish correspondences between the current frame and a candidate frame, and then aligns the local sliding window with the past poses through a lightweight tightly-coupled relocalization.
The second part performs a global optimization of all past poses based on the relocalization result.
The corresponding material from the paper is summarized below.

0.4 Relocalization
The relocalization module of VINS consists of three parts: loop detection, feature matching between the loop candidate and the current frame, and tightly-coupled relocalization.

A. Loop detection (keyframes only)

1. Loop detection uses the DBoW2 bag-of-words place-recognition approach. After temporal and geometric consistency checks, DBoW2 returns the loop closure candidates.
2. Besides the corner features used by the monocular VIO, 500 additional corners are detected and described with BRIEF descriptors; the descriptors act as visual words for querying the database. The extra corners make loop detection more reliable (a sketch of this step follows the list).
3. Only the BRIEF descriptors needed for retrieval are kept; the raw images are discarded to reduce memory consumption.
4. Because the monocular VIO makes roll and pitch observable, VINS does not need to rely on rotation-invariant descriptors.
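As a rough illustration of step 2, the extra corners and their descriptors could be produced along the following lines. This is a minimal sketch, not the actual VINS-Mono extraction code: it assumes the BriefExtractor helper described later under keyframe.h, and the FAST threshold and the way the 500 corners are selected are illustrative assumptions.

// Sketch: detect extra corners and describe them with BRIEF for loop detection.
#include <opencv2/opencv.hpp>
#include <vector>
#include "keyframe.h"   // assumed: provides BriefExtractor and BRIEF::bitset (DVision)

void computeExtraBriefFeatures(const cv::Mat &image,
                               const BriefExtractor &extractor,
                               std::vector<cv::KeyPoint> &keypoints,
                               std::vector<BRIEF::bitset> &descriptors)
{
    const int fast_threshold = 20;                     // assumed value
    cv::FAST(image, keypoints, fast_threshold, true);  // corner candidates
    cv::KeyPointsFilter::retainBest(keypoints, 500);   // keep roughly 500 strongest corners
    extractor(image, keypoints, descriptors);          // BRIEF descriptors used as visual words
}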

B. Feature matching between the loop candidate and the current frame

1. When a loop is detected, correspondences are established by matching BRIEF descriptors. Direct descriptor matching, however, produces many outliers.
2. The paper therefore applies a two-step geometric outlier rejection (see the sketch after this list):
1) 2D-2D: fundamental-matrix test with RANSAC;
2) 3D-2D: PnP with RANSAC, using the known 3D positions of the sliding-window features and the 2D observations (pixel coordinates) in the loop candidate image. When the number of inliers exceeds a threshold, the candidate is treated as a correct loop detection and relocalization is performed.
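Both checks map naturally onto OpenCV's RANSAC routines. The sketch below only illustrates the idea on already-matched point lists; the parameter values and the minimum inlier count are assumptions, and the real implementation lives in the corresponding KeyFrame routines in keyframe.cpp.

// Sketch of the two-step geometric verification (illustrative, not the verbatim VINS code).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

bool geometricCheck(const std::vector<cv::Point2f> &cur_pts_uv,   // matched pixels in the current frame
                    const std::vector<cv::Point2f> &old_pts_uv,   // matched pixels in the loop candidate
                    const std::vector<cv::Point3f> &cur_pts_3d,   // 3D points from the sliding window
                    const cv::Mat &K,                             // camera intrinsics (assumed known)
                    int min_inliers = 25)                         // assumed threshold
{
    if (cur_pts_uv.size() < 8)
        return false;

    // step 1: 2D-2D fundamental-matrix test with RANSAC rejects gross mismatches
    std::vector<uchar> f_status;
    cv::findFundamentalMat(cur_pts_uv, old_pts_uv, cv::FM_RANSAC, 3.0, 0.99, f_status);
    if ((int)std::count(f_status.begin(), f_status.end(), (uchar)1) < 8)
        return false;

    // step 2: 3D-2D PnP with RANSAC using the sliding-window 3D points
    cv::Mat rvec, tvec, pnp_inliers;
    bool ok = cv::solvePnPRansac(cur_pts_3d, old_pts_uv, K, cv::noArray(),
                                 rvec, tvec, false, 100, 8.0, 0.99, pnp_inliers);

    // accept the loop only if enough inliers remain
    return ok && pnp_inliers.rows > min_inliers;
}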

C. Tightly-coupled relocalization

1. Relocalization aligns the current sliding window maintained by the monocular VIO with the graph of past poses.
2. The poses of all loop closure frames are treated as constants, and the sliding window is optimized jointly using all IMU measurements, the local visual measurements, and the feature correspondences retrieved from the loop.

0.5 Global pose graph optimization
A. Adding keyframes to the pose graph
B. 4-DOF pose graph optimization
C. Pose graph management

The relevant code lives in the pose_graph package and is split across three files:
keyframe.cpp/.h: the KeyFrame class, descriptor computation, and matching a keyframe against its loop candidate.
pose_graph.cpp/.h: building and optimizing the pose graph, loop detection and loop closing.
pose_graph_node.cpp: the ROS node, its callbacks, and the main thread.
1. pose_graph_node.cpp
The node consists of seven callbacks plus the main thread function process().
The seven callbacks handle the keyframe pose (keyframe_pose), the relocalization relative pose (relo_relative_pose), the camera-to-IMU extrinsics (extrinsic), the VIO odometry PQV (odometry), the 3D points of the keyframe (keyframe_point), the IMU-propagated pose (imu_propagate), and the raw image (image). Most of them simply buffer the incoming messages, as sketched below.
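A representative callback, shown as a minimal sketch of this buffering pattern (pose_buf, m_buf, and LOOP_CLOSURE are the globals defined in pose_graph_node.cpp; the other callbacks follow the same shape):

// push the incoming message into a queue under a mutex; process() consumes it later
void pose_callback(const nav_msgs::Odometry::ConstPtr &pose_msg)
{
    if (!LOOP_CLOSURE)
        return;
    m_buf.lock();
    pose_buf.push(pose_msg);
    m_buf.unlock();
}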

1. Initialize ROS and set up the node handle.
2. Read the parameters from the launch file and from the config file.
3. If loop closure is enabled, load the vocabulary and the BRIEF descriptor pattern file, read the remaining config parameters, and set the output path for the loop-closed result.
4. Load a previously saved pose graph with loadPoseGraph().
5. Subscribe to the input topics and register their callbacks.
6. Advertise the pose_graph output topics.
7. Start the main thread process() and the keyboard-command thread command().

int main(int argc, char **argv)
{
    // 1. Initialize ROS and set up the node handle
    ros::init(argc, argv, "pose_graph");
    ros::NodeHandle n("~");
    posegraph.registerPub(n);
 
    // 2. Read parameters
    n.getParam("visualization_shift_x", VISUALIZATION_SHIFT_X);
    n.getParam("visualization_shift_y", VISUALIZATION_SHIFT_Y);
    n.getParam("skip_cnt", SKIP_CNT);
    n.getParam("skip_dis", SKIP_DIS);
    std::string config_file;
    n.getParam("config_file", config_file);
    cv::FileStorage fsSettings(config_file, cv::FileStorage::READ);
    if(!fsSettings.isOpened())
    {
        std::cerr << "ERROR: Wrong path to settings" << std::endl;
    }
    double camera_visual_size = fsSettings["visualize_camera_size"];
    cameraposevisual.setScale(camera_visual_size);
    cameraposevisual.setLineWidth(camera_visual_size / 10.0);
    LOOP_CLOSURE = fsSettings["loop_closure"];
    std::string IMAGE_TOPIC;
    int LOAD_PREVIOUS_POSE_GRAPH;
 
    // 3. If loop closure is enabled, load the vocabulary and the BRIEF pattern file, read the remaining config parameters, and set the output path for the loop-closed result.
    if (LOOP_CLOSURE)
    {
        ROW = fsSettings["image_height"];
        COL = fsSettings["image_width"];
 
        // 3.1 Load the vocabulary
        std::string pkg_path = ros::package::getPath("pose_graph");
        string vocabulary_file = pkg_path + "/../support_files/brief_k10L6.bin";
        cout << "vocabulary_file" << vocabulary_file << endl;
        posegraph.loadVocabulary(vocabulary_file);
 
        // 3.2 Load the BRIEF descriptor pattern file
        BRIEF_PATTERN_FILE = pkg_path + "/../support_files/brief_pattern.yml";
        cout << "BRIEF_PATTERN_FILE" << BRIEF_PATTERN_FILE << endl;
        m_camera = camodocal::CameraFactory::instance()->generateCameraFromYamlFile(config_file.c_str());
 
        fsSettings["image_topic"] >> IMAGE_TOPIC;        
        fsSettings["pose_graph_save_path"] >> POSE_GRAPH_SAVE_PATH;
        fsSettings["output_path"] >> VINS_RESULT_PATH;
        fsSettings["save_image"] >> DEBUG_IMAGE;
        VISUALIZE_IMU_FORWARD = fsSettings["visualize_imu_forward"];
        LOAD_PREVIOUS_POSE_GRAPH = fsSettings["load_previous_pose_graph"];
        FAST_RELOCALIZATION = fsSettings["fast_relocalization"];
        VINS_RESULT_PATH = VINS_RESULT_PATH + "/vins_result_loop.csv";
        std::ofstream fout(VINS_RESULT_PATH, std::ios::out);
        fout.close();
        fsSettings.release();
 
        // 3.3 Load the previously saved pose graph
        if (LOAD_PREVIOUS_POSE_GRAPH)
        {
            printf("load pose graph\n");
            m_process.lock();
            posegraph.loadPoseGraph();
            m_process.unlock();
            printf("load pose graph finish\n");
            load_flag = 1;
        }
        else
        {
            printf("no previous pose graph\n");
            load_flag = 1;
        }
    }
 
    fsSettings.release();
 
    // 4. Subscribe to the topics and register their callbacks
    ros::Subscriber sub_imu_forward = n.subscribe("/vins_estimator/imu_propagate", 2000, imu_forward_callback);
    ros::Subscriber sub_vio = n.subscribe("/vins_estimator/odometry", 2000, vio_callback);
    ros::Subscriber sub_image = n.subscribe(IMAGE_TOPIC, 2000, image_callback);
    ros::Subscriber sub_pose = n.subscribe("/vins_estimator/keyframe_pose", 2000, pose_callback);
    ros::Subscriber sub_extrinsic = n.subscribe("/vins_estimator/extrinsic", 2000, extrinsic_callback);
    ros::Subscriber sub_point = n.subscribe("/vins_estimator/keyframe_point", 2000, point_callback);
    ros::Subscriber sub_relo_relative_pose = n.subscribe("/vins_estimator/relo_relative_pose", 2000, relo_relative_pose_callback);
 
    // 5. Advertised topics
    pub_match_img = n.advertise<sensor_msgs::Image>("match_image", 1000);
    pub_camera_pose_visual = n.advertise<visualization_msgs::MarkerArray>("camera_pose_visual", 1000);
    pub_key_odometrys = n.advertise<visualization_msgs::Marker>("key_odometrys", 1000);
    pub_vio_path = n.advertise<nav_msgs::Path>("no_loop_path", 1000);
    pub_match_points = n.advertise<sensor_msgs::PointCloud>("match_points", 100);
 
    // 6. Create two threads: process and command
    std::thread measurement_process;
    std::thread keyboard_command_process;
    // main pose-graph thread
    measurement_process = std::thread(process);
    // keyboard-command thread
    keyboard_command_process = std::thread(command);
 
    ros::spin();
 
    return 0;
}

2. The process() function
1. Return immediately if loop closure is disabled.
2. Pick out a pose_msg, image_msg, and point_msg with matching timestamps.
3. Build the keyframes used by the pose graph: after skipping the first few frames, take one frame every SKIP_CNT frames and create a keyframe whenever its distance to the previous keyframe (the norm of the translation) exceeds SKIP_DIS. The core pieces here are the KeyFrame class and addKeyFrame().

// main thread
void process()
{
    // 1. Nothing to do if loop closure is disabled
    if (!LOOP_CLOSURE)
        return;
    
    while (true)// loop forever
    {
        // the three inputs: image, point cloud, and VIO pose
        sensor_msgs::ImageConstPtr image_msg = NULL;
        sensor_msgs::PointCloudConstPtr point_msg = NULL;
        nav_msgs::Odometry::ConstPtr pose_msg = NULL;
 
        // find out the messages with same time stamp
        // 2. Pick out pose_msg, image_msg, and point_msg with matching timestamps
        m_buf.lock();
        if(!image_buf.empty() && !point_buf.empty() && !pose_buf.empty())// all three buffers must be non-empty
        {
            if (image_buf.front()->header.stamp.toSec() > pose_buf.front()->header.stamp.toSec())
            {
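                // the oldest image is already newer than the oldest pose, so this pose can never be matched: drop it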
                pose_buf.pop();
                printf("throw pose at beginning\n");
            }
            else if (image_buf.front()->header.stamp.toSec() > point_buf.front()->header.stamp.toSec())
            {
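                // the oldest point cloud is older than the oldest image and can never be matched: drop it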
                point_buf.pop();
                printf("throw point at beginning\n");
            }
            else if (image_buf.back()->header.stamp.toSec() >= pose_buf.front()->header.stamp.toSec() 
                && point_buf.back()->header.stamp.toSec() >= pose_buf.front()->header.stamp.toSec())
            {
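                // the newest image and point cloud already reach the oldest pose timestamp, so a match exists:
                // take that pose (discarding the rest of the pose buffer) and pop images/points up to its timestamp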
                pose_msg = pose_buf.front();
                pose_buf.pop();
                while (!pose_buf.empty())
                    pose_buf.pop();
 
                while (image_buf.front()->header.stamp.toSec() < pose_msg->header.stamp.toSec())
                    image_buf.pop();
                image_msg = image_buf.front();
                image_buf.pop();
 
                while (point_buf.front()->header.stamp.toSec() < pose_msg->header.stamp.toSec())
                    point_buf.pop();
                point_msg = point_buf.front();
                point_buf.pop();
            }
        }
        m_buf.unlock();
 
        // 3. Build the keyframes used by the pose graph: every SKIP_CNT frames, create a keyframe once the distance to the previous keyframe (norm of the translation) exceeds SKIP_DIS.
        if (pose_msg != NULL)
        {
            //printf(" pose time %f \n", pose_msg->header.stamp.toSec());
            //printf(" point time %f \n", point_msg->header.stamp.toSec());
            //printf(" image time %f \n", image_msg->header.stamp.toSec());
            
            // skip first few
            // 3.1 Skip the first SKIP_FIRST_CNT frames
            if (skip_first_cnt < SKIP_FIRST_CNT)
            {
                skip_first_cnt++;
                continue;
            }
 
            // 3.2 Process one frame every SKIP_CNT frames (SKIP_CNT = 0 by default)
            if (skip_cnt < SKIP_CNT)
            {
                skip_cnt++;
                continue;
            }
            else
            {
                skip_cnt = 0;
            }
            // 3.3 Convert the ROS image to an OpenCV Mat
            cv_bridge::CvImageConstPtr ptr;
            if (image_msg->encoding == "8UC1")
            {
                sensor_msgs::Image img;
                img.header = image_msg->header;
                img.height = image_msg->height;
                img.width = image_msg->width;
                img.is_bigendian = image_msg->is_bigendian;
                img.step = image_msg->step;
                img.data = image_msg->data;
                img.encoding = "mono8";
                ptr = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::MONO8);
            }
            else
                ptr = cv_bridge::toCvCopy(image_msg, sensor_msgs::image_encodings::MONO8);
            
            cv::Mat image = ptr->image;
  
            // build keyframe
            Vector3d T = Vector3d(pose_msg->pose.pose.position.x,
                                  pose_msg->pose.pose.position.y,
                                  pose_msg->pose.pose.position.z);
            Matrix3d R = Quaterniond(pose_msg->pose.pose.orientation.w,
                                     pose_msg->pose.pose.orientation.x,
                                     pose_msg->pose.pose.orientation.y,
                                     pose_msg->pose.pose.orientation.z).toRotationMatrix();
            
            // 3.4 Create a keyframe only if the distance to the previous keyframe (norm of the translation) exceeds SKIP_DIS
            if((T - last_t).norm() > SKIP_DIS)
            {
                vector<cv::Point3f> point_3d; 
                vector<cv::Point2f> point_2d_uv; 
                vector<cv::Point2f> point_2d_normal;
                vector<double> point_id;
 
                for (unsigned int i = 0; i < point_msg->points.size(); i++)
                {
                    cv::Point3f p_3d;
                    p_3d.x = point_msg->points[i].x;
                    p_3d.y = point_msg->points[i].y;
                    p_3d.z = point_msg->points[i].z;
                    point_3d.push_back(p_3d);
 
                    cv::Point2f p_2d_uv, p_2d_normal;
                    double p_id;
                    p_2d_normal.x = point_msg->channels[i].values[0];
                    p_2d_normal.y = point_msg->channels[i].values[1];
                    p_2d_uv.x = point_msg->channels[i].values[2];
                    p_2d_uv.y = point_msg->channels[i].values[3];
                    p_id = point_msg->channels[i].values[4];
                    point_2d_normal.push_back(p_2d_normal);
                    point_2d_uv.push_back(p_2d_uv);
                    point_id.push_back(p_id);
 
                    //printf("u %f, v %f \n", p_2d_uv.x, p_2d_uv.y);
                }
                // 3.5 Construct the KeyFrame and add it to the pose graph
                KeyFrame* keyframe = new KeyFrame(pose_msg->header.stamp.toSec(), frame_index, T, R, image,
                                   point_3d, point_2d_uv, point_2d_normal, point_id, sequence);   
                m_process.lock();
 
                start_flag = 1;
                // add the keyframe to the pose graph; flag_detect_loop = 1 enables loop detection
                posegraph.addKeyFrame(keyframe, 1);
                m_process.unlock();
                frame_index++;
                last_t = T;
            }
        }
 
        // 4. Sleep for 5 ms
        std::chrono::milliseconds dura(5);
        std::this_thread::sleep_for(dura);
    }
}

3. pose_graph.h
The two key functions discussed next are addKeyFrame() and detectLoop().
4. addKeyFrame(): add a keyframe, detect and close loops

1. Start a new image sequence if needed.
2. getVioPose() reads the current frame's VIO pose (vio_P_cur, vio_R_cur); the pose is shifted into the world frame and written back with updateVioPose().
3. detectLoop() performs loop detection and returns the index of the loop candidate, loop_index.
4. Compute the relative pose between the current frame and the loop frame and correct the current pose (w_P_cur, w_R_cur); compute the offset between the loop-corrected pose and the VIO pose (shift_r, shift_t); if there are multiple image sequences, merge them all into the world frame.
5. Read the current frame's VIO pose (P, R), apply the drift to obtain the corrected pose, and update it with updatePose().
6. Publish path[sequence_cnt].
7. Save the loop-closed trajectory to VINS_RESULT_PATH.

void PoseGraph::addKeyFrame(KeyFrame* cur_kf, bool flag_detect_loop)
{
    //shift to base frame
    Vector3d vio_P_cur;
    Matrix3d vio_R_cur;
 
    // 1. Start a new image sequence
    if (sequence_cnt != cur_kf->sequence)
    {
        sequence_cnt++;
        sequence_loop.push_back(0);
        w_t_vio = Eigen::Vector3d(0, 0, 0);
        w_r_vio = Eigen::Matrix3d::Identity();
        m_drift.lock();
        t_drift = Eigen::Vector3d(0, 0, 0);
        r_drift = Eigen::Matrix3d::Identity();
        m_drift.unlock();
    }
    
    // 2. Read the current frame's VIO pose (vio_P_cur, vio_R_cur), shift it into the world frame, and write it back with updateVioPose()
    cur_kf->getVioPose(vio_P_cur, vio_R_cur);
    vio_P_cur = w_r_vio * vio_P_cur + w_t_vio;
    vio_R_cur = w_r_vio * vio_R_cur;
 
    cur_kf->updateVioPose(vio_P_cur, vio_R_cur);
    cur_kf->index = global_index;
    global_index++;
	int loop_index = -1;
 
    if (flag_detect_loop)
    {
        TicToc tmp_t;
        // 3. detectLoop() runs loop detection and returns the loop candidate index loop_index
        loop_index = detectLoop(cur_kf, cur_kf->index);
    }
    else
    {
        addKeyFrameIntoVoc(cur_kf);
    }
    // 4. A loop candidate exists
	if (loop_index != -1)
	{
        //printf(" %d detect loop with %d \n", cur_kf->index, loop_index);
        
        // 4.1 Fetch the loop candidate keyframe
        KeyFrame* old_kf = getKeyFrame(loop_index);
 
        // match descriptors between the current frame and the candidate; success confirms the loop
        if (cur_kf->findConnection(old_kf))
        {
            // earliest_loop_index is the earliest loop frame seen so far
            if (earliest_loop_index > loop_index || earliest_loop_index == -1)
                earliest_loop_index = loop_index;
            // 4.2 Compute the relative pose to the loop frame and correct the current pose (w_P_cur, w_R_cur)
            Vector3d w_P_old, w_P_cur, vio_P_cur;
            Matrix3d w_R_old, w_R_cur, vio_R_cur;
            old_kf->getVioPose(w_P_old, w_R_old);
            cur_kf->getVioPose(vio_P_cur, vio_R_cur);
 
            // relative pose between the current frame and the loop frame (relative_q, relative_t)
            Vector3d relative_t;
            Quaterniond relative_q;
            relative_t = cur_kf->getLoopRelativeT();
            relative_q = (cur_kf->getLoopRelativeQ()).toRotationMatrix();
 
            // recompute the current pose w_P_cur, w_R_cur
            w_P_cur = w_R_old * relative_t + w_P_old;
            w_R_cur = w_R_old * relative_q;
            
            // offset between the loop-corrected pose and the VIO pose (shift_r, shift_t); only yaw and translation are corrected
            double shift_yaw;
            Matrix3d shift_r;
            Vector3d shift_t; 
            shift_yaw = Utility::R2ypr(w_R_cur).x() - Utility::R2ypr(vio_R_cur).x();
            shift_r = Utility::ypr2R(Vector3d(shift_yaw, 0, 0));
            shift_t = w_P_cur - w_R_cur * vio_R_cur.transpose() * vio_P_cur; 
            
            // shift vio pose of whole sequence to the world frame
            // 4.3 If there are multiple image sequences, merge them all into the world frame
            if (old_kf->sequence != cur_kf->sequence && sequence_loop[cur_kf->sequence] == 0)
            {  
                w_r_vio = shift_r;
                w_t_vio = shift_t;
                vio_P_cur = w_r_vio * vio_P_cur + w_t_vio;
                vio_R_cur = w_r_vio *  vio_R_cur;
                cur_kf->updateVioPose(vio_P_cur, vio_R_cur);
                list<KeyFrame*>::iterator it = keyframelist.begin();
                for (; it != keyframelist.end(); it++)   
                {
                    if((*it)->sequence == cur_kf->sequence)
                    {
                        Vector3d vio_P_cur;
                        Matrix3d vio_R_cur;
                        (*it)->getVioPose(vio_P_cur, vio_R_cur);
                        vio_P_cur = w_r_vio * vio_P_cur + w_t_vio;
                        vio_R_cur = w_r_vio *  vio_R_cur;
                        (*it)->updateVioPose(vio_P_cur, vio_R_cur);
                    }
                }
                sequence_loop[cur_kf->sequence] = 1;
            }
 
            // 4.4 Push the current frame's index into the optimization queue
            m_optimize_buf.lock();
            optimize_buf.push(cur_kf->index);
            m_optimize_buf.unlock();
        }
	}
 
    
	m_keyframelist.lock();
    Vector3d P;
    Matrix3d R;
 
    // 5. Read the current frame's VIO pose (P, R) and apply the drift to obtain the corrected pose
    cur_kf->getVioPose(P, R);
    P = r_drift * P + t_drift;
    R = r_drift * R;
    // update the current frame's pose P, R
    cur_kf->updatePose(P, R);
 
    // 6. Publish path[sequence_cnt]
    Quaterniond Q{R};
    geometry_msgs::PoseStamped pose_stamped;
    pose_stamped.header.stamp = ros::Time(cur_kf->time_stamp);
    pose_stamped.header.frame_id = "world";
    pose_stamped.pose.position.x = P.x() + VISUALIZATION_SHIFT_X;
    pose_stamped.pose.position.y = P.y() + VISUALIZATION_SHIFT_Y;
    pose_stamped.pose.position.z = P.z();
    pose_stamped.pose.orientation.x = Q.x();
    pose_stamped.pose.orientation.y = Q.y();
    pose_stamped.pose.orientation.z = Q.z();
    pose_stamped.pose.orientation.w = Q.w();
    path[sequence_cnt].poses.push_back(pose_stamped);
    path[sequence_cnt].header = pose_stamped.header;
 
    // 7. Save the loop-closed trajectory to VINS_RESULT_PATH
    if (SAVE_LOOP_PATH)
    {
        ofstream loop_path_file(VINS_RESULT_PATH, ios::app);
        loop_path_file.setf(ios::fixed, ios::floatfield);
        loop_path_file.precision(0);
        loop_path_file << cur_kf->time_stamp * 1e9 << ",";
        loop_path_file.precision(5);
        loop_path_file  << P.x() << ","
              << P.y() << ","
              << P.z() << ","
              << Q.w() << ","
              << Q.x() << ","
              << Q.y() << ","
              << Q.z() << ","
              << endl;
        loop_path_file.close();
    }
 
    // 8.draw local connection
    if (SHOW_S_EDGE)
    {
        list<KeyFrame*>::reverse_iterator rit = keyframelist.rbegin();
        for (int i = 0; i < 4; i++)
        {
            if (rit == keyframelist.rend())
                break;
            Vector3d conncected_P;
            Matrix3d connected_R;
            if((*rit)->sequence == cur_kf->sequence)
            {
                (*rit)->getPose(conncected_P, connected_R);
                posegraph_visualization->add_edge(P, conncected_P);
            }
            rit++;
        }
    }
    // draw the edge between the current frame and its loop frame
    if (SHOW_L_EDGE)
    {
        if (cur_kf->has_loop)
        {
            //printf("has loop \n");
            KeyFrame* connected_KF = getKeyFrame(cur_kf->loop_index);
            Vector3d connected_P,P0;
            Matrix3d connected_R,R0;
            connected_KF->getPose(connected_P, connected_R);
            //cur_kf->getVioPose(P0, R0);
            cur_kf->getPose(P0, R0);
            if(cur_kf->sequence > 0)
            {
                //printf("add loop into visual \n");
                posegraph_visualization->add_loopedge(P0, connected_P + Vector3d(VISUALIZATION_SHIFT_X, VISUALIZATION_SHIFT_Y, 0));
            }
            
        }
    }
    //posegraph_visualization->add_pose(P + Vector3d(VISUALIZATION_SHIFT_X, VISUALIZATION_SHIFT_Y, 0), Q);
 
	keyframelist.push_back(cur_kf);
    publish();
	m_keyframelist.unlock();
}

5. detectLoop(): loop detection, returning the candidate index

1. query() searches the vocabulary database and returns ret, the similarity scores against earlier keyframes.
2. add() inserts the current keyframe's descriptors into the database.
3. The similarity scores decide whether a loop candidate exists (hconcat() is only used to stitch the debug visualization images together).
4. Loops are only considered for keyframes whose index is greater than 50, i.e. the first 50 frames never close a loop; return min_index, the earliest keyframe whose score exceeds 0.015.

// loop detection: returns the index of the loop candidate
int PoseGraph::detectLoop(KeyFrame* keyframe, int frame_index)
{
    // put image into image_pool; for visualization
    cv::Mat compressed_image;
    if (DEBUG_IMAGE)
    {
        int feature_num = keyframe->keypoints.size();
        cv::resize(keyframe->image, compressed_image, cv::Size(376, 240));
        putText(compressed_image, "feature_num:" + to_string(feature_num), cv::Point2f(10, 10), CV_FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
        image_pool[frame_index] = compressed_image;
    }
    TicToc tmp_t;
 
    QueryResults ret;
    TicToc t_query;
 
    // 1. Query the vocabulary database; ret holds the similarity scores (the 50 most recent frames are excluded from the search)
    db.query(keyframe->brief_descriptors, ret, 4, frame_index - 50);
    //printf("query time: %f", t_query.toc());
    //cout << "Searching for Image " << frame_index << ". " << ret << endl;
 
    // 2. Add the current keyframe's descriptors to the database
    TicToc t_add;
    db.add(keyframe->brief_descriptors);
    //printf("add feature time: %f", t_add.toc());
    
    // ret[0] is the nearest neighbour's score. threshold change with neighour score
    // 3. Use the similarity scores to decide whether a loop candidate exists
    bool find_loop = false;
    cv::Mat loop_result;
 
    if (DEBUG_IMAGE)
    {
        loop_result = compressed_image.clone();
        if (ret.size() > 0)
            putText(loop_result, "neighbour score:" + to_string(ret[0].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
    }
 
    // visual loop result 
    if (DEBUG_IMAGE)
    {
        for (unsigned int i = 0; i < ret.size(); i++)
        {
            int tmp_index = ret[i].Id;
            auto it = image_pool.find(tmp_index);
            cv::Mat tmp_image = (it->second).clone();
            putText(tmp_image, "index:  " + to_string(tmp_index) + "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
            cv::hconcat(loop_result, tmp_image, loop_result);
        }
    }
 
    // require a good similarity score against the nearest neighbour first
    if (ret.size() >= 1 &&ret[0].Score > 0.05)
        for (unsigned int i = 1; i < ret.size(); i++)
        {
            //if (ret[i].Score > ret[0].Score * 0.3)
            // a score above 0.015 marks a loop candidate
            if (ret[i].Score > 0.015)
            {          
                find_loop = true;
                int tmp_index = ret[i].Id;
                if (DEBUG_IMAGE && 0)
                {
                    auto it = image_pool.find(tmp_index);
                    cv::Mat tmp_image = (it->second).clone();
                    putText(tmp_image, "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
                    cv::hconcat(loop_result, tmp_image, loop_result);
                }
            }
 
        }
/*
    if (DEBUG_IMAGE)
    {
        cv::imshow("loop_result", loop_result);
        cv::waitKey(20);
    }
*/
    // 4. Only keyframes with index greater than 50 are considered for loops, i.e. the first 50 frames never close a loop;
    // return min_index, the earliest keyframe whose score exceeds 0.015
    if (find_loop && frame_index > 50)
    {
        int min_index = -1;
        for (unsigned int i = 0; i < ret.size(); i++)
        {
            if (min_index == -1 || (ret[i].Id < min_index && ret[i].Score > 0.015))
                min_index = ret[i].Id;
        }
        return min_index;
    }
    else
        return -1;
 
}

6. optimize4DoF(): 4-DOF pose graph optimization
This function runs in its own thread. It takes the newest index pushed into optimize_buf as the current frame, then optimizes the yaw and translation of every keyframe from the earliest loop frame (which is held fixed) up to the current frame, using sequential edges to the preceding keyframes plus the loop edges. Afterwards it computes the drift (yaw_drift, r_drift, t_drift) between the optimized pose and the VIO pose of the current frame and applies it to the keyframes that were not part of the optimization. A sketch of the 4-DOF residual used here follows the function.

// 4-DOF pose graph optimization
void PoseGraph::optimize4DoF()
{
    while(true)
    {
        int cur_index = -1;
        int first_looped_index = -1;
 
        // take the newest frame in the optimization queue as the current frame
        m_optimize_buf.lock();
        while(!optimize_buf.empty())
        {
            cur_index = optimize_buf.front();
            first_looped_index = earliest_loop_index;
            optimize_buf.pop();
        }
        m_optimize_buf.unlock();
 
        if (cur_index != -1)
        {
            printf("optimize pose graph \n");
            TicToc tmp_t;
 
            m_keyframelist.lock();
            KeyFrame* cur_kf = getKeyFrame(cur_index);
 
            int max_length = cur_index + 1;
 
            // w^t_i   w^q_i
            double t_array[max_length][3];
            Quaterniond q_array[max_length];
            double euler_array[max_length][3];
            double sequence_array[max_length];
 
            ceres::Problem problem;
            ceres::Solver::Options options;
            options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
            //options.minimizer_progress_to_stdout = true;
            //options.max_solver_time_in_seconds = SOLVER_TIME * 3;
            options.max_num_iterations = 5;
            ceres::Solver::Summary summary;
            ceres::LossFunction *loss_function;
            loss_function = new ceres::HuberLoss(0.1);
            //loss_function = new ceres::CauchyLoss(1.0);
            ceres::LocalParameterization* angle_local_parameterization =
                AngleLocalParameterization::Create();
 
            list<KeyFrame*>::iterator it;
 
            int i = 0;
            for (it = keyframelist.begin(); it != keyframelist.end(); it++)
            {
                // skip every keyframe before the first looped frame
                if ((*it)->index < first_looped_index)
                    continue;
                (*it)->local_index = i;
                Quaterniond tmp_q;
                Matrix3d tmp_r;
                Vector3d tmp_t;
                (*it)->getVioPose(tmp_t, tmp_r);
                tmp_q = tmp_r;
                t_array[i][0] = tmp_t(0);
                t_array[i][1] = tmp_t(1);
                t_array[i][2] = tmp_t(2);
                q_array[i] = tmp_q;
 
                Vector3d euler_angle = Utility::R2ypr(tmp_q.toRotationMatrix());
                euler_array[i][0] = euler_angle.x();
                euler_array[i][1] = euler_angle.y();
                euler_array[i][2] = euler_angle.z();
 
                sequence_array[i] = (*it)->sequence;
 
                problem.AddParameterBlock(euler_array[i], 1, angle_local_parameterization);
                problem.AddParameterBlock(t_array[i], 3);
 
                // hold the first looped frame fixed (as well as frames from sequence 0, the previously loaded map)
                if ((*it)->index == first_looped_index || (*it)->sequence == 0)
                {   
                    problem.SetParameterBlockConstant(euler_array[i]);
                    problem.SetParameterBlockConstant(t_array[i]);
                }
 
                //add edge
                // for each frame i, add position and yaw residuals against up to four preceding keyframes
                for (int j = 1; j < 5; j++)
                {
                  if (i - j >= 0 && sequence_array[i] == sequence_array[i-j])
                  {
                    Vector3d euler_conncected = Utility::R2ypr(q_array[i-j].toRotationMatrix());
                    Vector3d relative_t(t_array[i][0] - t_array[i-j][0], t_array[i][1] - t_array[i-j][1], t_array[i][2] - t_array[i-j][2]);
                    relative_t = q_array[i-j].inverse() * relative_t;
                    double relative_yaw = euler_array[i][0] - euler_array[i-j][0];
                    ceres::CostFunction* cost_function = FourDOFError::Create( relative_t.x(), relative_t.y(), relative_t.z(),
                                                   relative_yaw, euler_conncected.y(), euler_conncected.z());
                    problem.AddResidualBlock(cost_function, NULL, euler_array[i-j], 
                                            t_array[i-j], 
                                            euler_array[i], 
                                            t_array[i]);
                  }
                }
 
                //add loop edge
                if((*it)->has_loop)// add a loop edge if this keyframe has a loop
                {
                    // the loop frame's index must be no earlier than the first looped frame
                    assert((*it)->loop_index >= first_looped_index);
                    int connected_index = getKeyFrame((*it)->loop_index)->local_index;
                    Vector3d euler_conncected = Utility::R2ypr(q_array[connected_index].toRotationMatrix());
                    Vector3d relative_t;
                    relative_t = (*it)->getLoopRelativeT();
                    double relative_yaw = (*it)->getLoopRelativeYaw();
                    ceres::CostFunction* cost_function = FourDOFWeightError::Create( relative_t.x(), relative_t.y(), relative_t.z(),
                                                                               relative_yaw, euler_conncected.y(), euler_conncected.z());
                    problem.AddResidualBlock(cost_function, loss_function, euler_array[connected_index], 
                                                                  t_array[connected_index], 
                                                                  euler_array[i], 
                                                                  t_array[i]);
                    
                }
                
                if ((*it)->index == cur_index)
                    break;
                i++;
            }
            m_keyframelist.unlock();
 
            ceres::Solve(options, &problem, &summary);
            //std::cout << summary.BriefReport() << "\n";
            
            //printf("pose optimization time: %f \n", tmp_t.toc());
            /*
            for (int j = 0 ; j < i; j++)
            {
                printf("optimize i: %d p: %f, %f, %f\n", j, t_array[j][0], t_array[j][1], t_array[j][2] );
            }
            */
 
            m_keyframelist.lock();
            i = 0;
            // write the optimized yaw/translation back into the keyframes that took part in the optimization
            for (it = keyframelist.begin(); it != keyframelist.end(); it++)
            {
                if ((*it)->index < first_looped_index)
                    continue;
                Quaterniond tmp_q;
                tmp_q = Utility::ypr2R(Vector3d(euler_array[i][0], euler_array[i][1], euler_array[i][2]));
                Vector3d tmp_t = Vector3d(t_array[i][0], t_array[i][1], t_array[i][2]);
                Matrix3d tmp_r = tmp_q.toRotationMatrix();
                (*it)-> updatePose(tmp_t, tmp_r);
 
                if ((*it)->index == cur_index)
                    break;
                i++;
            }
            // compute the drift between the optimized pose and the VIO pose of the current frame;
            // it is applied below to the keyframes that come after cur_index
            Vector3d cur_t, vio_t;
            Matrix3d cur_r, vio_r;
            cur_kf->getPose(cur_t, cur_r);
            cur_kf->getVioPose(vio_t, vio_r);
            m_drift.lock();
            yaw_drift = Utility::R2ypr(cur_r).x() - Utility::R2ypr(vio_r).x();
            r_drift = Utility::ypr2R(Vector3d(yaw_drift, 0, 0));
            t_drift = cur_t - r_drift * vio_t;
            m_drift.unlock();
            //cout << "t_drift " << t_drift.transpose() << endl;
            //cout << "r_drift " << Utility::R2ypr(r_drift).transpose() << endl;
            //cout << "yaw drift " << yaw_drift << endl;
 
            it++;
            for (; it != keyframelist.end(); it++)
            {
                Vector3d P;
                Matrix3d R;
                (*it)->getVioPose(P, R);
                P = r_drift * P + t_drift;
                R = r_drift * R;
                (*it)->updatePose(P, R);
            }
            m_keyframelist.unlock();
            updatePath();
        }
 
        std::chrono::milliseconds dura(2000);
        std::this_thread::sleep_for(dura);
    }
}
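The cost functions used above, FourDOFError for the sequential edges and FourDOFWeightError for the loop edges, are defined alongside the pose graph code. The sketch below illustrates the 4-DOF residual they compute: the relative translation expressed in frame i (whose pitch and roll come from VIO and stay fixed) and the relative yaw. It is a simplified stand-in rather than the verbatim VINS-Mono functor; in particular, the real code also normalizes the yaw residual and the loop-edge variant additionally weights the terms.

// Sketch of the 4-DOF relative-pose residual (simplified; not the verbatim FourDOFError).
#include <ceres/ceres.h>
#include <cmath>

// Rotation matrix (row-major, 3x3) from yaw/pitch/roll in degrees, matching Utility::ypr2R.
template <typename T>
void YprToRotation(T yaw, T pitch, T roll, T R[9])
{
    using std::cos; using std::sin;
    T y = yaw / T(180.0) * T(M_PI);
    T p = pitch / T(180.0) * T(M_PI);
    T r = roll / T(180.0) * T(M_PI);
    R[0] = cos(y) * cos(p);
    R[1] = cos(y) * sin(p) * sin(r) - sin(y) * cos(r);
    R[2] = cos(y) * sin(p) * cos(r) + sin(y) * sin(r);
    R[3] = sin(y) * cos(p);
    R[4] = sin(y) * sin(p) * sin(r) + cos(y) * cos(r);
    R[5] = sin(y) * sin(p) * cos(r) - cos(y) * sin(r);
    R[6] = -sin(p);
    R[7] = cos(p) * sin(r);
    R[8] = cos(p) * cos(r);
}

struct FourDofResidualSketch
{
    FourDofResidualSketch(double t_x, double t_y, double t_z,
                          double relative_yaw, double pitch_i, double roll_i)
        : t_x(t_x), t_y(t_y), t_z(t_z),
          relative_yaw(relative_yaw), pitch_i(pitch_i), roll_i(roll_i) {}

    template <typename T>
    bool operator()(const T* const yaw_i, const T* ti,
                    const T* const yaw_j, const T* tj, T* residuals) const
    {
        // displacement from keyframe i to keyframe j in the world frame
        T t_w_ij[3];
        for (int k = 0; k < 3; k++)
            t_w_ij[k] = tj[k] - ti[k];

        // rotate it into frame i; pitch and roll of frame i are constants taken from VIO
        T R[9];
        YprToRotation(yaw_i[0], T(pitch_i), T(roll_i), R);
        T t_i_ij[3];
        t_i_ij[0] = R[0] * t_w_ij[0] + R[3] * t_w_ij[1] + R[6] * t_w_ij[2];  // R^T * t
        t_i_ij[1] = R[1] * t_w_ij[0] + R[4] * t_w_ij[1] + R[7] * t_w_ij[2];
        t_i_ij[2] = R[2] * t_w_ij[0] + R[5] * t_w_ij[1] + R[8] * t_w_ij[2];

        // residual = predicted relative measurement - observed relative measurement
        residuals[0] = t_i_ij[0] - T(t_x);
        residuals[1] = t_i_ij[1] - T(t_y);
        residuals[2] = t_i_ij[2] - T(t_z);
        residuals[3] = yaw_j[0] - yaw_i[0] - T(relative_yaw);  // real code also wraps this to [-180, 180)
        return true;
    }

    static ceres::CostFunction* Create(double t_x, double t_y, double t_z,
                                       double relative_yaw, double pitch_i, double roll_i)
    {
        return new ceres::AutoDiffCostFunction<FourDofResidualSketch, 4, 1, 3, 1, 3>(
            new FourDofResidualSketch(t_x, t_y, t_z, relative_yaw, pitch_i, roll_i));
    }

    double t_x, t_y, t_z, relative_yaw, pitch_i, roll_i;
};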

7. keyframe.h
It defines two classes: BriefExtractor and KeyFrame.
BriefExtractor computes BRIEF descriptors for a set of image keypoints using a BRIEF pattern file.

class BriefExtractor
{
public:
  virtual void operator()(const cv::Mat &im, vector<cv::KeyPoint> &keys, vector<BRIEF::bitset> &descriptors) const;
  BriefExtractor(const std::string &pattern_file);
 
  DVision::BRIEF m_brief;
};
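The implementation in keyframe.cpp is essentially a thin wrapper around DVision::BRIEF. A sketch of the idea, assuming the DVision API bundled with VINS-Mono (importPairs() loads the sampling pattern, compute() produces one bitset per keypoint):

// Sketch of BriefExtractor (simplified; see keyframe.cpp for the actual code).
#include <opencv2/opencv.hpp>
#include <stdexcept>
#include <string>
#include <vector>

BriefExtractor::BriefExtractor(const std::string &pattern_file)
{
    // the pattern file stores the point pairs sampled by the BRIEF test
    cv::FileStorage fs(pattern_file.c_str(), cv::FileStorage::READ);
    if (!fs.isOpened())
        throw std::runtime_error("Could not open " + pattern_file);

    std::vector<int> x1, y1, x2, y2;
    fs["x1"] >> x1;
    fs["x2"] >> x2;
    fs["y1"] >> y1;
    fs["y2"] >> y2;
    m_brief.importPairs(x1, y1, x2, y2);    // assumed DVision::BRIEF API
}

void BriefExtractor::operator()(const cv::Mat &im, std::vector<cv::KeyPoint> &keys,
                                std::vector<BRIEF::bitset> &descriptors) const
{
    m_brief.compute(im, keys, descriptors); // one binary descriptor per keypoint
}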

KeyFrame builds a keyframe and matches it against its loop candidate using BRIEF descriptors; descriptors are compared with the Hamming distance, as sketched below.
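The descriptor matching inside KeyFrame is a brute-force nearest-neighbour search under the Hamming distance. A minimal sketch of that core step (the acceptance threshold is an assumption; the real matcher also restricts the search window and then applies the RANSAC checks described earlier):

// Sketch: match one BRIEF descriptor of the current keyframe against all descriptors
// of the loop candidate; the Hamming distance is the popcount of the XOR.
#include <vector>
#include "keyframe.h"   // assumed: provides BRIEF::bitset (DVision)

int bestMatch(const BRIEF::bitset &cur_descriptor,
              const std::vector<BRIEF::bitset> &old_descriptors)
{
    int best_dist = 128;   // assumed acceptance threshold (out of 256 bits)
    int best_index = -1;
    for (int i = 0; i < (int)old_descriptors.size(); i++)
    {
        int dis = (int)(cur_descriptor ^ old_descriptors[i]).count();  // Hamming distance
        if (dis < best_dist)
        {
            best_dist = dis;
            best_index = i;
        }
    }
    return best_index;     // -1 if no descriptor is close enough
}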
Source of the code flowcharts: "VINS-Mono代碼解讀——迴環檢測與重定位 pose graph loop closing" by Manii.
