VINS-FUSION Source Code Framework and C++ Knowledge Summary

  VINS-FUSION is the open-source visual-inertial navigation SLAM system from the HKUST Aerial Robotics Group. It is called SLAM here because, unlike a pure VIO, it has loop closure and map reuse, making it a complete optimization-based SLAM system. For an introduction to the algorithm and its underlying mathematics, see the links below; this article does not cover them and instead focuses on how the code implements the pipeline.
VINS-FUSION on GitHub
Camera and IMU coordinate frames in VINS
VINS-Mono paper study and code walkthrough — an excellent blog with both math derivations and code annotations.
A detailed walkthrough of VINS-Mono, covering the overall pipeline and formula derivations.
Annotated code from several experts (I have not read these; comparing several is recommended):
VINS-Mono-Learning
VINS-Mono Chinese annotations
VINS-Mono-code-annotation
My own annotated code, covering only the VIO part:
VINS-FUSION-Annotation

The VINS-FUSION Program Architecture

  Like mainstream SLAM systems, it is split into a front end and a back end. The front end extracts visual information (monocular or stereo feature points) and tracks it; the back end takes the front end's feature tracks, fuses external information such as IMU and GPS, and estimates the current state in real time with an optimization algorithm. At runtime you can choose whether to use multiple threads; the multi-threaded mode is assumed here. The running system consists of subscriber callbacks for the various sensor messages, the front-end visual processing function sync_process(), and the back-end optimization in processMeasurements(). These two parts are walked through below.

Front End

  The front end's job is to extract and track visual information. Unlike MSCKF, whose front-end flow can be understood from a single function, the vins-fusion front end requires unwrapping several layers of nested functions before the flow becomes clear; it is encapsulated in **sync_process()**.
   First, the subscriber sub_img receives an image message and its callback stores it in the buffer img_buf; no image processing happens in the callback itself. The front-end flow:

(front-end flowchart) Input: image → featureTracker.trackImage(): if the previous frame has feature points, LK-track them from the previous frame and verify with cv::calcOpticalFlowPyrLK(); extract new feature points with cv::goodFeaturesToTrack(); if stereo, match left and right images with cv::calcOpticalFlowPyrLK(); finally package the features via featureFrame[feature_id].emplace_back(). Output: the packaged feature set featureFrame.

  The front-end output is a map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame object. The first int is the feature id, the second int is the camera_id (indicating which camera observed the point), and the 7x1 matrix holds the point's information in this frame (normalized coordinates, pixel coordinates, and pixel velocity).
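The shape of this container can be sketched without any Eigen dependency (a plain std::array stands in for Eigen::Matrix<double, 7, 1>; the helper function is hypothetical, only the emplace_back pattern mirrors the source):

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

// x, y, z (normalized coords), u, v (pixel coords), vx, vy (pixel velocity)
using Obs = std::array<double, 7>;
// feature_id -> observations of that feature in the current frame,
// one entry per camera, each tagged with its camera_id.
using FeatureFrame = std::map<int, std::vector<std::pair<int, Obs>>>;

// Record one observation, mirroring
// featureFrame[feature_id].emplace_back(camera_id, xyz_uv_velocity).
void add_observation(FeatureFrame& ff, int feature_id, int camera_id,
                     const Obs& obs) {
    ff[feature_id].emplace_back(camera_id, obs);
}
```

In a stereo setup a feature seen by both cameras simply gets two entries under the same feature id, which is how the back end later tells mono-only points from stereo-matched ones.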

The front end is fairly simple, and a comparison makes it even easier to follow; its similarities and differences with msckf-vio:

  1. Both are feature-point methods. VINS extracts corners with cv::goodFeaturesToTrack (Shi-Tomasi by default; the OpenCV docs note the function can also use the Harris detector), while msckf-vio extracts FAST corners.
  2. Both use OpenCV's pyramidal LK optical flow to match features between consecutive frames and between the stereo pair.
  3. The outlier rejection differs: msckf-vio uses a geometric check (RANSAC with the fundamental matrix), while VINS uses forward-and-backward LK tracking (swapping the tracked and tracking images) as a consistency check.
  4. msckf-vio keeps only stereo-matched points, while VINS also keeps points seen by a single camera; the paper argues monocular points still constrain the rotational part of the state.
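The forward-backward check in point 3 boils down to a few lines. A dependency-free sketch (the function name and threshold are illustrative, not from the source; in VINS the two tracking passes are done by cv::calcOpticalFlowPyrLK):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Forward-backward consistency check: track prev->cur, then track the result
// back cur->prev, and keep a point only if both passes succeed and the point
// returns within `thresh` pixels of where it started.
std::vector<unsigned char> fb_check(const std::vector<Pt>& prev_pts,
                                    const std::vector<Pt>& back_pts,
                                    const std::vector<unsigned char>& fwd_status,
                                    const std::vector<unsigned char>& bwd_status,
                                    float thresh) {
    std::vector<unsigned char> status(prev_pts.size(), 0);
    for (size_t i = 0; i < prev_pts.size(); ++i) {
        float dx = prev_pts[i].x - back_pts[i].x;
        float dy = prev_pts[i].y - back_pts[i].y;
        if (fwd_status[i] && bwd_status[i] &&
            std::sqrt(dx * dx + dy * dy) <= thresh)
            status[i] = 1;  // consistent round trip: keep this track
    }
    return status;
}
```

Compared with a RANSAC + F-matrix test, this needs no model fitting and catches drifting tracks directly, at the cost of a second optical-flow pass.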

Function Walkthrough

The individual function modules are explained below:
  cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3): tracks features between two frames with pyramidal LK optical flow, either between consecutive frames or between the two stereo images captured at the same time; it is far cheaper than descriptor-based matching. The arguments are: <source image, target image, input feature points (known), output tracked points (initially empty), tracking status (output), error (output), per-level search window size, number of pyramid levels (3 here, i.e., 4 levels including level 0)>.
There are further optional parameters; see the calcOpticalFlowPyrLK entry on the OpenCV website for details.

  cv::goodFeaturesToTrack(cur_img, n_pts, MAX_CNT - cur_pts.size(), 0.01, MIN_DIST, mask)
extracts corners; the arguments are <image, output corner array, maximum number to extract, quality level, minimum distance between corners, mask>. See the goodFeaturesToTrack entry on the OpenCV website and blog posts on the LK algorithm for more.

Back End

  One strength of VINS-FUSION is that the back end integrates the camera-IMU initialization, which can calibrate the camera-IMU extrinsics, the scale factor, and so on; this section therefore covers initialization and optimization separately. The back end can be configured to run right after the front end or in its own thread; it is encapsulated in Estimator::processMeasurements(). Since different sensor setups involve different technical details, the relevant cases are discussed separately where needed.

Initialization

Monocular VIO Initialization

  Initialization uses a loosely coupled visual-IMU scheme: first SfM solves for the poses of all frames in the sliding window (10 frames) and the positions of all 3D landmarks; these are then aligned with the IMU preintegration terms to solve for the gravity direction, the scale factor, the gyroscope bias, and each frame's initial velocity. In other words, the visual information serves as the reference for initializing the IMU state. (initialization flowchart)

Function Walkthrough

Unlike msckf-vio, this codebase nests functions heavily; the individual functions are explained below (for some, the parameter lists are too long to reproduce, but they are easy to find in the source):

getIMUInterval(prevTime, curTime, accVector, gyrVector): extracts the IMU data between two frames;

initFirstIMUPose(accVector): static initialization; uses the fact that gravity has different vector representations in the world frame and the IMU frame to obtain the initial IMU pose Rs[0];
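The idea can be sketched without Eigen: average the accelerometer readings (which measure the reaction to gravity, pointing "up" in the body frame) and find the rotation that maps that direction onto the world z axis. This is a minimal illustration of the alignment step, not the actual VINS code (which additionally zeroes the yaw):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Rodrigues' formula: rotate v about unit axis k by angle theta.
Vec3 rotate(const Vec3& k, double theta, const Vec3& v) {
    double c = std::cos(theta), s = std::sin(theta);
    Vec3 kxv = {k[1]*v[2] - k[2]*v[1], k[2]*v[0] - k[0]*v[2], k[0]*v[1] - k[1]*v[0]};
    double kdv = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];
    Vec3 out;
    for (int i = 0; i < 3; ++i)
        out[i] = v[i]*c + kxv[i]*s + k[i]*kdv*(1.0 - c);
    return out;
}

// Find the axis/angle rotating the (normalized) mean accelerometer vector
// onto the world "up" direction (0, 0, 1).
void align_to_gravity(const Vec3& acc_mean, Vec3& axis, double& angle) {
    double n = std::sqrt(acc_mean[0]*acc_mean[0] + acc_mean[1]*acc_mean[1] +
                         acc_mean[2]*acc_mean[2]);
    Vec3 a = {acc_mean[0]/n, acc_mean[1]/n, acc_mean[2]/n};
    // cross(a, z) gives the rotation axis; its norm is sin(angle), a.z is cos(angle)
    Vec3 cr = {a[1], -a[0], 0.0};
    double s = std::sqrt(cr[0]*cr[0] + cr[1]*cr[1]);
    angle = std::atan2(s, a[2]);
    if (s < 1e-12) axis = {1.0, 0.0, 0.0};       // already (anti)aligned
    else           axis = {cr[0]/s, cr[1]/s, 0.0};
}
```

Applying rotate(axis, angle, ·) to the normalized mean acceleration should return (0, 0, 1), which is the essence of how Rs[0] is fixed.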

processIMU(): IMU preintegration produces the preintegration terms between two image frames, pre_integrations[i] and tmp_pre_integration, and midpoint integration produces the global IMU states Rs[i], Ps[i], Vs[i] at each image time. Note: these are computed only when frame_count != 0; when frame_count == 0 the preintegration is zero and the global state Rs[0] is the initial state, so no midpoint integration is needed;
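The midpoint rule used for propagation is easy to see in one dimension (a toy sketch only; the real update runs on SO(3) x R^3 with bias and gravity terms):

```cpp
// One midpoint-integration step for a 1-D state (position, velocity):
//   a_mid   = (a_k + a_{k+1}) / 2
//   p_{k+1} = p_k + v_k * dt + a_mid * dt^2 / 2
//   v_{k+1} = v_k + a_mid * dt
struct State1D { double p, v; };

State1D midpoint_step(State1D s, double a0, double a1, double dt) {
    double a_mid = 0.5 * (a0 + a1);  // average the two accelerometer samples
    State1D out;
    out.p = s.p + s.v * dt + 0.5 * a_mid * dt * dt;
    out.v = s.v + a_mid * dt;
    return out;
}
```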

CalibrationExRotation(): takes WINDOW_SIZE (set to 10) states, i.e., WINDOW_SIZE images. The rotation between each pair of adjacent frames is recovered from the fundamental matrix computed from feature matches, and the rotation between the corresponding IMU frames comes from IMU preintegration; matching the two yields the camera-IMU extrinsic rotation. Note that the rotation from visual matching and the rotation from IMU integration are mutually inverse, so the code transposes (i.e., inverts) the pairwise visual rotations. This step requires shaking the camera so that its pose changes noticeably.

sfm.construct(): first, scanning the image window from oldest to newest, selects as reference a frame L that shares a sufficient number of covisible points, with enough parallax, with the current (newest) frame; then triangulation plus PnP produce initial poses and 3D point coordinates; finally a full BA refines them. Note: the reference is not the first frame, the translations in the poses carry no metric scale, and the rotations are transformed into the IMU frame.

solveGyroscopeBias(all_image_frame, Bgs): initializes the gyroscope bias, using the frame-to-frame IMU rotations obtained from SfM and from IMU preintegration; taking vision as the reference, the gyroscope bias can be solved. The bias is constant within each frame interval but differs across intervals.

LinearAlignment(all_image_frame, g, x): minimizes the residual between the velocity and position increments from IMU preintegration and those predicted from the quantities being estimated; this yields g in the first frame, the scale, and the IMU velocity at each frame time, and finally refines g.

pre_integrations[i]->repropagate(Vector3d::Zero(), Bgs[i]): redoes the preintegration with the gyroscope bias just solved, following the same procedure as before; note the accelerometer bias is set to zero here.

optimization(): joint visual-inertial optimization plus marginalization.

updateLatestStates(): copies the newest frame's information into the latest_* quantities.

slideWindow(): removes the information of the marginalized frame.

Optimization

The back-end optimization flow after initialization (essentially a sub-graph of the initialization flow):
(back-end optimization flowchart)
  The code as a whole is not hard to follow; the difficulty lies in the optimization code, which is also the core of the whole system. Optimization appears in the initialization SfM and in the back-end optimization(); both parts are extracted and explained here.

I. sfm

//full BA
	ceres::Problem problem;
	ceres::LocalParameterization* local_parameterization = new ceres::QuaternionParameterization(); // note this parameterization
	//cout << " begin full BA " << endl;
    //step1: add the parameters to optimize: camera poses
	for (int i = 0; i < frame_num; i++)
	{
		//double array for ceres
		c_translation[i][0] = c_Translation[i].x();
		c_translation[i][1] = c_Translation[i].y();
		c_translation[i][2] = c_Translation[i].z();
		c_rotation[i][0] = c_Quat[i].w();
		c_rotation[i][1] = c_Quat[i].x();
		c_rotation[i][2] = c_Quat[i].y();
		c_rotation[i][3] = c_Quat[i].z();
        problem.AddParameterBlock(c_rotation[i], 4, local_parameterization); // add each frame's initial rotation
        problem.AddParameterBlock(c_translation[i], 3); // add each frame's initial translation
		if (i == l)
		{
            problem.SetParameterBlockConstant(c_rotation[i]); // frame l is the reference frame; its rotation stays fixed
		}
		if (i == l || i == frame_num - 1)
		{
            problem.SetParameterBlockConstant(c_translation[i]); // the translations of frame l and the newest (current) frame stay fixed -- why?
		}
	}

    //add the residual terms: reprojection errors
	for (int i = 0; i < feature_num; i++)
	{
		if (sfm_f[i].state != true)
			continue;
		for (int j = 0; j < int(sfm_f[i].observation.size()); j++)
		{
            int l = sfm_f[i].observation[j].first; // the frame observing this feature point (shadows the reference-frame index l above)
            // the reprojection-error cost_function
			ceres::CostFunction* cost_function = ReprojectionError3D::Create(
												sfm_f[i].observation[j].second.x(),
												sfm_f[i].observation[j].second.y());

    		problem.AddResidualBlock(cost_function, NULL, c_rotation[l], c_translation[l], 
    								sfm_f[i].position);	 
		}

    }   // cost functions added
	ceres::Solver::Options options;
	options.linear_solver_type = ceres::DENSE_SCHUR;
	//options.minimizer_progress_to_stdout = true;
	options.max_solver_time_in_seconds = 0.2;
	ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);  // solve

The residual functor is defined as:

struct ReprojectionError3D
{
  ReprojectionError3D(double observed_u, double observed_v)
  	:observed_u(observed_u), observed_v(observed_v)
  	{}

  template <typename T>
  bool operator()(const T* const camera_R, const T* const camera_T, const T* point, T* residuals) const
  {
  	T p[3];
  	ceres::QuaternionRotatePoint(camera_R, point, p);
  	p[0] += camera_T[0]; p[1] += camera_T[1]; p[2] += camera_T[2];
  	T xp = p[0] / p[2];
  	T yp = p[1] / p[2];
  	residuals[0] = xp - T(observed_u);
  	residuals[1] = yp - T(observed_v);
  	return true;
  }

  static ceres::CostFunction* Create(const double observed_x,
                                     const double observed_y) 
  {
    return (new ceres::AutoDiffCostFunction<
            ReprojectionError3D, 2, 4, 3, 3>(
            	new ReprojectionError3D(observed_x,observed_y)));
  }

  double observed_u;
  double observed_v;
};

As can be seen, the SfM uses automatic differentiation. This is the standard Ceres workflow for a BA problem: add the variables to optimize, define and add the residual terms (here the reprojection errors), then configure the solver and solve. For a tutorial on this part of Ceres, see the official Google Ceres documentation.

II. optimization()

The core code is broken down below:

1. Adding the camera state parameters

//add the camera state parameters and the camera-IMU extrinsic parameters
 for (int i = 0; i < frame_count + 1; i++)
  {
      ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization(); // local pose parameterization
      problem.AddParameterBlock(para_Pose[i], SIZE_POSE, local_parameterization); // para_Pose: with IMU, the IMU position and orientation quaternion; without IMU, the camera pose
      if(USE_IMU)
          problem.AddParameterBlock(para_SpeedBias[i], SIZE_SPEEDBIAS); // para_SpeedBias: IMU velocity plus gyroscope and accelerometer biases
  }
  if(!USE_IMU)
      problem.SetParameterBlockConstant(para_Pose[0]);
//add the camera-IMU extrinsic parameters
 for (int i = 0; i < NUM_OF_CAM; i++) // NUM_OF_CAM: number of cameras
  {
      ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization(); // local pose parameterization
      problem.AddParameterBlock(para_Ex_Pose[i], SIZE_POSE, local_parameterization);  // para_Ex_Pose: camera-IMU extrinsics
      if ((ESTIMATE_EXTRINSIC && frame_count == WINDOW_SIZE && Vs[0].norm() > 0.2) || openExEstimation)   // optimize the extrinsics under these conditions
      {
          //ROS_INFO("estimate extinsic param");
          openExEstimation = 1;
      }
      else
      {
          //ROS_INFO("fix extinsic param");
          problem.SetParameterBlockConstant(para_Ex_Pose[i]);
      }
  }

Note the "local pose parameterization" here; the Ceres website describes LocalParameterization in detail. The object mainly defines the parameter's generalized addition and its Jacobian. Without it, Ceres uses plain element-wise addition (1+1=2) and the default Jacobian computation. This parameter block contains the pose's rotation quaternion, whose "addition" is not element-wise, so it must be redefined. The code:

class PoseLocalParameterization : public ceres::LocalParameterization
{
  virtual bool Plus(const double *x, const double *delta, double *x_plus_delta) const;
  virtual bool ComputeJacobian(const double *x, double *jacobian) const;
  virtual int GlobalSize() const { return 7; };
  virtual int LocalSize() const { return 6; };
};

bool PoseLocalParameterization::Plus(const double *x, const double *delta, double *x_plus_delta) const
{
  Eigen::Map<const Eigen::Vector3d> _p(x);
  Eigen::Map<const Eigen::Quaterniond> _q(x + 3);
  Eigen::Map<const Eigen::Vector3d> dp(delta);
  Eigen::Quaterniond dq = Utility::deltaQ(Eigen::Map<const Eigen::Vector3d>(delta + 3));
  Eigen::Map<Eigen::Vector3d> p(x_plus_delta);
  Eigen::Map<Eigen::Quaterniond> q(x_plus_delta + 3);
  p = _p + dp;
  q = (_q * dq).normalized();  // quaternion multiplication
  return true;
}
bool PoseLocalParameterization::ComputeJacobian(const double *x, double *jacobian) const
{
  Eigen::Map<Eigen::Matrix<double, 7, 6, Eigen::RowMajor>> j(jacobian);
  j.topRows<6>().setIdentity();
  j.bottomRows<1>().setZero();
  return true;
}
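The boxplus in Plus() can be demonstrated with a minimal hand-rolled quaternion (a sketch only; names like boxplus are illustrative, and deltaQ uses the same half-angle first-order approximation as Utility::deltaQ):

```cpp
#include <cmath>

// Quaternion stored as (w, x, y, z).
struct Quat { double w, x, y, z; };

// Hamilton product a * b.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Small rotation vector -> quaternion, first order: (1, theta/2).
Quat deltaQ(double dx, double dy, double dz) {
    return {1.0, 0.5 * dx, 0.5 * dy, 0.5 * dz};
}

// The generalized "plus" of the rotation block: q_new = q * deltaQ(dtheta),
// normalized afterwards, exactly the pattern in PoseLocalParameterization::Plus.
Quat boxplus(const Quat& q, double dx, double dy, double dz) {
    Quat r = mul(q, deltaQ(dx, dy, dz));
    double n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    return {r.w / n, r.x / n, r.y / n, r.z / n};
}
```

For a small perturbation about the identity, the result is close to the exact quaternion (cos(θ/2), sin(θ/2)·axis), which is why a first-order deltaQ suffices inside an iterative solver.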

2. Adding the marginalization prior residual (explained in detail later)

    if (last_marginalization_info && last_marginalization_info->valid)
    {
        // construct new marginlization_factor
        MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
        problem.AddResidualBlock(marginalization_factor, NULL,
                                 last_marginalization_parameter_blocks);
    }

3. IMU preintegration residual: source code and math

 if(USE_IMU) // add the preintegration residuals
    {
        for (int i = 0; i < frame_count; i++)
        {
            int j = i + 1;
            if (pre_integrations[j]->sum_dt > 10.0)   // preintegration interval too long; skip
                continue;
            IMUFactor* imu_factor = new IMUFactor(pre_integrations[j]);
            problem.AddResidualBlock(imu_factor, NULL, para_Pose[i], para_SpeedBias[i], para_Pose[j], para_SpeedBias[j]);
        }
    }

The IMUFactor that builds this residual:

class IMUFactor : public ceres::SizedCostFunction<15, 7, 9, 7, 9>
{
  public:
    IMUFactor() = delete;
    IMUFactor(IntegrationBase* _pre_integration):pre_integration(_pre_integration)
    {
    }
    virtual bool Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
    {
       // the parameters to estimate: the poses, velocities, and gyroscope/accelerometer biases of adjacent frames i and j in the world frame
        Eigen::Vector3d Pi(parameters[0][0], parameters[0][1], parameters[0][2]);
        Eigen::Quaterniond Qi(parameters[0][6], parameters[0][3], parameters[0][4], parameters[0][5]);
        Eigen::Vector3d Vi(parameters[1][0], parameters[1][1], parameters[1][2]);
        Eigen::Vector3d Bai(parameters[1][3], parameters[1][4], parameters[1][5]);
        Eigen::Vector3d Bgi(parameters[1][6], parameters[1][7], parameters[1][8]);
        Eigen::Vector3d Pj(parameters[2][0], parameters[2][1], parameters[2][2]);
        Eigen::Quaterniond Qj(parameters[2][6], parameters[2][3], parameters[2][4], parameters[2][5]);
        Eigen::Vector3d Vj(parameters[3][0], parameters[3][1], parameters[3][2]);
        Eigen::Vector3d Baj(parameters[3][3], parameters[3][4], parameters[3][5]);
        Eigen::Vector3d Bgj(parameters[3][6], parameters[3][7], parameters[3][8]);

#if 0
        if ((Bai - pre_integration->linearized_ba).norm() > 0.10 ||
            (Bgi - pre_integration->linearized_bg).norm() > 0.01)
        {
            pre_integration->repropagate(Bai, Bgi);
        }
#endif
        //compute the residual
        Eigen::Map<Eigen::Matrix<double, 15, 1>> residual(residuals);
        residual = pre_integration->evaluate(Pi, Qi, Vi, Bai, Bgi,
                                            Pj, Qj, Vj, Baj, Bgj);
       // weight the residual by the square-root information matrix
        Eigen::Matrix<double, 15, 15> sqrt_info = Eigen::LLT<Eigen::Matrix<double, 15, 15>>(pre_integration->covariance.inverse()).matrixL().transpose();
        residual = sqrt_info * residual;
        // compute the Jacobians
        if (jacobians)
        {
            double sum_dt = pre_integration->sum_dt;
            Eigen::Matrix3d dp_dba = pre_integration->jacobian.template block<3, 3>(O_P, O_BA);
            Eigen::Matrix3d dp_dbg = pre_integration->jacobian.template block<3, 3>(O_P, O_BG);
            Eigen::Matrix3d dq_dbg = pre_integration->jacobian.template block<3, 3>(O_R, O_BG);
            Eigen::Matrix3d dv_dba = pre_integration->jacobian.template block<3, 3>(O_V, O_BA);
            Eigen::Matrix3d dv_dbg = pre_integration->jacobian.template block<3, 3>(O_V, O_BG);
            if (pre_integration->jacobian.maxCoeff() > 1e8 || pre_integration->jacobian.minCoeff() < -1e8)
            {
                ROS_WARN("numerical unstable in preintegration");
            }
            if (jacobians[0])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);
                jacobian_pose_i.setZero();
                jacobian_pose_i.block<3, 3>(O_P, O_P) = -Qi.inverse().toRotationMatrix();
                jacobian_pose_i.block<3, 3>(O_P, O_R) = Utility::skewSymmetric(Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt));

#if 0
            jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Qj.inverse() * Qi).toRotationMatrix();
#else
                Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
                jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Utility::Qleft(Qj.inverse() * Qi) * Utility::Qright(corrected_delta_q)).bottomRightCorner<3, 3>();
#endif
                jacobian_pose_i.block<3, 3>(O_V, O_R) = Utility::skewSymmetric(Qi.inverse() * (G * sum_dt + Vj - Vi));
                jacobian_pose_i = sqrt_info * jacobian_pose_i;
                if (jacobian_pose_i.maxCoeff() > 1e8 || jacobian_pose_i.minCoeff() < -1e8)
                {
                    ROS_WARN("numerical unstable in preintegration");
                }
            }
            if (jacobians[1])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_i(jacobians[1]);
                jacobian_speedbias_i.setZero();
                jacobian_speedbias_i.block<3, 3>(O_P, O_V - O_V) = -Qi.inverse().toRotationMatrix() * sum_dt;
                jacobian_speedbias_i.block<3, 3>(O_P, O_BA - O_V) = -dp_dba;
                jacobian_speedbias_i.block<3, 3>(O_P, O_BG - O_V) = -dp_dbg;

#if 0
            jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -dq_dbg;
#else               
            jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -Utility::Qleft(Qj.inverse() * Qi * pre_integration->delta_q).bottomRightCorner<3, 3>() * dq_dbg;
#endif

                jacobian_speedbias_i.block<3, 3>(O_V, O_V - O_V) = -Qi.inverse().toRotationMatrix();
                jacobian_speedbias_i.block<3, 3>(O_V, O_BA - O_V) = -dv_dba;
                jacobian_speedbias_i.block<3, 3>(O_V, O_BG - O_V) = -dv_dbg;
                jacobian_speedbias_i.block<3, 3>(O_BA, O_BA - O_V) = -Eigen::Matrix3d::Identity();
                jacobian_speedbias_i.block<3, 3>(O_BG, O_BG - O_V) = -Eigen::Matrix3d::Identity();
                jacobian_speedbias_i = sqrt_info * jacobian_speedbias_i;
            }
            if (jacobians[2])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[2]);
                jacobian_pose_j.setZero();
                jacobian_pose_j.block<3, 3>(O_P, O_P) = Qi.inverse().toRotationMatrix();
#if 0
            jacobian_pose_j.block<3, 3>(O_R, O_R) = Eigen::Matrix3d::Identity();
#else               
             Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
             jacobian_pose_j.block<3, 3>(O_R, O_R) = Utility::Qleft(corrected_delta_q.inverse() * Qi.inverse() * Qj).bottomRightCorner<3, 3>();
#endif
                jacobian_pose_j = sqrt_info * jacobian_pose_j;
            }
            if (jacobians[3])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_j(jacobians[3]);
                jacobian_speedbias_j.setZero();
                jacobian_speedbias_j.block<3, 3>(O_V, O_V - O_V) = Qi.inverse().toRotationMatrix();
                jacobian_speedbias_j.block<3, 3>(O_BA, O_BA - O_V) = Eigen::Matrix3d::Identity();
                jacobian_speedbias_j.block<3, 3>(O_BG, O_BG - O_V) = Eigen::Matrix3d::Identity();
                jacobian_speedbias_j = sqrt_info * jacobian_speedbias_j;
            }
        }

        return true;
    }
    IntegrationBase* pre_integration;
};

The residual computation pre_integration->evaluate():

Eigen::Matrix<double, 15, 1> evaluate(const Eigen::Vector3d &Pi, const Eigen::Quaterniond &Qi, const Eigen::Vector3d &Vi, const Eigen::Vector3d &Bai, const Eigen::Vector3d &Bgi,
                                          const Eigen::Vector3d &Pj, const Eigen::Quaterniond &Qj, const Eigen::Vector3d &Vj, const Eigen::Vector3d &Baj, const Eigen::Vector3d &Bgj)
    {
        Eigen::Matrix<double, 15, 1> residuals;
        //correct the preintegration terms with the bias updates
        Eigen::Matrix3d dp_dba = jacobian.block<3, 3>(O_P, O_BA);
        Eigen::Matrix3d dp_dbg = jacobian.block<3, 3>(O_P, O_BG);

        Eigen::Matrix3d dq_dbg = jacobian.block<3, 3>(O_R, O_BG);

        Eigen::Matrix3d dv_dba = jacobian.block<3, 3>(O_V, O_BA);
        Eigen::Matrix3d dv_dbg = jacobian.block<3, 3>(O_V, O_BG);

        Eigen::Vector3d dba = Bai - linearized_ba;
        Eigen::Vector3d dbg = Bgi - linearized_bg;

        Eigen::Quaterniond corrected_delta_q = delta_q * Utility::deltaQ(dq_dbg * dbg);
        Eigen::Vector3d corrected_delta_v = delta_v + dv_dba * dba + dv_dbg * dbg;
        Eigen::Vector3d corrected_delta_p = delta_p + dp_dba * dba + dp_dbg * dbg;
        //compute the residual
        residuals.block<3, 1>(O_P, 0) = Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt) - corrected_delta_p;
        residuals.block<3, 1>(O_R, 0) = 2 * (corrected_delta_q.inverse() * (Qi.inverse() * Qj)).vec();
        residuals.block<3, 1>(O_V, 0) = Qi.inverse() * (G * sum_dt + Vj - Vi) - corrected_delta_v;
        residuals.block<3, 1>(O_BA, 0) = Baj - Bai;
        residuals.block<3, 1>(O_BG, 0) = Bgj - Bgi;
        return residuals;
    }

The mathematics behind this is extensive; see the derivations from the Shenlan (DeepBlue) Academy VIO course, excerpted in the figures below (special thanks to the three instructors, Dr. Gao, Dr. He, and Dr. Cui):
(figures: the preintegration error formulas and the derivatives of the preintegration error with respect to the biases)
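Transcribed into formulas, the residual assembled in evaluate() above is (a sketch matching the code term by term, with hatted quantities denoting the bias-corrected preintegrated measurements):

```latex
\begin{aligned}
\mathbf{r}_p &= R^{b_i}_{w}\!\left(\tfrac{1}{2}\,\mathbf{g}\,\Delta t^2 + \mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i\,\Delta t\right) - \hat{\boldsymbol{\alpha}}_{ij} \\
\mathbf{r}_q &= 2\left[\hat{\boldsymbol{\gamma}}_{ij}^{-1} \otimes \left(\mathbf{q}_i^{-1} \otimes \mathbf{q}_j\right)\right]_{xyz} \\
\mathbf{r}_v &= R^{b_i}_{w}\!\left(\mathbf{g}\,\Delta t + \mathbf{v}_j - \mathbf{v}_i\right) - \hat{\boldsymbol{\beta}}_{ij} \\
\mathbf{r}_{b_a} &= \mathbf{b}_{a_j} - \mathbf{b}_{a_i}, \qquad
\mathbf{r}_{b_g} = \mathbf{b}_{g_j} - \mathbf{b}_{g_i}
\end{aligned}
```

where the corrections use the first-order bias Jacobians held in the preintegration object:

```latex
\hat{\boldsymbol{\alpha}}_{ij} = \boldsymbol{\alpha}_{ij} + J^{\alpha}_{b_a}\,\delta\mathbf{b}_a + J^{\alpha}_{b_g}\,\delta\mathbf{b}_g, \qquad
\hat{\boldsymbol{\beta}}_{ij} = \boldsymbol{\beta}_{ij} + J^{\beta}_{b_a}\,\delta\mathbf{b}_a + J^{\beta}_{b_g}\,\delta\mathbf{b}_g, \qquad
\hat{\boldsymbol{\gamma}}_{ij} = \boldsymbol{\gamma}_{ij} \otimes
\begin{bmatrix} 1 \\ \tfrac{1}{2} J^{\gamma}_{b_g}\,\delta\mathbf{b}_g \end{bmatrix}
```

Here $\boldsymbol{\alpha}_{ij}$, $\boldsymbol{\beta}_{ij}$, $\boldsymbol{\gamma}_{ij}$ correspond to delta_p, delta_v, delta_q and $\delta\mathbf{b}$ to dba, dbg in the code.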

4. Visual reprojection residual: source code and math

The code:

for (auto &it_per_id : f_manager.feature) // iterate over every feature point
    {
        it_per_id.used_num = it_per_id.feature_per_frame.size(); // only features observed in at least 4 frames build visual residuals
         if (it_per_id.used_num < 4)
            continue; 
        ++feature_index;
        int imu_i = it_per_id.start_frame, imu_j = imu_i - 1;        
        Vector3d pts_i = it_per_id.feature_per_frame[0].point; // the point's position on the normalized plane of its first observing frame
        for (auto &it_per_frame : it_per_id.feature_per_frame)
        {
            imu_j++;
            if (imu_i != imu_j) // add the visual reprojection residual tying the two IMU states
            {
                Vector3d pts_j = it_per_frame.point;
                ProjectionTwoFrameOneCamFactor *f_td = new ProjectionTwoFrameOneCamFactor(pts_i, pts_j, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocity,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                problem.AddResidualBlock(f_td, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index], para_Td[0]);
            }
            if(STEREO && it_per_frame.is_stereo)
            {                
                Vector3d pts_j_right = it_per_frame.pointRight;
                if(imu_i != imu_j)
                {
                    ProjectionTwoFrameTwoCamFactor *f = new ProjectionTwoFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                    problem.AddResidualBlock(f, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]);
                }
                else
                {
                    ProjectionOneFrameTwoCamFactor *f = new ProjectionOneFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                    problem.AddResidualBlock(f, loss_function, para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]);
                }              
            }
            f_m_cnt++;
        }
    }

Formally this follows the standard Ceres AddResidualBlock pattern. The residual block ProjectionTwoFrameOneCamFactor is defined as:

class ProjectionTwoFrameOneCamFactor : public ceres::SizedCostFunction<2, 7, 7, 7, 1, 1>
{
  public:
    ProjectionTwoFrameOneCamFactor(const Eigen::Vector3d &_pts_i, const Eigen::Vector3d &_pts_j,
    				   const Eigen::Vector2d &_velocity_i, const Eigen::Vector2d &_velocity_j,
    				   const double _td_i, const double _td_j);
    virtual bool Evaluate(double const *const *parameters, double *residuals, double **jacobians) const;
    void check(double **parameters);

    Eigen::Vector3d pts_i, pts_j;
    Eigen::Vector3d velocity_i, velocity_j;
    double td_i, td_j;
    Eigen::Matrix<double, 2, 3> tangent_base;
    static Eigen::Matrix2d sqrt_info;
    static double sum_t;
};


bool ProjectionTwoFrameOneCamFactor::Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
{
    TicToc tic_toc;
    Eigen::Vector3d Pi(parameters[0][0], parameters[0][1], parameters[0][2]);
    Eigen::Quaterniond Qi(parameters[0][6], parameters[0][3], parameters[0][4], parameters[0][5]);

    Eigen::Vector3d Pj(parameters[1][0], parameters[1][1], parameters[1][2]);
    Eigen::Quaterniond Qj(parameters[1][6], parameters[1][3], parameters[1][4], parameters[1][5]);

    Eigen::Vector3d tic(parameters[2][0], parameters[2][1], parameters[2][2]);
    Eigen::Quaterniond qic(parameters[2][6], parameters[2][3], parameters[2][4], parameters[2][5]);

    double inv_dep_i = parameters[3][0];  // inverse depth

    double td = parameters[4][0];

    Eigen::Vector3d pts_i_td, pts_j_td;
    pts_i_td = pts_i - (td - td_i) * velocity_i;
    pts_j_td = pts_j - (td - td_j) * velocity_j;
    Eigen::Vector3d pts_camera_i = pts_i_td / inv_dep_i;
    Eigen::Vector3d pts_imu_i = qic * pts_camera_i + tic;  // transform into the IMU frame of frame i
    Eigen::Vector3d pts_w = Qi * pts_imu_i + Pi;           // transform into the world frame
    Eigen::Vector3d pts_imu_j = Qj.inverse() * (pts_w - Pj);  // transform into the IMU frame of frame j
    Eigen::Vector3d pts_camera_j = qic.inverse() * (pts_imu_j - tic);  // transform into camera frame j (then its normalized plane)
    Eigen::Map<Eigen::Vector2d> residual(residuals);

#ifdef UNIT_SPHERE_ERROR 
    residual =  tangent_base * (pts_camera_j.normalized() - pts_j_td.normalized());  // residual on the tangent plane of the unit sphere
#else
    double dep_j = pts_camera_j.z();
    residual = (pts_camera_j / dep_j).head<2>() - pts_j_td.head<2>();
#endif
    residual = sqrt_info * residual;

    if (jacobians)
    {
        Eigen::Matrix3d Ri = Qi.toRotationMatrix();
        Eigen::Matrix3d Rj = Qj.toRotationMatrix();
        Eigen::Matrix3d ric = qic.toRotationMatrix();
        Eigen::Matrix<double, 2, 3> reduce(2, 3);
#ifdef UNIT_SPHERE_ERROR
        double norm = pts_camera_j.norm();
        Eigen::Matrix3d norm_jaco;
        double x1, x2, x3;
        x1 = pts_camera_j(0);
        x2 = pts_camera_j(1);
        x3 = pts_camera_j(2);
        norm_jaco << 1.0 / norm - x1 * x1 / pow(norm, 3), - x1 * x2 / pow(norm, 3),            - x1 * x3 / pow(norm, 3),
                     - x1 * x2 / pow(norm, 3),            1.0 / norm - x2 * x2 / pow(norm, 3), - x2 * x3 / pow(norm, 3),
                     - x1 * x3 / pow(norm, 3),            - x2 * x3 / pow(norm, 3),            1.0 / norm - x3 * x3 / pow(norm, 3);
        reduce = tangent_base * norm_jaco;
#else
        reduce << 1. / dep_j, 0, -pts_camera_j(0) / (dep_j * dep_j),
            0, 1. / dep_j, -pts_camera_j(1) / (dep_j * dep_j);
#endif
        reduce = sqrt_info * reduce;

        if (jacobians[0])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);

            Eigen::Matrix<double, 3, 6> jaco_i;
            jaco_i.leftCols<3>() = ric.transpose() * Rj.transpose();
            jaco_i.rightCols<3>() = ric.transpose() * Rj.transpose() * Ri * -Utility::skewSymmetric(pts_imu_i);

            jacobian_pose_i.leftCols<6>() = reduce * jaco_i;
            jacobian_pose_i.rightCols<1>().setZero();
        }
        if (jacobians[1])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[1]);
            Eigen::Matrix<double, 3, 6> jaco_j;
            jaco_j.leftCols<3>() = ric.transpose() * -Rj.transpose();
            jaco_j.rightCols<3>() = ric.transpose() * Utility::skewSymmetric(pts_imu_j);
            jacobian_pose_j.leftCols<6>() = reduce * jaco_j;
            jacobian_pose_j.rightCols<1>().setZero();
        }
        if (jacobians[2])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_ex_pose(jacobians[2]);
            Eigen::Matrix<double, 3, 6> jaco_ex;
            jaco_ex.leftCols<3>() = ric.transpose() * (Rj.transpose() * Ri - Eigen::Matrix3d::Identity());
            Eigen::Matrix3d tmp_r = ric.transpose() * Rj.transpose() * Ri * ric;
            jaco_ex.rightCols<3>() = -tmp_r * Utility::skewSymmetric(pts_camera_i) + Utility::skewSymmetric(tmp_r * pts_camera_i) +
                                     Utility::skewSymmetric(ric.transpose() * (Rj.transpose() * (Ri * tic + Pi - Pj) - tic));
            jacobian_ex_pose.leftCols<6>() = reduce * jaco_ex;
            jacobian_ex_pose.rightCols<1>().setZero();
        }
        if (jacobians[3])
        {
            Eigen::Map<Eigen::Vector2d> jacobian_feature(jacobians[3]);
            jacobian_feature = reduce * ric.transpose() * Rj.transpose() * Ri * ric * pts_i_td * -1.0 / (inv_dep_i * inv_dep_i);
        }
        if (jacobians[4])
        {
            Eigen::Map<Eigen::Vector2d> jacobian_td(jacobians[4]);
            jacobian_td = reduce * ric.transpose() * Rj.transpose() * Ri * ric * velocity_i / inv_dep_i * -1.0  +
                          sqrt_info * velocity_j.head(2);
        }
    }
    sum_t += tic_toc.toc();

    return true;
}

The mathematical derivation of the visual residual is shown in the figures below:
(figures: visual residual derivation, parts 1-8)

5. Solving

    ceres::Solver::Options options;

    options.linear_solver_type = ceres::DENSE_SCHUR;  // solve using the Schur complement
    //options.num_threads = 2;
    options.trust_region_strategy_type = ceres::DOGLEG; // use the dogleg trust-region method
    options.max_num_iterations = NUM_ITERATIONS;  // maximum number of solver iterations
    //options.use_explicit_schur_complement = true;
    //options.minimizer_progress_to_stdout = true;
    //options.use_nonmonotonic_steps = true;
    if (marginalization_flag == MARGIN_OLD)  // when the oldest frame will be marginalized, cap the solve time lower to leave time for marginalization
        options.max_solver_time_in_seconds = SOLVER_TIME * 4.0 / 5.0;
    else
        options.max_solver_time_in_seconds = SOLVER_TIME;
    TicToc t_solver;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

III. Summary

To summarize, the general steps for solving an optimization problem with Ceres:

  1. Declare the problem: ceres::Problem problem;
  2. If some parameters need non-default handling (e.g., held constant, or a non-default "addition"), add them as parameter blocks with problem.AddParameterBlock(); and if the update is not plain addition (e.g., quaternions compose by multiplication), also define a ceres::LocalParameterization to specify the generalized "plus" and the perturbation Jacobian.
  3. Add residual blocks with problem.AddResidualBlock(); the residual must be defined, and without automatic differentiation the Jacobian must be defined as well.
  4. Configure the solver options and solve.
  5. Mind the speed-up tricks; see the Ceres website for details.
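Underneath this workflow, the solver just linearizes and steps repeatedly. A hand-rolled toy in plain C++ (no Ceres; the 1-D problem r(x) = x^2 - 2 is a hypothetical example, not VINS code) makes the residual/Jacobian pattern concrete:

```cpp
// One-parameter Gauss-Newton (identical to Newton's method in 1-D):
// repeatedly linearize the residual r(x) = x^2 - 2 with Jacobian J = dr/dx
// and apply the update x <- x - r / J, converging to sqrt(2).
double gauss_newton_sqrt2(double x0, int iters) {
    double x = x0;
    for (int k = 0; k < iters; ++k) {
        double r = x * x - 2.0;  // residual
        double J = 2.0 * x;      // Jacobian
        x -= r / J;              // Gauss-Newton step
    }
    return x;
}
```

Ceres does the same thing with many residual blocks at once, a trust region around each step, and sparse linear algebra (DENSE_SCHUR above) to solve the normal equations.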

IV. The prior information left by marginalization, and a code walkthrough

In VINS, if the current frame has clear parallax with respect to the previous frame, the second-newest frame becomes a keyframe and the oldest frame is marginalized out, leaving prior measurement information for the next optimization. If the parallax is small, the second-newest frame is simply dropped along with its information (except its preintegration, which is merged into the next preintegration), and no prior residual is introduced.
VINS marginalizes via the Schur complement (for background, see articles on marginalization and the Schur complement in SLAM). The basic idea in VINS: using the feature points first observed in the frame being marginalized, together with the preintegration between the two oldest IMU frames, first build the linear system Ax=e following the usual optimization flow, then apply the Schur complement to obtain H x = b, recover from H and b the residual and Jacobian that Ceres needs, and finally add them to the next optimization problem. Let us look at how the VINS source does this.
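The Schur complement step itself is small. In the smallest possible setting, with scalar blocks (a sketch of the principle, not the VINS matrix code), eliminating the to-be-marginalized variable xm from [[Hmm, Hmr], [Hrm, Hrr]] [xm; xr] = [bm; br] leaves a reduced system that constrains xr exactly as the full system did:

```cpp
// Prior left on the remaining variable xr after eliminating xm:
//   H' = Hrr - Hrm * Hmm^{-1} * Hmr
//   b' = br  - Hrm * Hmm^{-1} * bm
// VINS turns this (H', b') back into a residual + Jacobian for Ceres.
struct Prior { double H, b; };

Prior schur_marginalize(double Hmm, double Hmr, double Hrm, double Hrr,
                        double bm, double br) {
    double Hmm_inv = 1.0 / Hmm;           // block inverse (scalar here)
    return { Hrr - Hrm * Hmm_inv * Hmr,   // reduced Hessian block
             br  - Hrm * Hmm_inv * bm };  // reduced right-hand side
}
```

Solving the reduced system gives the same xr as solving the full 2x2 system, which is exactly why marginalization preserves the eliminated states' information as a prior.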

  1. step1: add the prior information of the states kept after the last marginalization
       if (last_marginalization_info && last_marginalization_info->valid)
       {
           vector<int> drop_set;
           for (int i = 0; i < static_cast<int>(last_marginalization_parameter_blocks.size()); i++)
           {  // last_marginalization_parameter_blocks holds the addresses of all states in the window; each state splits into a para_Pose and a para_SpeedBias block, so its size is 2*WINDOW_SIZE
               if (last_marginalization_parameter_blocks[i] == para_Pose[0] ||
                   last_marginalization_parameter_blocks[i] == para_SpeedBias[0])
                   drop_set.push_back(i);  // select the oldest frame's state blocks, the ones to be marginalized
           }
           // construct new marginlization_factor
           MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
           ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(marginalization_factor, NULL,
                                                                          last_marginalization_parameter_blocks,
                                                                          drop_set);
           marginalization_info->addResidualBlockInfo(residual_block_info);
       }

This part is hard to follow. By analogy with the code that adds the prior later, one can infer that last_marginalization_info plays the role of the measurement in a residual (like the IMU preintegration in the IMU residual), and last_marginalization_parameter_blocks are the state parameters needed to compute the residual and Jacobian (in the IMU residual these would be the states of the two frames spanning the preintegration). Let us look at the definitions involved:
**last_marginalization_parameter_blocks:** trace where it comes from:

 std::unordered_map<long, double *> addr_shift;
        for (int i = 1; i <= WINDOW_SIZE; i++)
        {
            // reinterpret_cast converts the pointer into an integer key
            addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
            if(USE_IMU)
                addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
        }
        for (int i = 0; i < NUM_OF_CAM; i++)
            addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];

        addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];

        vector<double *> parameter_blocks = marginalization_info->getParameterBlocks(addr_shift);  // collect the addresses of the WINDOW_SIZE camera pose states to keep
last_marginalization_parameter_blocks = parameter_blocks;  // the state parameters kept after marginalization: camera poses, IMU states, Td, and the camera-IMU extrinsics
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
std::vector<double *> MarginalizationInfo::getParameterBlocks(std::unordered_map<long, double *> &addr_shift)
{
    std::vector<double *> keep_block_addr;
    keep_block_size.clear();
    keep_block_idx.clear();
    keep_block_data.clear();

    for (const auto &it : parameter_block_idx)
    {
        if (it.second >= m)
        {
            keep_block_size.push_back(parameter_block_size[it.first]);
            keep_block_idx.push_back(parameter_block_idx[it.first]);
            keep_block_data.push_back(parameter_block_data[it.first]);
            keep_block_addr.push_back(addr_shift[it.first]);  // addresses of the state parameters kept after marginalization
        }
    }
    sum_block_size = std::accumulate(std::begin(keep_block_size), std::end(keep_block_size), 0);

    return keep_block_addr;
}

So last_marginalization_parameter_blocks holds the addresses of the state variables kept after the last marginalization. How to understand this? addr_shift holds the addresses of the states kept after the current window slide: the poses of all frames except the oldest, the camera extrinsics, and the IMU-camera time offset (in the monocular+IMU case).

last_marginalization_info: this is the previous marginalization's information, so understanding its origin requires reading the whole marginalization part; let us trace it through the code.

MarginalizationFactor::MarginalizationFactor(MarginalizationInfo* _marginalization_info):marginalization_info(_marginalization_info)
{
    int cnt = 0;
    for (auto it : marginalization_info->keep_block_size) //
    {
        mutable_parameter_block_sizes()->push_back(it);
        cnt += it;
    }
    //printf("residual size: %d, %d\n", cnt, n);
    set_num_residuals(marginalization_info->n);// the residual dimension n is the (local) size of the state parameters kept after the last marginalization.
};

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
bool MarginalizationFactor::Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
{
    int n = marginalization_info->n;
    int m = marginalization_info->m;
    Eigen::VectorXd dx(n);
    for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
    {
        int size = marginalization_info->keep_block_size[i]; //
        int idx = marginalization_info->keep_block_idx[i] - m;
        Eigen::VectorXd x = Eigen::Map<const Eigen::VectorXd>(parameters[i], size); // parameters holds all the state variables
        Eigen::VectorXd x0 = Eigen::Map<const Eigen::VectorXd>(marginalization_info->keep_block_data[i], size);
        if (size != 7)  // speed and IMU biases: the residual is a plain subtraction
            dx.segment(idx, size) = x - x0;
        else  // pose: the first three entries are position and subtract directly; the quaternion rotation residual cannot be formed by subtraction
        {
            dx.segment<3>(idx + 0) = x.head<3>() - x0.head<3>();
            dx.segment<3>(idx + 3) = 2.0 * Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
            if (!((Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).w() >= 0))
            {
                dx.segment<3>(idx + 3) = 2.0 * -Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
            }
        }
    }
    Eigen::Map<Eigen::VectorXd>(residuals, n) = marginalization_info->linearized_residuals + marginalization_info->linearized_jacobians * dx;
    // linearized_residuals and linearized_jacobians were computed during the marginalization step that followed the previous optimization
    if (jacobians)
    {

        for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
        {
            if (jacobians[i])
            {
                int size = marginalization_info->keep_block_size[i], local_size = marginalization_info->localSize(size);
                int idx = marginalization_info->keep_block_idx[i] - m;
                Eigen::Map<Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>> jacobian(jacobians[i], n, size);
                jacobian.setZero();
                jacobian.leftCols(local_size) = marginalization_info->linearized_jacobians.middleCols(idx, local_size);
            }
        }
    }
    return true;
}
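The quaternion branch of dx above computes 2·vec(q0⁻¹ ⊗ q), with a sign flip keeping the scalar part non-negative. A small self-contained sketch of that rotation residual (Quat and rotationResidual are hypothetical names standing in for Eigen::Quaterniond and Utility::positify):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // unit quaternion

Quat conjugate(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

Quat multiply(const Quat& a, const Quat& b) {  // Hamilton product a ⊗ b
    return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
}

// Rotation residual as in the pose branch above: 2 * vec(q0^{-1} ⊗ q),
// sign-flipped so the scalar part of the error quaternion stays >= 0.
std::array<double, 3> rotationResidual(const Quat& q0, const Quat& q) {
    Quat dq = multiply(conjugate(q0), q);  // for unit quaternions, inverse == conjugate
    double s = (dq.w >= 0) ? 2.0 : -2.0;
    return {s * dq.x, s * dq.y, s * dq.z};
}
```

For a small relative rotation of angle θ, this residual approaches the rotation vector (its magnitude is 2·sin(θ/2) ≈ θ), which is why it can be treated as a 3-dimensional error.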

From the code above we can see that the residual computed by MarginalizationFactor is a residual on the state variables, and its Jacobian is the derivative of that residual with respect to those states (this Jacobian is not recomputed; it is the one propagated from the Schur complement). In summary, the state parameters P kept after the previous marginalization (the camera-IMU extrinsics, the time offset Td, and the retained camera poses T_cameras) survive into the current optimization, and their previous values serve as prior information. Adding this prior keeps these parameters from changing drastically during optimization, i.e. it has a smoothing effect. Both marginalizing the second-newest frame and marginalizing the oldest frame leave behind such a prior.
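The prior residual the factor evaluates, r(x) = r0 + J·(x − x0), can be sketched without Eigen. This is a minimal illustration of the first-order update around the linearization point, not the actual implementation; priorResidual is a hypothetical name:

```cpp
#include <vector>
#include <cassert>
#include <cmath>

// Prior residual used by MarginalizationFactor:
// r(x) = r0 + J * dx, where dx = x - x0, and r0 (linearized_residuals)
// and J (linearized_jacobians) were fixed at marginalization time.
std::vector<double> priorResidual(const std::vector<std::vector<double>>& J,
                                  const std::vector<double>& r0,
                                  const std::vector<double>& dx)
{
    std::vector<double> r = r0;
    for (size_t i = 0; i < J.size(); ++i)
        for (size_t j = 0; j < dx.size(); ++j)
            r[i] += J[i][j] * dx[j];  // first-order update around the linearization point
    return r;
}
```

Because J is frozen, the prior cost grows quadratically as the kept states drift away from their values at marginalization time, which is exactly the smoothing effect described above.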

  1. step2: marginalizing the oldest frame together with its preintegration leaves prior information on the second-oldest frame:
        if(USE_IMU)
        {
            if (pre_integrations[1]->sum_dt < 10.0)
            {
                IMUFactor* imu_factor = new IMUFactor(pre_integrations[1]);
                ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(imu_factor, NULL,
                                                                           vector<double *>{para_Pose[0], para_SpeedBias[0], para_Pose[1], para_SpeedBias[1]},
                                                                           vector<int>{0, 1});
                marginalization_info->addResidualBlockInfo(residual_block_info);  // this does not add new states to marginalize: para_Pose[0] and para_SpeedBias[0] were already added in the previous step
            }
        }

The residual here is defined exactly as in the optimization. It can be understood as follows: the prior equation has a Jacobian J and a residual b, where b is the difference between the two oldest poses P_old1, P_old2 under the current state and the preintegration, and J is the derivative of b with respect to P_old1, P_old2 at the current state. b has dimension 15 (position 3 + attitude 3 + velocity 3 + IMU biases 6; although the attitude is a quaternion, its residual is 3-dimensional), and J is a 15×32 matrix; the code computes J in two halves, for the i and j states.

  1. step3: denote the oldest frame to be marginalized as F_old, and let P_old be a feature point first observed in F_old. If P_old is also observed in other frames F_other, then marginalizing F_old and P_old leaves prior information on F_other. This prior is computed exactly like the visual residual and Jacobian in the optimization, only with the parameters already updated by the optimization.
// marginalize the feature points whose first observing frame is the one being removed; this mirrors adding visual observations to the ceres::Problem above
        {
            int feature_index = -1;
            for (auto &it_per_id : f_manager.feature)
            {
                it_per_id.used_num = it_per_id.feature_per_frame.size();
                if (it_per_id.used_num < 4)
                    continue;

                ++feature_index;

                int imu_i = it_per_id.start_frame, imu_j = imu_i - 1;
                if (imu_i != 0)  // only marginalize features whose first observing frame is frame 0
                    continue;

                Vector3d pts_i = it_per_id.feature_per_frame[0].point;  // position on the normalized plane in the first observing frame

                for (auto &it_per_frame : it_per_id.feature_per_frame)
                {
                    imu_j++;
                    if(imu_i != imu_j)
                    {
                        Vector3d pts_j = it_per_frame.point;
                        ProjectionTwoFrameOneCamFactor *f_td = new ProjectionTwoFrameOneCamFactor(pts_i, pts_j, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocity,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                        ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f_td, loss_function,
                                                                                        vector<double *>{para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index], para_Td[0]},
                                                                                        vector<int>{0, 3});
                        marginalization_info->addResidualBlockInfo(residual_block_info);
                    }
                    if(STEREO && it_per_frame.is_stereo)
                    {
                        Vector3d pts_j_right = it_per_frame.pointRight;
                        if(imu_i != imu_j)
                        {
                            ProjectionTwoFrameTwoCamFactor *f = new ProjectionTwoFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                            ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f, loss_function,
                                                                                           vector<double *>{para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]},
                                                                                           vector<int>{0, 4});
                            marginalization_info->addResidualBlockInfo(residual_block_info);
                        }
                        else
                        {
                            ProjectionOneFrameTwoCamFactor *f = new ProjectionOneFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                            ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f, loss_function,
                                                                                           vector<double *>{para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]},
                                                                                           vector<int>{2});
                            marginalization_info->addResidualBlockInfo(residual_block_info);
                        }
                    }
                }
            }
        }
  1. step4: pre-marginalization marginalization_info->preMarginalize(), i.e. call each residual block's Evaluate function to compute its residual and Jacobians
void MarginalizationInfo::preMarginalize()
{
    for (auto it : factors)
    {//factors contains the IMU preintegration block imu_factor, the visual reprojection blocks such as ProjectionTwoFrameOneCamFactor, and marginalization_factor
        it->Evaluate();  // compute this residual block's residual and Jacobians

        std::vector<int> block_sizes = it->cost_function->parameter_block_sizes();  // e.g. imu_factor involves the two consecutive camera poses
        for (int i = 0; i < static_cast<int>(block_sizes.size()); i++)
        {
            long addr = reinterpret_cast<long>(it->parameter_blocks[i]);
            int size = block_sizes[i];
            if (parameter_block_data.find(addr) == parameter_block_data.end())
            {//if this parameter block is not yet in parameter_block_data, store a deep copy of it
                double *data = new double[size];
                memcpy(data, it->parameter_blocks[i], sizeof(double) * size);
                parameter_block_data[addr] = data;  // addr is the raw address; data is a snapshot of the values stored there
            }
        }
    }
}
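The deep-copy bookkeeping above (snapshot each block once, keyed by the raw address of the live array, so later evaluation can linearize around the values the states had at marginalization time) can be sketched on its own. snapshotBlock is a hypothetical name; the map mirrors parameter_block_data:

```cpp
#include <unordered_map>
#include <cstring>
#include <cassert>

// Sketch of preMarginalize's snapshot logic: copy a parameter block on
// first encounter only, keyed by the raw address of the live array.
void snapshotBlock(std::unordered_map<long, double*>& store,
                   double* block, int size)
{
    long addr = reinterpret_cast<long>(block);
    if (store.find(addr) == store.end()) {      // copy only on first encounter
        double* data = new double[size];
        std::memcpy(data, block, sizeof(double) * size);
        store[addr] = data;
    }
}
```

The copy is essential: the live arrays keep changing in later optimizations, while the prior must be evaluated relative to the frozen snapshot (keep_block_data / x0 in the Evaluate code above).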
  1. step5: marginalization marginalization_info->marginalize(): build the Hx = b system used in optimization, apply the Schur complement, and decompose the result into the corresponding Jacobian and residual.
void MarginalizationInfo::marginalize()
{
    int pos = 0;
    for (auto &it : parameter_block_idx) // states to be marginalized: one pose plus a set of feature points
    {
        it.second = pos;
        pos += localSize(parameter_block_size[it.first]);
    }

    m = pos;
    for (const auto &it : parameter_block_size)  // states to be kept
    {
        if (parameter_block_idx.find(it.first) == parameter_block_idx.end())  // states not in the marginalized set
        {
            parameter_block_idx[it.first] = pos;
            pos += localSize(it.second);
        }
    }

    n = pos - m;
    if(m == 0)
    {
        valid = false;
        printf("unstable tracking...\n");
        return;
    }
    TicToc t_summing;
    Eigen::MatrixXd A(pos, pos);
    Eigen::VectorXd b(pos);
    A.setZero();
    b.setZero();
//multi thread
// build A and b; the optimization problem reduces to Ax = b with A = J^T * J and b = -J^T * e
    TicToc t_thread_summing;
    pthread_t tids[NUM_THREADS];
    ThreadsStruct threadsstruct[NUM_THREADS];
    int i = 0;
    for (auto it : factors)
    {
        threadsstruct[i].sub_factors.push_back(it);
        i++;
        i = i % NUM_THREADS;
    }
    for (int i = 0; i < NUM_THREADS; i++)
    {
        TicToc zero_matrix;
        threadsstruct[i].A = Eigen::MatrixXd::Zero(pos,pos);
        threadsstruct[i].b = Eigen::VectorXd::Zero(pos);
        threadsstruct[i].parameter_block_size = parameter_block_size;
        threadsstruct[i].parameter_block_idx = parameter_block_idx;
        int ret = pthread_create( &tids[i], NULL, ThreadsConstructA ,(void*)&(threadsstruct[i]));
        if (ret != 0)
        {
            ROS_WARN("pthread_create error");
            ROS_BREAK();
        }
    }
    for( int i = NUM_THREADS - 1; i >= 0; i--)  
    {
        pthread_join( tids[i], NULL ); 
        A += threadsstruct[i].A;
        b += threadsstruct[i].b;
    }

    //TODO
    Eigen::MatrixXd Amm = 0.5 * (A.block(0, 0, m, m) + A.block(0, 0, m, m).transpose());
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);

    Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd((saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() * saes.eigenvectors().transpose();
    Eigen::VectorXd bmm = b.segment(0, m);
    Eigen::MatrixXd Amr = A.block(0, m, m, n);
    Eigen::MatrixXd Arm = A.block(m, 0, n, m);
    Eigen::MatrixXd Arr = A.block(m, m, n, n);  // n x n sub-block starting at (m, m)
    Eigen::VectorXd brr = b.segment(m, n);
    A = Arr - Arm * Amm_inv * Amr;
    b = brr - Arm * Amm_inv * bmm;
    // decompose A and b into the Jacobian and residual that ceres needs for the least-squares problem
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(A);
    Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
    Eigen::VectorXd S_inv = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));
    Eigen::VectorXd S_sqrt = S.cwiseSqrt();
    Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
    linearized_jacobians = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
    linearized_residuals = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose() * b;
}
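The core Schur-complement step above (A' = Arr − Arm·Amm⁻¹·Amr, b' = brr − Arm·Amm⁻¹·bmm) is easiest to verify on a tiny system. Below is a minimal sketch with scalar blocks (one marginalized variable, one kept variable; schurSolveKept is a hypothetical name): the reduced 1×1 system yields the same solution for the kept variable as solving the full 2×2 system.

```cpp
#include <cassert>
#include <cmath>

// Schur complement for a 2x2 system H x = b, marginalizing the first
// variable (the "m" block):
//   A' = Arr - Arm * Amm^{-1} * Amr
//   b' = brr - Arm * Amm^{-1} * bmm
// Returns the solution of the kept (remaining) variable.
double schurSolveKept(double Amm, double Amr, double Arm, double Arr,
                      double bmm, double brr)
{
    double Aprime = Arr - Arm * (1.0 / Amm) * Amr;
    double bprime = brr - Arm * (1.0 / Amm) * bmm;
    return bprime / Aprime;
}
```

For H = [[4, 1], [1, 3]], b = [1, 2], the full solve gives x = [1/11, 7/11], and the reduced system gives the same 7/11 for the kept variable, which is exactly why marginalization preserves the information the removed states carried about the kept ones.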
  1. step6: add this prior information to the next optimization step
if (last_marginalization_info && last_marginalization_info->valid)
    {
        // construct new marginlization_factor
        MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
        problem.AddResidualBlock(marginalization_factor, NULL,
                                 last_marginalization_parameter_blocks);  // the states kept after the previous marginalization
    }

A careful reader will notice that this prior is defined consistently with the residual blocks above: each of the latter defines a single sub-residual block, while the former is the combined prior formed from all of them together.

Schur complement derivation

Thanks again to the VIO lecturer at ShenLan Academy (深藍學院); the slides are pasted here directly:
[figures: eight slides on the marginalization derivation]

The sliding-window algorithm as used in VINS is shown below. Because of how ceres solves the least-squares problem, the code folds the information matrix into the Jacobian and residual, so there is no separate step that computes the information matrix:
[figures: four slides on the sliding window]

V. Timing analysis

| Stage | Average time (ms) |
| --- | --- |
| Front end | 18.8589 |
| Initialization | 1116.17 |
| Back-end optimization part | 17.6578 |
| marginalization() | 19.1007 |
| Back-end ceres solve | 24.2203 |
| Whole optimization() | 45.8803 |
| Entire back end | 47.725 |

Notes:
- Figures are rounded, so total_time may come out slightly smaller than the sum of its parts.
- Test setup: Dell G7-7588; Intel i7-8750H @ 2.2 GHz; ubuntu6.04.
- Dataset: MH_01 from the EuRoC ROS bags recommended on the vins-fusion GitHub page, nearly 3700 images in total.
- The camera intrinsics and camera-IMU extrinsics are known and were not estimated during initialization. The back end ran 1831 times while the front end processed 3682 frames, showing that when the frame rate exceeds the back-end processing speed, image information is dropped.

VI. Summary of C++ knowledge points

This codebase makes heavy use of C++ pointer and reference programming, so reading it is also a good opportunity to practice both.
Work comes in waves, so I am publishing this first; the post will be updated continuously. If you spot errors, corrections are welcome.
