VINS Optimization
IMU Optimization
Known quantities:
$$
\begin{cases}
R_w^{b_k} p_{b_{k+1}}^w = R_w^{b_k} \left(p_{b_k}^w + v_{b_k}^w {\Delta t}_k - \frac{1}{2}g^w {\Delta t}_k^2\right) + {\alpha}_{b_{k+1}}^{b_k} \\
R_w^{b_k} v_{b_{k+1}}^w = R_w^{b_k} \left(v_{b_k}^w - g^w {\Delta t}_k\right) + {\beta}_{b_{k+1}}^{b_k} \\
q_w^{b_k} \otimes q_{b_{k+1}}^w = {\gamma}_{b_{k+1}}^{b_k}
\end{cases} \tag{1}
$$
together with the Jacobian and covariance matrices (for the detailed computation, see "VINS 預積分 雅各比矩陣和協方差矩陣推導過程", i.e. the derivation of the pre-integration Jacobian and covariance matrices):
$$
\begin{cases}
J_{k} =
\begin{bmatrix}
J^{\alpha} \\
J^{\beta} \\
J^{\gamma} \\
J^{b_a} \\
J^{b_g}
\end{bmatrix} =
\begin{bmatrix}
I & f_{01} & \delta t & -\frac{1}{4}\left(q_k + q_{k+1}\right){\delta t}^2 & f_{04} \\
0 & I - \left[\frac{\omega_{k+1} + \omega_k}{2} - b_{w_k}\right]_{\times}\delta t & 0 & 0 & -\delta t \\
0 & f_{21} & I & -\frac{1}{2}\left(q_k + q_{k+1}\right)\delta t & f_{24} \\
0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & I
\end{bmatrix} \\
P_{b_k}
\end{cases}
$$
The first-order approximation with respect to the biases can be written as:
$$
\begin{cases}
{\alpha}_{b_{k+1}}^{b_k} \approx {\hat{\alpha}}_{b_{k+1}}^{b_k} + J_{b_a}^{\alpha} \delta b_{a_k} + J_{b_{\omega}}^{\alpha} \delta b_{{\omega}_k} \\
{\beta}_{b_{k+1}}^{b_k} \approx {\hat{\beta}}_{b_{k+1}}^{b_k} + J_{b_a}^{\beta} \delta b_{a_k} + J_{b_{\omega}}^{\beta} \delta b_{{\omega}_k} \\
{\gamma}_{b_{k+1}}^{b_k} \approx {\hat{\gamma}}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix}
1 \\
\frac{1}{2}J_{b_{\omega}}^{\gamma} \delta b_{{\omega}_k}
\end{bmatrix}
\end{cases} \tag{2}
$$
From equation $\left(1\right)$ we can likewise obtain ${\alpha}_{b_{k+1}}^{b_k}, \ {\beta}_{b_{k+1}}^{b_k}, \ {\gamma}_{b_{k+1}}^{b_k}$:
$$
\begin{cases}
{\alpha}_{b_{k+1}}^{b_k} = R_{w}^{b_k}\left(p_{b_{k+1}}^w - p_{b_k}^w + \frac{1}{2}g^w{{\Delta t}_k}^2 - v_{b_k}^w {\Delta t}_k\right) \\
{\beta}_{b_{k+1}}^{b_k} = R_w^{b_k}\left(v_{b_{k+1}}^w + g^w \Delta t_k - v_{b_k}^w\right) \\
{\gamma}_{b_{k+1}}^{b_k} = {q_{b_k}^w}^{-1} \otimes q_{b_{k+1}}^w
\end{cases} \tag{3}
$$
The residual is computed as $\left(3\right) - \left(2\right)$:
$$
residual_{15 \times 1} = \begin{bmatrix}
{\delta {\alpha}}_{b_{k+1}}^{b_k} \\
{\delta {\theta}}_{b_{k+1}}^{b_k} \\
{\delta {\beta}}_{b_{k+1}}^{b_k} \\
\delta b_a \\
\delta b_g
\end{bmatrix} =
\begin{bmatrix}
R_{w}^{b_k}\left(p_{b_{k+1}}^w - p_{b_k}^w + \frac{1}{2}g^w{{\Delta t}_k}^2 - v_{b_k}^w {\Delta t}_k\right) - \left({\hat{\alpha}}_{b_{k+1}}^{b_k} + J_{b_a}^{\alpha} \delta b_{a_k} + J_{b_{\omega}}^{\alpha} \delta b_{{\omega}_k}\right) \\
2\left[\left({\hat{\gamma}}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix}
1 \\
\frac{1}{2}J_{b_{\omega}}^{\gamma} \delta b_{{\omega}_k}
\end{bmatrix}\right)^{-1} \otimes \left({q_{b_k}^w}^{-1} \otimes q_{b_{k+1}}^w\right)\right]_{xyz} \\
R_w^{b_k}\left(v_{b_{k+1}}^w + g^w \Delta t_k - v_{b_k}^w\right) - \left({\hat{\beta}}_{b_{k+1}}^{b_k} + J_{b_a}^{\beta} \delta b_{a_k} + J_{b_{\omega}}^{\beta} \delta b_{{\omega}_k}\right) \\
{b_a}_{b_{k+1}} - {b_a}_{b_k} \\
{b_{\omega}}_{b_{k+1}} - {b_{\omega}}_{b_k}
\end{bmatrix}
$$
From the $residual$ formula we compute the corresponding Jacobians.
Note: $\dot{R} = \phi^{\wedge}R$, and

$$
\left[q\right]_L = q_w I + \begin{bmatrix}
0 & -{q_v}^T \\
q_v & \left[q_v\right]_{\times}
\end{bmatrix}, \quad
\left[q\right]_R = q_w I + \begin{bmatrix}
0 & -{q_v}^T \\
q_v & -\left[q_v\right]_{\times}
\end{bmatrix}
$$
Jacobian with respect to the pose of frame $i$ (with $j = i + 1$):
$$
J_{pose_i} = \begin{bmatrix}
-{R_i}^{-1} & \left[R_i^{-1} \left(0.5G {\Delta t}^2 + P_j - P_i - V_i \Delta t\right)\right]_{\times} & 0_{3 \times 1} \\
0_{3 \times 3} & \left[-\left[{Q_j}^{-1} Q_i\right]_L \left[\delta q\right]_R\right]_{{br}_{3 \times 3}} & 0_{3 \times 1} \\
0_{3 \times 3} & \left[{Q_i}^{-1} \left(G \Delta t + V_j - V_i\right)\right]_{\times} & 0_{3 \times 1} \\
0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 1} \\
0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 1}
\end{bmatrix}
$$
where:

$$\delta b_{\omega} = B_{\omega_i} - B_{\omega_j}$$

$$\delta q = \hat{\gamma}_j^i \otimes \begin{bmatrix}
1 \\
\frac{1}{2}J_{b_{\omega}}^{\gamma} \delta b_{\omega}
\end{bmatrix}$$
Jacobian with respect to $v, \ b_a, \ b_{\omega}$ of frame $i$ (with $j = i + 1$):
$$
J_{speed_i} = \begin{bmatrix}
-{R_i}^{-1} \Delta t & -J_{b_a}^{\alpha} & -J_{b_{\omega}}^{\alpha} \\
0 & 0 & \left[-\left[{Q_j}^{-1}Q_i \ \delta q\right]_L\right]_{br_{3 \times 3}}J_{b_{\omega}}^{\gamma} \\
-{R_i}^{-1} & -J_{b_a}^{\beta} & -J_{b_{\omega}}^{\beta} \\
0 & -I & 0 \\
0 & 0 & -I
\end{bmatrix}
$$

(The rotation row depends on the gyroscope bias through $J_{b_{\omega}}^{\gamma}$, and the velocity row carries a minus sign, matching the derivatives of the $residual$ rows above.)
Jacobian with respect to the pose of frame $j$ (with $j = i + 1$):
$$
J_{pose_j} = \begin{bmatrix}
{R_i}^{-1} & 0_{3 \times 3} & 0_{3 \times 1} \\
0_{3 \times 3} & \left[\left[{\delta q}^{-1} {Q_i}^{-1} Q_j\right]_L\right]_{br_{3 \times 3}} & 0_{3 \times 1} \\
0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 1} \\
0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 1} \\
0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 1}
\end{bmatrix}
$$
Jacobian with respect to $v, \ b_a, \ b_{\omega}$ of frame $j$ (with $j = i + 1$):
$$
J_{speed_j} = \begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
{R_i}^{-1} & 0 & 0 \\
0 & I & 0 \\
0 & 0 & I
\end{bmatrix}
$$
Optimize via least squares, incorporating the residual:

$$J^T P^{-1} J \, X = J^T P^{-1} \, residual_{15 \times 1}$$

Due to limitations of the ceres library, only $J$ and $residual_{15 \times 1}$ can be supplied to the optimizer. To make use of the covariance, we therefore redefine the Jacobian and residual as $J'$ and ${residual_{15 \times 1}}'$:

$$J' = \sqrt{P^{-1}} J, \quad {residual_{15 \times 1}}' = \sqrt{P^{-1}} \, residual_{15 \times 1}$$

The least-squares formulation then recovers the weighted system:

$$J'^T J' = J^T P^{-1} J, \quad J'^T \, {residual_{15 \times 1}}' = J^T P^{-1} \, residual_{15 \times 1}$$

where:

$\sqrt{P^{-1}} = {\left({P_j}^{-1}\right)_{matrixL}}^T$, i.e. the transpose of the lower-triangular Cholesky factor, obtained by calling Eigen::LLT.
Feature Point Reprojection Optimization
Known quantities:
Matched point pairs. $\left(u, v\right)$ are pixel coordinates; applying the camera intrinsics gives the coordinates $\left(\frac{u - PX}{FOCUS}, \frac{v - PY}{FOCUS}\right)$. (The matched point pairs themselves are not optimized.)
The poses of the two frames (optimized through the reprojection error).
The transform from the IMU frame to the camera frame (fixed; not optimized).
The inverse depth of the matched points (optimized through the reprojection error).
That is:
$$
\begin{cases}
pts_i = \left(x_i, y_i, 1\right), \ \ pts_j = \left(x_j, y_j, 1 \right) \\
Q_i, P_i, Q_j, P_j \\
Q_{ic}, T_{ic} \\
Dep_i
\end{cases}
$$
From $pts_i, \ Q_i, \ P_i, \ Q_j, \ P_j, \ Dep_i$ we compute ${pts_j}'$:

$$
{pts_j}' = {Q_{ic}}^{-1}\left({Q_j}^{-1}\left(Q_i\left(Q_{ic}\frac{pts_i}{Dep_i} + T_{ic}\right) + P_i - P_j\right) - T_{ic}\right)
$$
Let:

$$pts\_camera_i = \frac{pts_i}{Dep_i}$$

$$ptsimu_i = Q_{ic} \, pts\_camera_i + T_{ic}$$

$$ptsimu_j = {Q_j}^{-1}\left(Q_i \, ptsimu_i + P_i - P_j\right)$$

Then normalize ${pts_j}'$ to the $z = 1$ plane: ${pts_j}' = \frac{{pts_j}'}{{pts_j}'.z}$

Construct the optimization objective:

$$\min \left\| {pts_j}'_{xy} - {pts_j}_{xy} \right\|^2$$

The residual is:

$$residual_{2 \times 1} = \frac{FOCUS}{1.5} I_{2 \times 2} \left({pts_j}'_{xy} - {pts_j}_{xy}\right)$$

Note: the factor $\frac{FOCUS}{1.5} I_{2 \times 2}$ converts the residual into pixel units.
For the least-squares optimization we need the Jacobians with respect to:
the pose of frame $i$;
the pose of frame $j$;
$Q_{ic}, T_{ic}$ (required even when they are not being optimized);
the feature depth.
First take the derivative of the normalization ${pts_j}' = \frac{{pts_j}'}{{pts_j}'.z}$ with respect to the components of the 3-D point ${pts_j}$:
$$
reduce_{2 \times 3} = \frac{FOCUS}{1.5} I_{2 \times 2}
\begin{bmatrix}
\frac{1}{{pts_j}_z} & 0 & -\frac{{pts_j}_x}{{pts_j}_z^2} \\
0 & \frac{1}{{pts_j}_z} & -\frac{{pts_j}_y}{{pts_j}_z^2}
\end{bmatrix}
$$
Jacobian with respect to the pose of frame $i$:

$$
{jaco_i}_{2 \times 7} =
\begin{bmatrix}
reduce_{2 \times 3}
\begin{bmatrix}
{R_{ic}}^{-1}{R_j}^{-1} & -{R_{ic}}^{-1}{R_j}^{-1}R_i\left[ptsimu_i\right]_{\times}
\end{bmatrix} & 0_{2 \times 1}
\end{bmatrix}
$$
Jacobian with respect to the pose of frame $j$:

$$
{jaco_j}_{2 \times 7} =
\begin{bmatrix}
reduce_{2 \times 3}
\begin{bmatrix}
-{R_{ic}}^{-1}{R_j}^{-1} & {R_{ic}}^{-1}\left[ptsimu_j\right]_{\times}
\end{bmatrix} & 0_{2 \times 1}
\end{bmatrix}
$$
Jacobian with respect to $Q_{ic}, T_{ic}$:
Let:

$$tmp_r = {R_{ic}}^{-1} {R_j}^{-1} R_i R_{ic}$$

$$
{jaco_{ic}}_{2 \times 7} =
\begin{bmatrix}
reduce_{2 \times 3}
\begin{bmatrix}
{R_{ic}}^{-1} \left({R_j}^{-1} R_i - I_{3 \times 3}\right) & -tmp_r \left[pts\_camera_i\right]_{\times} + \left[tmp_r \, pts\_camera_i \right]_{\times} + \left[{R_{ic}}^{-1}\left({R_j}^{-1}\left(R_i T_{ic} + P_i - P_j \right) - T_{ic}\right)\right]_{\times}
\end{bmatrix} & 0_{2 \times 1}
\end{bmatrix}
$$
Jacobian with respect to the feature depth:

$$
{jaco_{feature}}_{2 \times 1} = -\frac{reduce_{2 \times 3} \, {R_{ic}}^{-1}{R_j}^{-1}R_i R_{ic} \, pts_i}{Dep_i^2}
$$
With the residual and Jacobians derived above, the ceres library can run a least-squares optimization over the poses and feature information. During the optimization the loss function ceres::CauchyLoss(1.0) is used, defined as
$$
\rho\left(x\right) =
\begin{cases}
\log \left(x + 1\right) \\
\frac{1}{x + 1} \\
-\frac{1}{\left(x + 1\right)^2}
\end{cases}
$$
i.e., $\log\left(x + 1\right)$ together with its first and second derivatives.
The final objective is $\min \rho \left(\left\| {pts_j}'_{xy} - {pts_j}_{xy}\right\|^2\right)$.
Marginalization Optimization
Reprojection optimization within the marginalization
Compared with the plain reprojection optimization, a weighting coefficient is added.
Given the $residual$ and $J$ already computed by the reprojection optimization, within the marginalization we set $sq\_norm = \left\| {pts_j}'_{xy} - {pts_j}_{xy}\right\|^2_2$ and then $residual = residual \cdot \frac{1}{\sqrt{sq\_norm} + 1}$, $J = \frac{1}{\sqrt{sq\_norm} + 1} \cdot J$.
This mainly serves to down-weight points whose reprojection error is very large, since we consider such points unreliable; it limits the influence of bad points on the optimization to some extent.
The optimization information added during marginalization consists of:
1. the IMU factor between frame 0 and frame 1 of the sliding window;
2. the reprojection factors involving frame 0 of the sliding window.
Marginalizing out the information related to frame 0 of the sliding window requires the Schur complement. Before applying it, we build the matrix containing the frame-0-related blocks. Since the optimization is least squares, the matrix is constructed as:
$A = J^T J, \ \ b = J^T r$, where $r$ denotes the residual vector, giving $AX = b$.
Code:
void* ThreadsConstructA(void* threadsstruct)
{
ThreadsStruct* p = ((ThreadsStruct*)threadsstruct);
for (auto it : p->sub_factors)
{
for (int i = 0; i < static_cast<int>(it->parameter_blocks.size()); i++)
{
int idx_i = p->parameter_block_idx[reinterpret_cast<long>(it->parameter_blocks[i])];
int size_i = p->parameter_block_size[reinterpret_cast<long>(it->parameter_blocks[i])];
if (size_i == 7)
size_i = 6;
Eigen::MatrixXd jacobian_i = it->jacobians[i].leftCols(size_i);
for (int j = i; j < static_cast<int>(it->parameter_blocks.size()); j++)
{
int idx_j = p->parameter_block_idx[reinterpret_cast<long>(it->parameter_blocks[j])];
int size_j = p->parameter_block_size[reinterpret_cast<long>(it->parameter_blocks[j])];
if (size_j == 7)
size_j = 6;
Eigen::MatrixXd jacobian_j = it->jacobians[j].leftCols(size_j);
if (i == j)
p->A.block(idx_i, idx_j, size_i, size_j) += jacobian_i.transpose() * jacobian_j;
else
{
p->A.block(idx_i, idx_j, size_i, size_j) += jacobian_i.transpose() * jacobian_j;
p->A.block(idx_j, idx_i, size_j, size_i) = p->A.block(idx_i, idx_j, size_i, size_j).transpose();
}
}
p->b.segment(idx_i, size_i) += jacobian_i.transpose() * it->residuals;
}
}
return threadsstruct;
}
The information above forms a matrix, which we split into four blocks:
The $A$ matrix:
$$
A = \begin{bmatrix}
A_{mm} & A_{mr} \\
A_{rm} & A_{rr}
\end{bmatrix}
$$
$A_{mm}$ contains the information related to the pose, speed, and feature points of frame 0 of the sliding window.
$A_{rr}$ contains the speed information of frame 1 and the pose information of frames 1 through 10, as well as the IMU-to-camera transform $Angle_{ic}, T_{ic}$.
Rebuild the matrix $A$ via the Schur complement:

$$A = A_{rr} - A_{rm} \, {A_{mm}}_{inv} \, A_{mr}$$

Code:
Eigen::MatrixXd Amm = 0.5 * (A.block(0, 0, m, m) + A.block(0, 0, m, m).transpose());
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);
Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd((saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() * saes.eigenvectors().transpose();
// Schur complement: marginalize out the eliminated states
Eigen::MatrixXd Amr = A.block(0, m, m, n);
Eigen::MatrixXd Arm = A.block(m, 0, n, m);
Eigen::MatrixXd Arr = A.block(m, m, n, n);
A = Arr - Arm * Amm_inv * Amr;
The $b$ vector:

$$
b = \begin{bmatrix}
b_{mm} \\
b_{rr}
\end{bmatrix}
$$

Rebuild the vector $b$ via the Schur complement:

$$b = b_{rr} - A_{rm} \, {A_{mm}}_{inv} \, b_{mm}$$

Code:
Eigen::VectorXd bmm = b.segment(0, m);
Eigen::VectorXd brr = b.segment(m, n);
b = brr - Arm * Amm_inv * bmm;
In this way the constraints involving frame 0 are preserved in the matrices $A, \ b$ rebuilt through the Schur complement.
Since $A, \ b$ now have the form $A = J^T J, \ \ b = J^T r$, and the ceres library only accepts least-squares problems, we need to factor a Jacobian-like matrix back out of $A$: $linearized\_jacobians = \sqrt{A}$ and $linearized\_residuals = \frac{1}{\sqrt{A}} b$, where $linearized\_jacobians, \ linearized\_residuals$ are the matrices used during the optimization.
$linearized\_jacobians, \ linearized\_residuals$ now encode the constraints of frames 1 through 10 conditioned on frame 0, i.e. $linearized\_jacobians_{[1-10] | 0}, \ linearized\_residuals_{[1-10] | 0}$.
Code:
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(A);
Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
Eigen::VectorXd S_inv = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));
Eigen::VectorXd S_sqrt = S.cwiseSqrt();
Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
linearized_jacobians = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
linearized_residuals = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose() * b;
This completes the information needed for the marginalization. After it is built, frame 0 is removed from the sliding window and the optimization proceeds. The states constrained by the marginalization factor are:
the poses of frames 1 through 10, i.e. ${Pose_{RT}}_{[1-10]}$.
Known quantities:
keep_Pose: the poses of frames 1 through 10 and the speed of frame 1 (velocity $v$, gyroscope bias bgs, accelerometer bias bas). (Not optimized.)
Pose: the poses of frames 1 through 10 in the sliding window and the speed of frame 1. (Optimized.)
The optimization is least squares, with $linearized\_jacobians, \ linearized\_residuals$ as the Jacobian and residual matrices.
During the optimization they are updated as:

$$\delta x = Pose - keep\_Pose$$

$$linearized\_residuals = linearized\_residuals + linearized\_jacobians \cdot \delta x$$

$$linearized\_jacobians = linearized\_jacobians \quad \text{(unchanged)}$$

Code:
bool MarginalizationFactor::Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
{
int n = marginalization_info->n;
int m = marginalization_info->m;
Eigen::VectorXd dx(n);
for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
{
int size = marginalization_info->keep_block_size[i];
int idx = marginalization_info->keep_block_idx[i] - m;
Eigen::VectorXd x = Eigen::Map<const Eigen::VectorXd>(parameters[i], size);
Eigen::VectorXd x0 = Eigen::Map<const Eigen::VectorXd>(marginalization_info->keep_block_data[i], size);
if (size != 7)
dx.segment(idx, size) = x - x0;
else
{
// \delta t
dx.segment<3>(idx + 0) = x.head<3>() - x0.head<3>();
// \delta \theta
dx.segment<3>(idx + 3) = 2.0 * Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
if (!((Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).w() >= 0))
{
dx.segment<3>(idx + 3) = 2.0 * -Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
}
}
}
Eigen::Map<Eigen::VectorXd>(residuals, n) = marginalization_info->linearized_residuals + marginalization_info->linearized_jacobians * dx;
if (jacobians)
{
for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
{
if (jacobians[i])
{
int size = marginalization_info->keep_block_size[i], local_size = marginalization_info->localSize(size);
int idx = marginalization_info->keep_block_idx[i] - m;
Eigen::Map<Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>> jacobian(jacobians[i], n, size);
jacobian.setZero();
jacobian.leftCols(local_size) = marginalization_info->linearized_jacobians.middleCols(idx, local_size);
}
}
}
return true;
}
After a new frame is added, $linearized\_jacobians, \ linearized\_residuals$ are built again in the same way.
However, keep_Pose is derived from the previous one: $keep\_Pose_{[2-11]} = keep\_Pose_{[2-10]} + keep\_Pose_{[11]}$, i.e. the first 9 poses still come from the previous marginalization, and the pose of the new frame is then appended.
Pose uses the optimized poses in the sliding window.
The flow is shown in the figure below:
Loop Closure Optimization
Via bag-of-words, a keyframe in the current sliding window has been matched to an earlier keyframe; the two are denoted cur_KeyFrame and old_KeyFrame.
Record the relevant information, including:
the pose and keyframe index of old_KeyFrame;
the matched point pairs;
the keyframe index and timestamp of cur_KeyFrame, which is marked as a loop.
Optimization
Through reprojection, compute the change that moves the cur_KeyFrame pose onto the old_KeyFrame pose.
Thanks to optical-flow tracking, we have the point pairs in the sliding window that match cur_KeyFrame. We treat the cur_KeyFrame pose as the old_KeyFrame pose and replace the coordinates of cur_KeyFrame's matched points with the corresponding old_KeyFrame points, then compute $\Delta R, \ \Delta T$ by reprojection; that is: ${window\_Frames}_{Pose}, \ \ Points_{window\_Frames}, \ \ loop\_Pose = {cur\_KeyFrame}_{pose}, \ \ Points_{old\_KeyFrame}$.
${window\_Frames}_{Pose}$ also contains the pose of the cur frame, but its role in the optimization differs. The optimization variables are $loop\_Pose$, ${window\_Frames}_{Pose}$, and the depths of the matched points.
From the optimized ${loop\_Pose}'$ and ${{{window\_Frames}_{Pose}}_{cur}}'$ we compute $relative\_t, \ \ relative\_q, \ \ relative\_yaw$; these three quantities are used later in the four-degree-of-freedom global optimization:
$$
\begin{cases}
relative\_t = {Rs\_loop}^{-1} \left(Ps_{cur} - Ps\_loop \right) \\
relative\_q = {Rs\_loop}^{-1} Rs_{cur} \\
relative\_yaw = Rs_{cur}.Angle.z - Rs\_loop.Angle.z
\end{cases}
$$
where $Rs\_loop, \ Ps\_loop$ come from ${loop\_Pose}'$
and $Rs_{cur}, \ Ps_{cur}$ come from ${{{window\_Frames}_{Pose}}_{cur}}'$.
Code:
for (int i = 0; i< WINDOW_SIZE; i++)
{
if (front_pose.header == Headers[i])
{
Matrix3d Rs_i = Quaterniond(para_Pose[i][6], para_Pose[i][3], para_Pose[i][4], para_Pose[i][5]).normalized().toRotationMatrix();
Vector3d Ps_i = Vector3d(para_Pose[i][0], para_Pose[i][1], para_Pose[i][2]);
Matrix3d Rs_loop = Quaterniond(front_pose.loop_pose[6], front_pose.loop_pose[3], front_pose.loop_pose[4], front_pose.loop_pose[5]).normalized().toRotationMatrix();
Vector3d Ps_loop = Vector3d(front_pose.loop_pose[0], front_pose.loop_pose[1], front_pose.loop_pose[2]);
// relative_t, relative_q: the delta pose
// relative_yaw: the rotation about the z axis
front_pose.relative_t = Rs_loop.transpose() * (Ps_i - Ps_loop);
front_pose.relative_q = Rs_loop.transpose() * Rs_i;
front_pose.relative_yaw = Utility::normalizeAngle(Utility::R2ypr(Rs_i).x() - Utility::R2ypr(Rs_loop).x());
}
}
Updating the Poses After Optimization
Known quantities:
the sliding-window poses before optimization: $Rs \ \ Ps \ \ SpeedBias$;
the optimized poses: ${para\_Pose}_Q \ \ {para\_Pose}_T \ \ {para\_SpeedBias}_{v, bas, bgs}$;
the loop-optimized pose: $Rs\_loop \ \ Ps\_loop$;
the pose of the loop old_KeyFrame: $Q_{old} \ \ P_{old}$.
The four optimizations above run simultaneously, so their constraints are stacked together. The variables to be optimized are:
${para\_Pose}_Q, \ \ {para\_Pose}_T, \ \ para\_SpeedBias, \ \ Rs\_loop, \ \ Ps\_loop, \ \ Dep$: the sliding-window poses; the sliding-window velocities, accelerometer biases, and gyroscope biases; the loop pose; and the feature depths.
The stacked system is:
$$
\begin{bmatrix}
J_{Pose[0 - 10]} \\
J_{SpeedBias[0-10]} \\
J_{Dep[0 - 10]} \\
J_{Marginalization[1 - 10]} \\
J_{Loop[0-10]}
\end{bmatrix}
\begin{bmatrix}
{para\_Pose}_Q \\
{para\_Pose}_T \\
para\_SpeedBias \\
Rs\_loop \\
Ps\_loop \\
Dep
\end{bmatrix} =
\begin{bmatrix}
r_{Pose[0 - 10]} \\
r_{SpeedBias[0 - 10]} \\
r_{Dep[0 - 10]} \\
r_{Marginalization[1 - 10]} \\
r_{Loop[0 - 10]}
\end{bmatrix}
$$
In the formula above, matrix sizes are assumed to align automatically. This is where the marginalization takes effect: its Schur-complemented Jacobian constrains the optimization together with the Jacobians computed by the other factors. The problem is again solved by least squares, which finally takes the form:

$$\sum J^T J \, X = J^T r$$

The optimized poses could be used directly, but considering the subsequent four-degree-of-freedom global optimization (rotation about the z axis plus translation), we remove the z-axis rotation here. The removal must be applied consistently, so frame 0 of the sliding window is used as the reference.

$$\Delta zAngle = Rs_{[0]}.Angle.z - {para\_Pose}_Q.Angle.z$$

Compute the z-axis rotation of frame 0 and express it as a 3×3 matrix:

$$rot\_diff = \left(\Delta zAngle, 0, 0\right).Matrix_{3 \times 3}$$

$$origin\_P0 = Ps[0]$$

Remove the z-axis rotation from the optimized poses, using frame 0 as the reference:
R s = r o t _ d i f f ∗ p a r a _ P o s e R Rs = rot\_diff * {para\_Pose}_R R s = r o t _ d i f f ∗ p a r a _ P o s e R
P s = r o t _ d i f f ∗ ( p a r a _ P o s e T − p a r a _ P o s e [ 0 ] T ) + o r i g i n _ P 0 Ps = rot\_diff * \left({para\_Pose}_T - para\_Pose[0]_T\right) + origin\_P0 P s = r o t _ d i f f ∗ ( p a r a _ P o s e T − p a r a _ P o s e [ 0 ] T ) + o r i g i n _ P 0
S p e e d B i a s = r o t _ d i f f ∗ p a r a _ S p e e d B i a s SpeedBias = rot\_diff * para\_SpeedBias S p e e d B i a s = r o t _ d i f f ∗ p a r a _ S p e e d B i a s
The loop-closure pose must be corrected in the same way:
Rs\_loop = rot\_diff * Rs\_loop
Ps\_loop = rot\_diff * \left(Ps\_loop - para\_Pose[0]_T\right) + origin\_P0
Now, from the loop-closure information, compute the drift from the optimized pose (sliding-window optimization only) to the loop keyframe old_KeyFrame.
First the yaw (z-axis) drift:
drift\_yaw = Q_{old}.Angle.x - Rs\_loop.Angle.x
Convert it to a rotation matrix:
r\_drift = \left(drift\_yaw, 0, 0\right).Matrix_{3 \times 3}
Then the translation drift:
t\_drift = P_{old} - r\_drift * Ps\_loop
r\_drift and t\_drift are the drift quantities.
The depth values are used directly from the optimization result.
Code:
```cpp
void VINS::new2old()
{
    Vector3d origin_R0 = Utility::R2ypr(Rs[0]);
    Vector3d origin_P0 = Ps[0];
    if (failure_occur)
    {
        origin_R0 = Utility::R2ypr(last_R_old);
        origin_P0 = last_P_old;
    }
    // pose of frame 0 after optimization
    Vector3d origin_R00 = Utility::R2ypr(Quaterniond(para_Pose[0][6],
                                                     para_Pose[0][3],
                                                     para_Pose[0][4],
                                                     para_Pose[0][5]).toRotationMatrix());
    // rotation angle about the z axis
    double y_diff = origin_R0.x() - origin_R00.x();
    //TODO
    Matrix3d rot_diff = Utility::ypr2R(Vector3d(y_diff, 0, 0));
    for (int i = 0; i <= WINDOW_SIZE; i++)
    {
        Rs[i] = rot_diff * Quaterniond(para_Pose[i][6], para_Pose[i][3], para_Pose[i][4], para_Pose[i][5]).normalized().toRotationMatrix();
        Ps[i] = rot_diff * Vector3d(para_Pose[i][0] - para_Pose[0][0],
                                    para_Pose[i][1] - para_Pose[0][1],
                                    para_Pose[i][2] - para_Pose[0][2]) + origin_P0;
        Vs[i] = rot_diff * Vector3d(para_SpeedBias[i][0],
                                    para_SpeedBias[i][1],
                                    para_SpeedBias[i][2]);
        Bas[i] = Vector3d(para_SpeedBias[i][3],
                          para_SpeedBias[i][4],
                          para_SpeedBias[i][5]);
        Bgs[i] = Vector3d(para_SpeedBias[i][6],
                          para_SpeedBias[i][7],
                          para_SpeedBias[i][8]);
    }
    Vector3d cur_P0 = Ps[0];
    if (LOOP_CLOSURE && loop_enable)
    {
        loop_enable = false;
        for (int i = 0; i < WINDOW_SIZE; i++)
        {
            if (front_pose.header == Headers[i])
            {
                Matrix3d Rs_loop = Quaterniond(front_pose.loop_pose[6], front_pose.loop_pose[3], front_pose.loop_pose[4], front_pose.loop_pose[5]).normalized().toRotationMatrix();
                Vector3d Ps_loop = Vector3d(front_pose.loop_pose[0], front_pose.loop_pose[1], front_pose.loop_pose[2]);
                Rs_loop = rot_diff * Rs_loop;
                Ps_loop = rot_diff * (Ps_loop - Vector3d(para_Pose[0][0], para_Pose[0][1], para_Pose[0][2])) + origin_P0;
                double drift_yaw = Utility::R2ypr(front_pose.Q_old.toRotationMatrix()).x() - Utility::R2ypr(Rs_loop).x();
                r_drift = Utility::ypr2R(Vector3d(drift_yaw, 0, 0));
                //r_drift = front_pose.Q_old.toRotationMatrix() * Rs_loop.transpose();
                t_drift = front_pose.P_old - r_drift * Ps_loop;
            }
        }
    }
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        tic = Vector3d(para_Ex_Pose[i][0],
                       para_Ex_Pose[i][1],
                       para_Ex_Pose[i][2]);
        ric = Quaterniond(para_Ex_Pose[i][6],
                          para_Ex_Pose[i][3],
                          para_Ex_Pose[i][4],
                          para_Ex_Pose[i][5]).toRotationMatrix();
    }
    VectorXd dep = f_manager.getDepthVector();
    for (int i = 0; i < f_manager.getFeatureCount(); i++)
    {
        dep(i) = para_Feature[i][0];
    }
    f_manager.setDepth(dep);
}
```
## Four-Degree-of-Freedom Optimization
Four-degree-of-freedom optimization is performed only when a loop closure exists.

- It uses no feature-point information, only keyframe poses.
- Its residuals are computed mainly from the loop-closure information.
- The keyframe poses it consumes are those obtained by IMU integration and refined by the sliding-window optimization.
- It finally produces a new drift estimate r_drift, t_drift.
- It optimizes only the poses of the keyframes inside the loop.
- The pose of old_KeyFrame is held fixed.
Derivation:
Case 1: the frame has no loop closure.
Given a keyframe pose R_i, T_i and the pose R_{i-j}, T_{i-j} of some keyframe within 5 frames before it:
ConsAngle_{i-j} = Angle_{i-j}
\Delta T = {R_{i-j}}^{-1}\left(T_i - T_{i-j}\right)
\Delta zAngle = Angle_i.z - Angle_{i-j}.z
During the optimization iterations (only the z-axis rotation is assumed to change):
R_{i-j}^w = \left(Angle_{i-j}.z, ConsAngle_{i-j}.y, ConsAngle_{i-j}.z\right)
{\Delta T}' = {R_{i-j}^w}^{-1}\left(T_i - T_{i-j}\right)
The resulting residual is:
residuals_{4 \times 1} = \begin{bmatrix}
{\Delta T}' - \Delta T \\
Angle_i.z - Angle_{i-j}.z - \Delta zAngle
\end{bmatrix}
The Jacobians are obtained by automatic differentiation; ceres provides this.
The process uses the loss function ceres::HuberLoss(1.0):
\rho\left(s\right) = \begin{cases}
s \qquad \qquad s \le 1 \\
2\sqrt{s} - 1 \quad s > 1
\end{cases}
After the loss function is applied, the residual becomes:
residuals_{4 \times 1} = \rho\left(\|residuals_{4 \times 1}\|_2^2\right) residuals_{4 \times 1}
Here j ∈ {1, 2, 3, 4, 5}, i.e. each keyframe is constrained against its 5 predecessors.
Case 2: the frame has a loop closure.
Given R_{old}, T_{old}, R_i, T_i, relative\_t, relative\_q, relative\_yaw, where:
\Delta T = relative\_t, \quad \Delta zAngle = relative\_yaw
ConsAngle_{old} = Angle_{old}
The residual is computed in the same way, except that the translation residual is multiplied by a factor of 10; the intent is presumably that the loop-closure residual should receive more weight during optimization.
residuals_{4 \times 1} = \begin{bmatrix}
\left({\Delta T}' - \Delta T\right) * 10 \\
Angle_i.z - Angle_{old}.z - \Delta zAngle
\end{bmatrix}
The Jacobians are again obtained by ceres automatic differentiation.
After this optimization finishes, the drift r_drift, t_drift must be updated again from cur_KeyFrame and old_KeyFrame.
Given R_{cur}, T_{cur}, OriginR_{cur}, OriginT_{cur}, where R_{cur}, T_{cur} is the optimized pose and OriginR_{cur}, OriginT_{cur} is the pose obtained from IMU integration followed by sliding-window optimization:
yaw\_drift = Angle_{cur}.z - OriginAngle_{cur}.z
r\_drift = \left(yaw\_drift, 0, 0\right).Matrix_{3 \times 3}
t\_drift = T_{cur} - r\_drift * OriginT_{cur}