Overview
Guided Policy Search is arguably Sergey Levine's breakout work, so it deserves a careful read. Since it sits in the model-based RL setting, the paper is packed with equations; I'll add intuitive summaries wherever possible. I claim this paper isn't that hard — believe it or not! 🐎
Feel free to jump straight to the one-sentence summary of the paper's idea at the end.
1. GPS Preliminaries
LQR Basics
Deep RL CS285 lec10–lec12: Model-Based RL
The theory behind GPS is covered in those two posts on model-based reinforcement learning; let's briefly review it and go a bit deeper.
1.1 MBRL with a Known Model
1.1.1 Deterministic dynamics model
Recall that when the dynamics model in model-based RL is deterministic, the problem is defined as:
$$a_1,a_2,\dots,a_T=\arg\max_{a_1,a_2,\dots,a_T}\sum_{t=1}^T r(s_t,a_t)\quad \text{s.t.}\quad s_{t+1}=f(s_t,a_t)$$

$$\min_{\mathbf{u}_1,\dots,\mathbf{u}_T,\,\mathbf{x}_1,\dots,\mathbf{x}_T}\sum_{t=1}^T c(\mathbf{x}_t,\mathbf{u}_t)\quad \text{s.t.}\quad \mathbf{x}_t=f(\mathbf{x}_{t-1},\mathbf{u}_{t-1})$$

The first is stated in terms of reward, the second in terms of cost; the cost form is the traditional one in optimal control, so we adopt the second, cost-based formulation to match the paper.
Substituting the constraint into the objective gives:
$$\min_{\mathbf{u}_1,\dots,\mathbf{u}_T} c(\mathbf{x}_1,\mathbf{u}_1)+c\big(f(\mathbf{x}_1,\mathbf{u}_1),\mathbf{u}_2\big)+\cdots+c\big(f(f(\dots)\dots),\mathbf{u}_T\big)$$
LQR then solves this by taking a first-order (linear) approximation of the deterministic dynamics model and a second-order (quadratic) approximation of the cost function:
$$\min_{u_1,\dots,u_T}\sum_{t=1}^T c(x_t,u_t)\quad \text{s.t.}\quad x_t=f(x_{t-1},u_{t-1})$$

$$f(x_t,u_t)=F_t\begin{bmatrix}x_t\\u_t\end{bmatrix}+f_t$$

$$c(x_t,u_t)=\frac{1}{2}\begin{bmatrix}x_t\\u_t\end{bmatrix}^T C_t\begin{bmatrix}x_t\\u_t\end{bmatrix}+\begin{bmatrix}x_t\\u_t\end{bmatrix}^T c_t$$
Given the initial state $x_1$, a Backward Pass yields the control law:
$$u_t=K_tx_t+k_t,\quad t=1,2,\dots,T$$
A Forward Pass then recovers the optimal control sequence:
$$x_2=f(x_1,u_1)\\
u_2=K_2x_2+k_2\\
x_3=f(x_2,u_2)\\
\vdots\\
u_T=K_Tx_T+k_T$$
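The two passes above can be sketched in a few lines of NumPy (a minimal sketch under the linear-quadratic assumptions; the array shapes and the terminal condition $V_{T+1}=0$ are my own conventions, not from the paper):

```python
import numpy as np

def lqr(F, f, C, c, x1, T, n):
    """Backward pass for the linear-quadratic problem above.
    F[t]: (n, n+m) dynamics matrix, f[t]: (n,) offset,
    C[t]: (n+m, n+m) cost Hessian, c[t]: (n+m,) cost gradient."""
    V, v = np.zeros((n, n)), np.zeros(n)    # value function beyond the horizon
    K, k = [None] * T, [None] * T
    for t in reversed(range(T)):
        Q = C[t] + F[t].T @ V @ F[t]        # quadratic term of the Q-function
        q = c[t] + F[t].T @ V @ f[t] + F[t].T @ v
        Qxx, Qxu, Qux, Quu = Q[:n, :n], Q[:n, n:], Q[n:, :n], Q[n:, n:]
        qx, qu = q[:n], q[n:]
        K[t] = -np.linalg.solve(Quu, Qux)   # feedback gain
        k[t] = -np.linalg.solve(Quu, qu)    # feedforward term
        V = Qxx + Qxu @ K[t] + K[t].T @ Qux + K[t].T @ Quu @ K[t]
        v = qx + Qxu @ k[t] + K[t].T @ qu + K[t].T @ Quu @ k[t]
    # Forward pass: roll out u_t = K_t x_t + k_t through the linear dynamics.
    xs, us = [x1], []
    for t in range(T):
        us.append(K[t] @ xs[-1] + k[t])
        xs.append(F[t] @ np.concatenate([xs[-1], us[-1]]) + f[t])
    return K, k, xs, us
```

For instance, on the 1-D system $x_{t+1}=x_t+u_t$ with cost $\frac{1}{2}(x_t^2+u_t^2)$, this produces negative feedback gains that drive the state toward zero.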
LQR's fatal limitation is its assumption that the dynamics model $f(x_t,u_t)$ is a linear function and the cost function $c(x_t,u_t)$ is quadratic. When the environment's dynamics model or cost function is more complex, e.g. a stochastic dynamics model, you need iLQR!
1.1.2 Stochastic dynamics model
When the dynamics model is stochastic, the problem is defined as:
$$a_1,\dots,a_T=\arg\min_{a_1,\dots,a_T}E_{s'\sim p(s'|s,a)}\Big[\sum_{t=1}^T c(s_t,a_t)\,\Big|\,a_1,\dots,a_T\Big]$$

$$p(s_1,s_2,\dots,s_T|a_1,\dots,a_T)=p(s_1)\prod_{t=1}^T p(s_{t+1}|s_t,a_t)$$
For example, iLQR can capture the environment's more complex dynamics with a Gaussian whose mean is a linear function:
$$\text{iLQR}:\quad f(x_t,u_t)=N\Big(F_t\begin{bmatrix}x_t\\u_t\end{bmatrix}+f_t,\ \Sigma_t\Big)$$
iLQR takes a hand-initialized trajectory $\hat x_t,\hat u_t$ as input; each inner LQR runs on the deviation between the new and old trajectories, $x_t-\hat x_t,\ u_t-\hat u_t$. A Backward Pass yields the control law $u_t=K_t(x_t-\hat x_t)+k_t+\hat u_t$, and a Forward Pass then produces an improved trajectory $x_t,u_t$.
This control law can also be given a line-search parameter:

$$u_t=K_t(x_t-\hat x_t)+\alpha k_t+\hat u_t$$

Continuing in Gaussian form, the iLQR controller, i.e. the policy, becomes

$$N\big(K_t(x_t-\hat x_t)+\alpha k_t+\hat u_t,\ \sigma_t\big)$$

which is called a time-varying linear model.
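A sketch of how the line-search parameter $\alpha$ is typically used in the forward pass (the helper `forward_rollout(alpha)` is hypothetical: it would run the control law above through the dynamics and return the trajectory and its total cost; this backtracking scheme is a common choice, not necessarily the paper's exact one):

```python
def ilqr_line_search(forward_rollout, old_cost, alphas=(1.0, 0.5, 0.25, 0.1)):
    """Backtrack on the open-loop term alpha * k_t until the rollout
    actually lowers the total cost; otherwise keep the old trajectory."""
    for alpha in alphas:
        traj, cost = forward_rollout(alpha)
        if cost < old_cost:
            return traj, cost, alpha
    return None, old_cost, 0.0   # no improvement found
```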
1.2 Learning the Model
LQR handles a known linear model, and iLQR a known model of more complex form; but how is the model learned in the first place? (The dynamics model here is $p(s'|s,a)$.)
Common choices for the dynamics model are:
Gaussian process model: data-efficient, but slow to train and poor at capturing non-smooth dynamics
Neural network: generalizes well, but needs lots of data, i.e. $(s,a,s')\in D$
Bayesian linear regression: a middle ground between the two; specify a prior, then fit
From a divide-and-conquer perspective, the dynamics model is a mapping $f:S\times A\rightarrow S$. Naturally, the observed $(s,a,s')$ tuples can hardly cover the whole state space, and even where they do, different regions of the state space may have completely different dynamics, so it is hard for one global model to fit the entire mapping. Mastering a complex dynamics model therefore involves a distinction between local models and a global model.
Intuitively, as an initial policy $\pi_\theta(a|s)$ converges toward an optimal policy $\pi_{\theta^*}(a|s)$, the two policies explore different parts of the state space. Learning a dynamics model runs into the same problem: if a naive policy collects the transitions $(s,a,s')$, the resulting dynamics model is bound to be incomplete and won't cover enough of the optimal policy's state space.
As an analogy, treat a person as the agent and the world as the dynamics model: a child (the initial policy) collects an environment model from familiar data like toys, games, and e-sports. Could a model trained on that data really predict the world an expert (the optimal policy) operates in, like global affairs? Hardly.
Training a perfect global model is therefore hard; the covariate shift in the data must be dealt with, so we iterate starting from local models. When there is no environment model at all, i.e. Unknown Dynamics, the best approach is surely to learn as you go: the initial policy collects some trajectories, a local dynamics model is fit to those trajectories and used to update the policy, the updated policy collects more trajectories, another local dynamics model is fit, and so on toward the optimal policy.
1.3 Unknown Dynamics
In outline:
Run controller (Policy) on robot to collect trajectories
Fit dynamics model to collected trajectories
Update policy with new dynamics model
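The three steps above can be sketched as a loop (placeholder stubs only, to show the data flow; none of these helper names come from the paper):

```python
def run_controller(controller):
    # Placeholder: would roll the current controller out on the robot
    # and return a list of (x_t, u_t, x_{t+1}) transitions.
    return [("x", "u", "x_next")]

def fit_dynamics(data):
    # Placeholder: would fit e.g. Bayesian linear regression to `data`.
    return {"n_transitions": len(data)}

def improve_controller(model, controller):
    # Placeholder: would run iLQR under the fitted local model.
    return controller + 1

def model_based_rl_loop(controller=0, iters=3):
    data = []
    for _ in range(iters):
        data += run_controller(controller)                  # 1. collect trajectories
        model = fit_dynamics(data)                          # 2. fit dynamics
        controller = improve_controller(model, controller)  # 3. update controller
    return controller, model
```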
The steps are detailed below.
1.3.1 Run the controller (policy)
Starting from some initial trajectories $\{\tau^{(i)}\}_{i=1}^N$, run iLQR on the objective

$$\min\sum_{t=1}^T E_{(x_t,u_t)}\big[c(x_t,u_t)-H(u_t|x_t)\big]$$

which yields a Controller, i.e. a Policy:

$$N\big(K_t(x_t-\hat x_t)+\alpha k_t+\hat u_t,\ Q_{u_t,u_t}^{-1}\big)$$
1.3.2 Fitting dynamics
Interacting with the environment's true dynamics using the controller above yields transitions $(x_t,u_t,x_{t+1})$; a Bayesian linear regression then fits the dynamics model:

$$p(x_{t+1}|x_t,u_t)=N(A_tx_t+B_tu_t+c,\ \sigma)$$
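A minimal sketch of fitting this linear-Gaussian model (using ridge-regularized least squares as a stand-in for the Bayesian linear regression prior; shapes and names are my own):

```python
import numpy as np

def fit_linear_dynamics(X, U, X_next, reg=1e-6):
    """Fit x_{t+1} ~ A x_t + B u_t + c by regularized least squares;
    the ridge term `reg` stands in for the Gaussian prior.
    X: (N, n) states, U: (N, m) actions, X_next: (N, n) next states."""
    N, n = X.shape
    Phi = np.hstack([X, U, np.ones((N, 1))])        # features [x, u, 1]
    W = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ X_next)
    A, B, c = W[:n].T, W[n:-1].T, W[-1]
    resid = X_next - Phi @ W
    Sigma = resid.T @ resid / N                      # MLE noise covariance
    return A, B, c, Sigma
```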
1.3.3 Improve the controller
Because the dynamics model is local, when the policy update runs iLQR under it, the new trajectory $\tau$ must stay within some bound of the old trajectory $\bar\tau$. In formulas:

$$p(\tau)=p(x_1)\prod_{t=1}^T p(u_t|x_t)\,p(x_{t+1}|x_t,u_t)$$

$$p(u_t|x_t)=N(K_t(x_t-\hat x_t)+k_t+\hat u_t,\ \Sigma_t)$$
The optimization problem becomes:

$$\min_{p(\tau)}\sum_{t=1}^T E_{p(x_t,u_t)}\big[c(x_t,u_t)\big]\quad \text{s.t.}\quad D_{KL}(p(\tau)\|\bar p(\tau))\le\epsilon$$

$$p(\tau)=p(x_1)\prod_{t=1}^T p(u_t|x_t)\,p(x_{t+1}|x_t,u_t)$$

$$\bar p(\tau)=p(x_1)\prod_{t=1}^T\bar p(u_t|x_t)\,p(x_{t+1}|x_t,u_t)$$
$$\begin{aligned}
D_{KL}(p(\tau)\|\bar p(\tau))&=E_{p(\tau)}\Big[\log\frac{p(\tau)}{\bar p(\tau)}\Big]\\
&=E_{p(\tau)}\big[\log p(\tau)-\log\bar p(\tau)\big]\\
&=E_{(x_t,u_t)\sim p(\tau)}\Big[\sum_{t=1}^T\big(\log p(u_t|x_t)-\log\bar p(u_t|x_t)\big)\Big]\\
&=\sum_{t=1}^T\Big[E_{p(x_t,u_t)}\log p(u_t|x_t)+E_{p(x_t,u_t)}\big[-\log\bar p(u_t|x_t)\big]\Big]\\
&=\sum_{t=1}^T\Big[E_{p(x_t)}E_{p(u_t|x_t)}\log p(u_t|x_t)+E_{p(x_t,u_t)}\big[-\log\bar p(u_t|x_t)\big]\Big]\\
&=\sum_{t=1}^T\Big[-E_{p(x_t)}\big[H\big(p(u_t|x_t)\big)\big]+E_{p(x_t,u_t)}\big[-\log\bar p(u_t|x_t)\big]\Big]\\
&=\sum_{t=1}^T E_{p(x_t)}\Big[H[p,\bar p]-H[p]\Big]\\
&=\sum_{t=1}^T E_{p(x_t,u_t)}\big[-\log\bar p(u_t|x_t)-H[p(u_t|x_t)]\big]
\end{aligned}$$

(From the second line to the third, the shared initial-state and dynamics terms cancel, since $p(\tau)$ and $\bar p(\tau)$ differ only in the policy terms.)
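As a quick numerical sanity check of the final identity (cross-entropy minus entropy) for a single 1-D Gaussian step, with parameters of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_kl(mu_p, s_p, mu_q, s_q, n=200_000):
    """Estimate KL(p || p_bar) as E_p[-log p_bar] - H[p], i.e. cross-entropy
    minus entropy, matching the last line of the derivation above."""
    x = rng.normal(mu_p, s_p, n)
    cross_entropy = np.mean(0.5 * np.log(2 * np.pi * s_q**2)
                            + (x - mu_q)**2 / (2 * s_q**2))
    entropy = 0.5 * np.log(2 * np.pi * np.e * s_p**2)
    return cross_entropy - entropy

def closed_form_kl(mu_p, s_p, mu_q, s_q):
    # Standard closed form for KL between two 1-D Gaussians.
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5
```

The Monte Carlo estimate agrees with the closed form to within sampling noise.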
So we apply a Lagrangian to turn this constrained problem into an unconstrained objective and then optimize its dual. The problem is

$$\min_{p(\tau)}\sum_{t=1}^T E_{p(x_t,u_t)}\big[c(x_t,u_t)\big]\quad \text{s.t.}\quad D_{KL}(p(\tau)\|\bar p(\tau))\le\epsilon$$
The primal problem becomes:

$$\min_p\max_\lambda\ \sum_{t=1}^T E_{p(x_t,u_t)}\big[c(x_t,u_t)\big]+\lambda\big(D_{KL}(p(\tau)\|\bar p(\tau))-\epsilon\big)$$
And the dual problem:

$$\max_\lambda\min_p\ \sum_{t=1}^T E_{p(x_t,u_t)}\big[c(x_t,u_t)\big]+\lambda\big(D_{KL}(p(\tau)\|\bar p(\tau))-\epsilon\big)$$
Then fix $\lambda$ and optimize the inner problem:
$$\begin{aligned}
&\min_p\sum_{t=1}^T E_{p(x_t,u_t)}\Big[c(x_t,u_t)-\lambda\log\bar p(u_t|x_t)-\lambda H(p(u_t|x_t))\Big]-\lambda\epsilon\\
&=\min_p\sum_{t=1}^T E_{p(x_t,u_t)}\Big[\frac{1}{\lambda}c(x_t,u_t)-\log\bar p(u_t|x_t)-H(p(u_t|x_t))\Big]-\epsilon\\
&=\min_p\sum_{t=1}^T E_{p(x_t,u_t)}\Big[\bar c(x_t,u_t)-H(p(u_t|x_t))\Big]-\epsilon
\end{aligned}$$

where $\bar c(x_t,u_t)=\frac{1}{\lambda}c(x_t,u_t)-\log\bar p(u_t|x_t)$; dividing through by $\lambda$ rescales the objective without changing the minimizer. Running iLQR on this objective yields a new controller policy $p^*(u_t|x_t)$.
Then substitute $p^*(u_t|x_t)$ into the objective and update the outer variable $\lambda$ by gradient ascent:

$$\lambda\leftarrow\lambda+\alpha\big(D_{KL}(p(\tau)\|\bar p(\tau))-\epsilon\big)$$
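The dual update is a one-liner (a sketch; the clip at zero is the standard treatment of an inequality-constraint multiplier, though it isn't written explicitly above):

```python
def dual_update(lmbda, kl, eps, alpha=0.1):
    """One gradient-ascent step on the dual variable:
    lambda <- lambda + alpha * (D_KL - eps), clipped at zero."""
    return max(0.0, lmbda + alpha * (kl - eps))
```

When the KL exceeds the budget, λ grows and the trust region tightens; when the KL is under budget, λ shrinks and the cost term dominates again.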
1.4 Mini-summary
If the equations above left you dizzy, here is the workflow in brief:
If the dynamics model is known and deterministic, run LQR from the initial state and the action sequence falls out (e.g. the dynamics model of Gomoku can simply be written down by hand: it's discrete and easy to specify).
If the dynamics model is known and stochastic, initialize a trajectory and run the iLQR Backward Pass to get a decent controller $u_t=K_t(x_t-\hat x_t)+k_t+\hat u_t$; to give it some generalization, turn it into the Gaussian controller $N(K_t(x_t-\hat x_t)+k_t,\ Q_{u_t,u_t}^{-1})$; run the iLQR Forward Pass with this controller through the real dynamics to get a new trajectory; iterate, and the action sequence falls out.
If the dynamics model is unknown, randomly collect some transitions, fit a local dynamics model with Bayesian linear regression, run iLQR under this local model to get the Gaussian controller $N(K_t(x_t-\hat x_t)+k_t,\ Q_{u_t,u_t}^{-1})$, use it to collect more transitions, refit the local dynamics model, run iLQR again, and so on; iterating yields a good policy, i.e. the action sequence.
2. Guided Policy Search
Notice anything? All of the GPS background so far is classical! It goes by the name Trajectory Optimization, which is why it is so involved, and there is no neural network anywhere: the dynamics model comes from Bayesian linear regression, and the controller (policy) comes from iLQR. So where is the neural network?! This is exactly where a global policy, or global controller, comes in: the $\pi_\theta(u_t|o_t)$, i.e. $\pi_\theta(u_t|x_t)$, in the figure.
Now for the main topic!
First, a question: what is wrong with the classically derived controller $N(K_t(x_t-\hat x_t)+k_t,\ Q_{u_t,u_t}^{-1})$?
Answer: it needs a trained dynamics model first, plus an expert demonstration as the initial trajectory, from which a time-varying linear-Gaussian controller is derived. That controller is only useful near the demonstration; for states far from it, the controller knows nothing (which is exactly why each iteration requires the new controller to stay close to the old one!). So its generalization is terrible, and it all rests on a single expert demonstration!
What does Guided Policy Search buy us?
Answer: from many expert demonstrations, the classical method can produce many controllers! Supervised learning can then distill the sub-optimal, poorly generalizing expert knowledge captured by all of these controllers into one neural network. No more one controller per initial trajectory; a single neural network suffices!
Now for the procedure!
The notation differs slightly; in words:
Use iLQR to derive $n$ controllers $\pi_1,\dots,\pi_n$ from the expert demonstrations
Use these $n$ controllers to collect $m$ trajectories $\tau_1,\dots,\tau_m$
Run maximum-likelihood supervised learning on the $(s,a)$ pairs from the $m$ trajectories to obtain an initial network policy $\pi_{\theta^*}$
Use $\pi_1,\dots,\pi_n,\pi_{\theta^*}$ to interact with the environment and collect a set $S$ of trajectory samples (very much like a replay Buffer!)
Enter the iteration loop
Sample a subset $S_k$ from $S$ (very much like Off-Policy learning!)
At last, the RL Policy Gradient objective appears: with Importance Sampling and normalized importance weights, obtain a new policy $\pi_{\theta^k}$ (the paper optimizes this with LBFGS)
Run $\pi_{\theta^k}$ to collect some samples and add them to $S$
Run iLQR on $\bar r(x_t,u_t)=r(x_t,u_t)+\log\pi_{\theta^k}(u_t|x_t)$ to get a controller, and draw some adaptive samples with it
The rest proceeds as in the figure.
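The importance-sampled objective in the loop above can be sketched as a self-normalized estimator (a simplified per-trajectory version; the paper's regularized LBFGS objective has more pieces):

```python
import numpy as np

def is_return_estimate(logp_new, logp_old, returns):
    """Self-normalized importance-sampling estimate of the new policy's
    expected return from trajectories sampled under an older policy.
    logp_new / logp_old: per-trajectory log-probabilities under the new
    and behavior policies; returns: per-trajectory sums of rewards."""
    w = np.exp(logp_new - logp_old)   # importance weights
    w = w / w.sum()                   # normalize them (self-normalized IS)
    return float(w @ returns)
```

When the two policies coincide, the weights are uniform and the estimate reduces to the sample mean of the returns.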
3. Summary
The idea behind Guided Policy Search is remarkably plain!
In one sentence: use classical methods on expert demonstrations to produce a handful of poorly generalizing controllers, then fit an expressive model to those controllers.
So the Guidance comes from the "local experts" obtained by classical methods.
This idea has since been extended widely; for example, the NIPS 2016 paper GAIL guides the Policy Update process by matching the occupancy measure of expert demonstrations!