Since my main research direction is actually multi-agent games, my grasp of single-agent RL, and policy gradients in particular, has stayed at the "use whatever is on GitHub" level, and I got mercilessly criticized for weak fundamentals in an early-round interview. So, starting today, let me patiently work through the theory!
Main reference: Reinforcement Learning: An Introduction, Sutton
Main reference course: Intro to Reinforcement Learning, Bolei Zhou
Code for this post: https://github.com/ThousandOfWind/RL-basic-alg.git
Policy Gradient (PG)
First, following the usual conventions, define

s \in \mathcal{S}, \quad a \in \mathcal{A}(s), \quad \boldsymbol{\theta} \in \mathbb{R}^{d'}

Since the policy is

\pi(a \mid s, \boldsymbol{\theta})=\operatorname{Pr}\left\{A_{t}=a \mid S_{t}=s, \boldsymbol{\theta}_{t}=\boldsymbol{\theta}\right\}

we assume the policy has some performance measure J(\boldsymbol{\theta}), and we want to update the policy along the gradient of J:

\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_{t}+\alpha \widehat{\nabla J\left(\boldsymbol{\theta}_{t}\right)}
Then I remembered the interviewer asking me what the gradient and the directional derivative mean, and I couldn't answer. How embarrassing!
The directional derivative measures a function's rate of change at a point along a particular direction; the gradient points in the direction where that rate of change is maximal, and its norm equals that maximal rate. Both play an important role in differential calculus.
I'll go back and review calculus properly once I've sorted out the algorithms here.
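To pin this down, here is a tiny numerical sketch of my own (nothing from the book; f(x, y) = x² + y² is just an example): the directional derivative along a unit vector u is ∇f · u, and it is largest when u points along the gradient, where it equals the gradient's norm.

```python
import numpy as np

# Example function f(x, y) = x^2 + y^2 with gradient (2x, 2y).
def grad_f(p):
    return 2.0 * p

p = np.array([1.0, 2.0])
g = grad_f(p)                        # gradient at p

# Directional derivative along a unit vector u: the dot product g . u.
u = np.array([1.0, 0.0])
d_u = g @ u                          # rate of change along the x-axis

# The gradient direction maximises this rate, and its norm is that maximum.
u_best = g / np.linalg.norm(g)
d_best = g @ u_best
```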
The problem thus becomes how to connect the performance measure with the policy.
Sutton's answer: simply define
J(\boldsymbol{\theta}) \doteq v_{\pi_{\boldsymbol{\theta}}}\left(s_{0}\right)

v_{\pi}(s)=\sum_{a} \pi(a \mid s) q_{\pi}(s, a)

⚠️ The value of the start state s_0 is used because he also assumes γ = 1, so v_{\pi_{\boldsymbol{\theta}}}(s_0) really does represent the sum of rewards along the trajectory.
The gradient of J can therefore be obtained from the gradient of the state value:
\begin{aligned}
\nabla v_{\pi}(s) &=\nabla\left[\sum_{a} \pi(a \mid s) q_{\pi}(s, a)\right], \quad \text{for all } s \in \mathcal{S} \\
&=\sum_{a}\left[\nabla \pi(a \mid s) q_{\pi}(s, a)+\pi(a \mid s) \nabla q_{\pi}(s, a)\right] \\
&=\sum_{a}\left[\nabla \pi(a \mid s) q_{\pi}(s, a)+\pi(a \mid s) \nabla \sum_{s^{\prime}, r} p\left(s^{\prime}, r \mid s, a\right)\left(r+v_{\pi}\left(s^{\prime}\right)\right)\right] \\
&=\sum_{a}\left[\nabla \pi(a \mid s) q_{\pi}(s, a)+\pi(a \mid s) \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) \nabla v_{\pi}\left(s^{\prime}\right)\right] \\
&=\sum_{a}\left[\nabla \pi(a \mid s) q_{\pi}(s, a)+\pi(a \mid s) \sum_{s^{\prime}} p\left(s^{\prime} \mid s, a\right) \sum_{a^{\prime}}\left[\nabla \pi\left(a^{\prime} \mid s^{\prime}\right) q_{\pi}\left(s^{\prime}, a^{\prime}\right)+\pi\left(a^{\prime} \mid s^{\prime}\right) \sum_{s^{\prime \prime}} p\left(s^{\prime \prime} \mid s^{\prime}, a^{\prime}\right) \nabla v_{\pi}\left(s^{\prime \prime}\right)\right]\right]
\end{aligned}
The next step, which unrolls this recursion into Pr(s → x, k, π), the probability of reaching state x from s in k steps under π, I honestly didn't fully follow... my bad. I'll go study it as soon as I finish this.
=\sum_{x \in \mathcal{S}} \sum_{k=0}^{\infty} \operatorname{Pr}(s \rightarrow x, k, \pi) \sum_{a} \nabla \pi(a \mid x) q_{\pi}(x, a)
Anyway, this yields the policy gradient theorem:

\nabla J(\boldsymbol{\theta}) \propto \sum_{s} \mu(s) \sum_{a} q_{\pi}(s, a) \nabla \pi(a \mid s, \boldsymbol{\theta})

where μ(s) is the on-policy distribution over states.
The symbol ∝ means "proportional to", which is perfectly fine here: for gradient ascent, any positive constant can be absorbed into the step size. This result is also why policy gradient methods enjoy good convergence guarantees.
Going one step further, we can derive
\begin{aligned}
\sum_{s} \mu(s) \sum_{a} q_{\pi}(s, a) \nabla \pi(a \mid s, \boldsymbol{\theta}) &=\mathbb{E}_{\pi}\left[\sum_{a} q_{\pi}\left(S_{t}, a\right) \nabla \pi\left(a \mid S_{t}, \boldsymbol{\theta}\right)\right] \\
&=\mathbb{E}_{\pi}\left[\sum_{a} \pi(a \mid S_{t}, \boldsymbol{\theta}) q_{\pi}(S_{t}, a) \frac{\nabla \pi(a \mid S_{t}, \boldsymbol{\theta})}{\pi(a \mid S_{t}, \boldsymbol{\theta})}\right] \\
&=\mathbb{E}_{\pi}\left[q_{\pi}(S_{t}, A_{t}) \nabla \ln \pi(A_{t} \mid S_{t}, \boldsymbol{\theta})\right]
\end{aligned}
So although J is what we use to evaluate the policy's performance, it is \nabla \ln \pi(a \mid s, \boldsymbol{\theta}) that people usually call the score function.
(I suspect I'm talking nonsense again...)
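As a sanity check on the last step above (the log-derivative trick), here's a sketch of my own with a made-up softmax policy over three actions and made-up q values for a single state: the exact sum and the sampled expectation should agree up to Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=3)                   # made-up softmax logits
q = np.array([1.0, -0.5, 2.0])               # made-up action values for one state

pi = np.exp(theta) / np.exp(theta).sum()
eye = np.eye(3)

# Exact form: sum_a q(a) * grad_theta pi(a); for a softmax,
# grad_theta pi(a) = pi(a) * (e_a - pi).
exact = sum(q[a] * pi[a] * (eye[a] - pi) for a in range(3))

# Sampled form: E_{a~pi}[ q(a) * grad_theta log pi(a) ],
# where grad_theta log pi(a) = e_a - pi for a softmax.
a = rng.choice(3, size=100_000, p=pi)
sampled = (q[a][:, None] * (eye[a] - pi)).mean(axis=0)
# exact and sampled should agree up to Monte Carlo noise
```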
REINFORCE
If we estimate the action value with the actual returns, we get this algorithm.
Its core idea is to estimate the q_{\pi}(S_{t},A_{t}) term with the Monte Carlo (MC) return, i.e.

\nabla J(\boldsymbol{\theta}) \propto \mathbb{E}_{\pi}\left[G\left(S_{t},A_{t}\right) \nabla \ln \pi\left(A_{t} \mid S_{t}, \boldsymbol{\theta}\right)\right]
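Concretely, the return is just the discounted reward-to-go, accumulated backwards over an episode. A minimal sketch with made-up rewards:

```python
# Discounted returns G_t = r_t + gamma * G_{t+1}, computed backwards.
gamma = 0.99
rewards = [1.0, 0.0, 2.0]            # made-up episode rewards

G = 0.0
returns = []
for r in reversed(rewards):
    G = r + gamma * G
    returns.append(G)
returns.reverse()                    # returns[t] now holds G_t
```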
I suddenly realised I haven't written any MC code in two years, so I decided to give it a try.
First comes the pseudocode from the textbook; the Python implementation then looks like this:
```python
def get_action(self, observation, *arg):
    obs = th.FloatTensor(observation)
    pi = self.pi(obs=obs)                     # policy network: action probabilities
    m = Categorical(pi)
    action_index = m.sample()
    self.log_pi_batch.append(m.log_prob(action_index))  # keep log pi(a|s) for the update
    return int(action_index), pi

def learn(self, memory):
    batch = memory.get_last_trajectory()
    G = copy.deepcopy(batch['reward'][0])
    # accumulate discounted returns backwards: G_t = r_t + gamma * G_{t+1}
    for index in range(2, len(G) + 1):
        G[-index] += self.gamma * G[-index + 1]
    G = th.FloatTensor(G)
    log_pi = th.stack(self.log_pi_batch)
    J = -(G * log_pi).mean()                  # minimising -J ascends the policy gradient
    self.writer.add_scalar('Loss/J', J.item(), self._episode)

    self.optimiser.zero_grad()
    J.backward()
    grad_norm = th.nn.utils.clip_grad_norm_(self.params, 10)
    self.optimiser.step()
    self._episode += 1
    self.log_pi_batch = []                    # reset the buffer for the next episode
```
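Before blaming the environment, it helps to check the update rule itself on the smallest possible problem. This is a sketch of my own (not from the repo above): plain-numpy REINFORCE, θ ← θ + α · r · ∇ln π(a|θ), on a hypothetical two-armed bandit where arm 1 pays more; the policy should come to prefer arm 1.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)                      # softmax logits over two arms
true_means = np.array([0.0, 1.0])        # made-up payoffs: arm 1 is better
alpha = 0.1

for _ in range(2000):
    pi = np.exp(theta) / np.exp(theta).sum()
    a = rng.choice(2, p=pi)
    r = rng.normal(true_means[a], 0.1)   # noisy reward
    grad_log_pi = np.eye(2)[a] - pi      # softmax score function
    theta += alpha * r * grad_log_pi     # REINFORCE ascent step

pi = np.exp(theta) / np.exp(theta).sum()
```

If this converges to the better arm but the full agent doesn't, the problem is more likely in the network, the hyperparameters, or the environment interaction than in the gradient itself.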
But, but...
I first tried MountainCar and it performed badly, so I consulted the reference implementation recommended in Zhou's tutorial: https://github.com/cuhkrlcourse/RLexample/blob/master/policygradient/reinforce.py
Switching to the 'CartPole-v1' environment, the results were still terrible...
Here are the results:
The loss & score do converge, but to a poor solution. Did I write something wrong?...
I then visualised the learned policy (red and green are the two actions); seen this way, it is only ever taking a single action.
REINFORCE-baseline
Since what we used above was

\nabla J(\boldsymbol{\theta}) \propto \mathbb{E}_{\pi}\left[G\left(S_{t},A_{t}\right) \nabla \ln \pi\left(A_{t} \mid S_{t}, \boldsymbol{\theta}\right)\right]
a baseline is introduced because G(S_{t},A_{t}) brings high variance. To mitigate this, we consider a baseline b(s) satisfying
\sum_{a} b(s) \nabla \pi(a \mid s, \boldsymbol{\theta})=b(s) \nabla \sum_{a} \pi(a \mid s, \boldsymbol{\theta})=b(s) \nabla 1=0
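This identity is easy to check numerically for a softmax policy (my own sketch, with made-up logits and an arbitrary constant standing in for b(s)):

```python
import numpy as np

theta = np.array([0.3, -1.0, 0.5])           # made-up softmax logits
pi = np.exp(theta) / np.exp(theta).sum()
eye = np.eye(3)

# For a softmax, grad_theta pi(a) = pi(a) * (e_a - pi).
grad_pi = [pi[a] * (eye[a] - pi) for a in range(3)]

b = 7.0                                       # any baseline value for this state
total = b * sum(grad_pi)                      # sum_a b(s) * grad pi(a|s, theta)
# total vanishes because the probabilities always sum to 1
```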
so that
\nabla J(\boldsymbol{\theta}) \propto \sum_{s} \mu(s) \sum_{a}\left(q_{\pi}(s, a)-b(s)\right) \nabla \pi(a \mid s, \boldsymbol{\theta})
The state value is usually used as the baseline:
```python
def get_action(self, observation, *arg):
    obs = th.FloatTensor(observation)
    pi = self.pi(obs=obs)
    m = Categorical(pi)
    action_index = m.sample()
    self.log_pi_batch.append(m.log_prob(action_index))
    self.b_batch.append(self.B(obs=obs))      # baseline network: state value b(s)
    return int(action_index), pi

def learn(self, memory):
    batch = memory.get_last_trajectory()
    G = copy.deepcopy(batch['reward'][0])
    # accumulate discounted returns backwards: G_t = r_t + gamma * G_{t+1}
    for index in range(2, len(G) + 1):
        G[-index] += self.gamma * G[-index + 1]
    G = th.FloatTensor(G)
    log_pi = th.stack(self.log_pi_batch)
    b = th.stack(self.b_batch).squeeze(-1)    # align shapes with G: (T,)
    # detach the advantage so the policy loss does not train the baseline
    J = -((G - b).detach() * log_pi).mean()
    # fit the baseline to the Monte Carlo return
    value_loss = F.smooth_l1_loss(b, G)
    loss = J + value_loss
    self.writer.add_scalar('Loss/J', J.item(), self._episode)
    self.writer.add_scalar('Loss/B', value_loss.item(), self._episode)
    self.writer.add_scalar('Loss/loss', loss.item(), self._episode)

    self.optimiser.zero_grad()
    loss.backward()
    grad_norm = th.nn.utils.clip_grad_norm_(self.params, 10)
    self.optimiser.step()
    self._episode += 1
    self.log_pi_batch, self.b_batch = [], []  # reset buffers for the next episode
```
However, the result was also tragic.
Comparison with DQN
Since I strongly suspected a problem in my code, I decided to compare against DQN. Although DQN diverges, its performance is indeed much better, and its policy visualisation also has the more desirable colour pattern.
Summary
On-policy indeed converges nicely, yet performs poorly;
off-policy does not converge, yet actually shows results.
Still, both outcomes are bad.
Possible causes:
my hyperparameters are poor;
the network architecture isn't good enough (in my experience, switching to an RNN might help);
there is a bug in my code... I'll come back and fix this once I find the cause.