Intro
If you want to save time, it is recommended to jump straight to: Rainbow
Deep Q Network (Vanilla DQN)
Two key points to grasp:
- Replay Buffer
- Target Network
For details, see: https://www.cnblogs.com/dynmi/p/13994342.html
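As a minimal sketch of the first point, a replay buffer can simply be a fixed-size queue from which transitions are sampled uniformly (class and method names below are my own, not from the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer that stores transitions and samples them uniformly at random."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are dropped automatically

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```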
The loss function for updating the evaluate network (the online network):
\(Loss = \left(r + \gamma \max_{a' \in A} Q(s', a'|\theta^{-}) - Q(s, a|\theta)\right)^2\)
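A minimal sketch of this update, assuming PyTorch, a discrete action space, and hypothetical `q_net` / `target_net` modules (not the official implementation): gradients flow only through the online network \(\theta\), while the TD target uses the frozen target network \(\theta^{-}\), which is periodically synced with `target_net.load_state_dict(q_net.state_dict())`.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch is assumed to contain tensors: actions are int64, dones are 0/1 floats
    states, actions, rewards, next_states, dones = batch

    # Q(s, a | theta): value of the action actually taken, from the online network
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # TD target r + gamma * max_a' Q(s', a' | theta^-), computed with the frozen target network
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * max_next_q

    return F.mse_loss(q_sa, target)
```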
paper:
https://www.nature.com/articles/nature14236
official code:
https://sites.google.com/a/deepmind.com/dqn/
Double DQN
Compared with Vanilla DQN, only the computation of the TD target changes; the loss function becomes:
\(Loss = \left(r + \gamma\, Q(s', \arg\max_{a'} Q(s', a'|\theta)\,|\,\theta^{-}) - Q(s, a|\theta)\right)^2\)
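Continuing the same hypothetical PyTorch sketch, only the target changes: the online network \(\theta\) selects the greedy action, and the target network \(\theta^{-}\) evaluates it.

```python
import torch

def double_dqn_target(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        # action selection with the online network theta
        best_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        # action evaluation with the target network theta^-
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```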
Prioritized Replay Buffer
For details, see: https://www.cnblogs.com/dynmi/p/14004610.html
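In brief, transitions are sampled with probability proportional to (a power of) their TD error, and importance-sampling weights correct the resulting bias. Below is a simplified sketch of proportional prioritization (no sum-tree, so sampling is O(N); `alpha` and `beta` follow the conventions of the PER paper, and all names here are my own):

```python
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def push(self, transition):
        max_p = self.priorities.max() if self.data else 1.0  # new samples get max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[: len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # importance-sampling weights correct the non-uniform sampling bias
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps
```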
Dueling Network
This modifies the DQN network architecture: the final layer is split into two streams (channels), which are then combined to form the output (a code sketch follows the list below).
- Action-independent value function \(V(s, v)\)
- Action-dependent advantage function \(A(s, a, w)\)
- \(Q(s, a) = V(s, v) + A(s, a, w)\)
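A sketch of the dueling head, again assuming PyTorch; note that the Dueling DQN paper centres the advantage stream by subtracting its mean so that V and A remain identifiable:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Splits the final features into a value stream V(s) and an advantage stream A(s, a)."""

    def __init__(self, feature_dim, num_actions, hidden=128):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_actions))

    def forward(self, features):
        v = self.value(features)      # [B, 1]
        a = self.advantage(features)  # [B, num_actions]
        # Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)); mean-centering keeps V and A identifiable
        return v + a - a.mean(dim=1, keepdim=True)
```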
Architecture comparison:
Rainbow
As the name "rainbow" suggests, this algorithm is a fusion of several algorithms.
Combining Double DQN's TD target, the Prioritized Replay Buffer, Dueling DQN's network architecture, Multi-step Learning, Distributional RL, and NoisyNet yields the composite agent Rainbow.
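Of these components, multi-step learning is the only one not touched on above; roughly, it replaces the one-step return in the TD target with an n-step return, written here in simple one-step-DQN style rather than Rainbow's distributional form:

\(G_t^{(n)} = \sum_{k=0}^{n-1} \gamma^k r_{t+k+1} + \gamma^n \max_{a'} Q(s_{t+n}, a'|\theta^{-})\)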