Reinforcement Learning: Vanilla Policy Gradient (VPG)

Background

The key idea behind policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return, until the optimal policy is reached.

Quick Facts

  • VPG is an on-policy algorithm.
  • VPG can be used for environments with either continuous or discrete action spaces.
  • VPG can be parallelized by combining it with MPI.

Key Equations

Let $\pi_\theta$ denote a policy with parameters $\theta$, and $J(\pi_\theta)$ denote the expected finite-horizon undiscounted return of the policy. The gradient of $J(\pi_\theta)$ is

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau \sim \pi_{\theta}}{E}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) \, A^{\pi_{\theta}}(s_t, a_t) \right],$$

where $\tau$ is a trajectory and $A^{\pi_\theta}$ is the advantage function for the current policy.
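
As a sanity check on this estimator, the quantity inside the expectation can be computed directly from sampled data. Below is a minimal NumPy sketch of a single-trajectory estimate (my own illustration, not code from the source), assuming the per-step gradients of the log-probabilities are already available:

```python
import numpy as np

def pg_estimate(grad_logp, adv):
    """Single-trajectory sample estimate of the policy gradient.

    grad_logp: array of shape (T+1, d) holding grad_theta log pi_theta(a_t|s_t)
               for each timestep of the trajectory.
    adv:       array of shape (T+1,) holding advantage estimates A(s_t, a_t).
    Returns a d-dimensional estimate of grad_theta J(pi_theta); averaging it
    over many sampled trajectories approximates the expectation above.
    """
    return (grad_logp * adv[:, None]).sum(axis=0)
```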

The policy gradient algorithm updates the policy parameters via stochastic gradient ascent on policy performance:

$$\theta_{k+1} = \theta_k + \alpha \nabla_{\theta} J(\pi_{\theta_k}).$$

Policy gradient implementations typically compute advantage function estimates based on the infinite-horizon discounted return, even though they otherwise use the finite-horizon undiscounted policy gradient formula.
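
In code this is a single ascent step; real implementations usually feed the estimated gradient to an optimizer such as Adam, but the plain rule is just one line. A sketch under those assumptions (illustrative only, not the reference implementation):

```python
import numpy as np

def gradient_ascent_step(theta, grad_J, alpha=3e-4):
    # theta:  current policy parameters theta_k (a NumPy array)
    # grad_J: estimate of grad_theta J(pi_theta_k), same shape as theta
    # alpha:  step size; note the plus sign -- ascent on J, not descent
    return theta + alpha * grad_J
```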

Exploration vs. Exploitation

VPG trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both the initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, because the update rule encourages it to exploit rewards it has already found. This may cause the policy to get trapped in local optima.
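
The toy sketch below (my own, with made-up logits) illustrates this behavior for a discrete-action policy: actions are always sampled from the current softmax distribution, so exploration narrows automatically as training sharpens the logits.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(logits):
    # On-policy stochastic action selection: sample from the softmax
    # distribution defined by the current policy's logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

early = np.array([0.1, 0.1, 0.1])   # near-uniform policy: broad exploration
late  = np.array([5.0, 0.1, 0.1])   # sharpened policy: almost always action 0
print([sample_action(early) for _ in range(10)])
print([sample_action(late) for _ in range(10)])
```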

Pseudocode

[Figure: Vanilla Policy Gradient pseudocode]
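
Since the pseudocode figure is not reproduced here, the following is a self-contained toy sketch of the same loop on a three-armed bandit with a softmax policy: collect a batch of (one-step) trajectories under the current policy, form advantage estimates against a learned baseline, take one policy gradient ascent step, and refit the baseline. This is my own illustration, not the Spinning Up implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.9])   # expected reward of each arm
theta = np.zeros(3)                      # policy parameters (softmax logits)
baseline = 0.0                           # crude stand-in for the value function
alpha, beta = 0.1, 0.05                  # policy / baseline step sizes

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for epoch in range(200):
    grad = np.zeros_like(theta)
    batch_rewards = []
    for _ in range(32):                              # collect a batch of "trajectories"
        probs = softmax(theta)
        a = rng.choice(3, p=probs)                   # sample action from the current policy
        r = true_means[a] + 0.1 * rng.standard_normal()
        adv = r - baseline                           # advantage estimate
        grad_logp = -probs
        grad_logp[a] += 1.0                          # grad_theta log pi_theta(a) for a softmax policy
        grad += grad_logp * adv
        batch_rewards.append(r)
    theta += alpha * grad / 32                       # policy gradient ascent step
    baseline += beta * (np.mean(batch_rewards) - baseline)  # refit the baseline

print(softmax(theta))   # probability mass should concentrate on the best arm (index 2)
```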

Documentation

spinup.vpg(env_fn, actor_critic=, ac_kwargs={}, seed=0, steps_per_epoch=4000, epochs=50, gamma=0.99, pi_lr=0.0003, vf_lr=0.001, train_v_iters=80, lam=0.97, max_ep_len=1000, logger_kwargs={}, save_freq=10)
(A minimal usage sketch follows the parameter list below.)
Parameters:

  • env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
  • actor_critic – A function which takes in placeholder symbols for state, x_ph, and action, a_ph, and returns the main outputs from the agent’s Tensorflow computation graph. [Figure: table of the required actor_critic outputs]
  • ac_kwargs (dict) – Any kwargs appropriate for the actor_critic function you provided to VPG.
  • seed (int) – Seed for random number generators.
  • steps_per_epoch (int) – Number of steps of interaction (state-action pairs) for the agent and the environment in each epoch.
  • epochs (int) – Number of epochs of interaction (equivalent to number of policy updates) to perform.
  • gamma (float) – Discount factor. (Always between 0 and 1.)
  • pi_lr (float) – Learning rate for policy optimizer.
  • vf_lr (float) – Learning rate for value function optimizer.
  • train_v_iters (int) – Number of gradient descent steps to take on value function per epoch.
  • lam (float) – Lambda for GAE-Lambda. (Always between 0 and 1, close to 1.)
  • max_ep_len (int) – Maximum length of trajectory / episode / rollout.
  • logger_kwargs (dict) – Keyword args for EpochLogger.
  • save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
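
A minimal invocation sketch for the function documented above. This assumes a TensorFlow-era Spinning Up installation where vpg is importable from the top-level spinup package and OpenAI Gym is available; the environment name, the hidden_sizes kwarg, and the logger settings are illustrative assumptions, not values prescribed by this documentation.

```python
import gym
from spinup import vpg

# Any environment satisfying the OpenAI Gym API works; CartPole-v0 is an
# illustrative (assumed) choice.
env_fn = lambda: gym.make('CartPole-v0')

vpg(env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),   # assumed kwarg of the default actor_critic
    seed=0,
    steps_per_epoch=4000,
    epochs=50,
    gamma=0.99,
    pi_lr=3e-4,
    vf_lr=1e-3,
    train_v_iters=80,
    lam=0.97,
    max_ep_len=1000,
    logger_kwargs=dict(output_dir='./vpg_out', exp_name='vpg_demo'),  # assumed EpochLogger kwargs
    save_freq=10)
```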

References

Policy Gradient Methods for Reinforcement Learning with Function Approximation, Sutton et al. 2000

Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs, Schulman 2016(a)

Benchmarking Deep Reinforcement Learning for Continuous Control, Duan et al. 2016

High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016(b)
