A Policy Update Strategy in Model-free Policy Search: Expectation-Maximization

Expectation-Maximization Algorithm

Policy gradient methods require the user to specify a learning rate, which can be problematic and often results in an unstable learning process or slow convergence. By formulating policy search as an inference problem with latent variables and using the EM algorithm to infer a new policy, this problem can be avoided, since no learning rate is required.

The standard EM algorithm, well known for determining the maximum likelihood solution of a probabilistic latent variable model, computes the parameter update as a weighted maximum likelihood estimate, which has a closed-form solution for most commonly used policies.

Let’s assume that:

  • $y$: observed random variable
  • $z$: unobserved (latent) random variable
  • $p_\theta(y, z)$: parameterized joint distribution
Given a data set $Y = [y^{[1]}, \dots, y^{[N]}]^T$, we want to estimate the parameter $\theta$, which means maximizing the log-likelihood:
$$\max_\theta\, \log p_\theta(Y, Z)$$
Since $Z$ is a latent variable, we cannot solve this maximization problem directly. Instead, we marginalize $Z$ out and maximize the log-marginal likelihood of $Y$:
$$\log p_\theta(Y) = \sum_{i=1}^{N} \log p_\theta(y^{[i]}) = \sum_{i=1}^{N} \log \int p_\theta(y^{[i]}, z)\, dz$$
However, because the logarithm acts on the integral over $z$ rather than directly on the joint distribution, we cannot obtain a closed-form solution for the parameter $\theta$ of our probability model $p_\theta(y, z)$. But what exactly is a closed-form solution?

Closed-form solution:

An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally-accepted set. For example, an infinite sum would generally not be considered closed-form. However, the choice of what to call closed-form and what not is rather arbitrary since a new “closed-form” function could simply be defined in terms of the infinite sum.
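
For example, the geometric series is an infinite sum that nevertheless admits a closed-form value:

$$\sum_{k=0}^{\infty} a r^{k} = \frac{a}{1 - r}, \qquad |r| < 1.$$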

EM (Expectation-Maximization) is a powerful method for estimating the parameters of latent variable models. The basic idea is: if the parameter $\theta$ is known, we can estimate the optimal latent variable $Z$ in view of $Y$ (E-step); if the latent variable $Z$ is known, we can estimate $\theta$ by maximum likelihood estimation (M-step). The EM method can be seen as a kind of coordinate ascent on a lower bound of the log-likelihood.

The iterative procedure for maximizing the log-likelihood consists of two main parts, the expectation step and the maximization step, as mentioned above. Assume that we begin at an initial guess $\theta_0$ and then execute the following iterative steps (a minimal toy sketch follows the list):

  • Based on $\theta_t$, estimate the expectation of the latent variable $Z_t$.
  • Based on $Y$ and $Z_t$, estimate the parameter $\theta_{t+1}$ by maximum likelihood estimation.
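
As a minimal toy sketch of this loop (my own illustration, not from the survey), assume a Gaussian with known variance where some observations are missing; the missing entries play the role of the latent variable $Z$:

```python
import numpy as np

# Toy EM sketch: estimate the mean mu of a Gaussian with known variance when some
# of the N observations are missing; the missing entries are the latent variable Z.
rng = np.random.default_rng(0)
y_obs = rng.normal(loc=2.0, scale=1.0, size=80)   # observed part of the data set
n_missing = 20                                    # number of latent entries
N = len(y_obs) + n_missing

mu = 0.0                                          # initial guess theta_0
for t in range(50):
    # E-step: expected value of each missing entry under the current parameter
    z_expected = mu
    # M-step: maximum likelihood estimate of mu given Y and the expected Z
    mu = (y_obs.sum() + n_missing * z_expected) / N

print(mu)   # converges to the mean of the observed data
```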

In general, we do not only need the expectation of $Z$ but its full distribution, i.e. $p_{\theta_t}(Z \mid Y)$. To be specific, let us introduce an auxiliary variational distribution $q(Z)$ and decompose the log-marginal likelihood using the identity $p_\theta(Y) = p_\theta(Y, Z) / p_\theta(Z \mid Y)$:

$$
\begin{aligned}
\log p_\theta(Y) &= \log p_\theta(Y) \int q(Z)\, dZ = \int q(Z) \log p_\theta(Y)\, dZ \\
&= \int q(Z) \log \frac{q(Z)\, p_\theta(Y, Z)}{q(Z)\, p_\theta(Z \mid Y)}\, dZ \\
&= \int q(Z) \log \frac{p_\theta(Y, Z)}{q(Z)}\, dZ + \int q(Z) \log \frac{q(Z)}{p_\theta(Z \mid Y)}\, dZ \\
&= \mathcal{L}_\theta(q) + \mathrm{KL}\big(q(Z)\,\|\,p_\theta(Z \mid Y)\big)
\end{aligned}
$$
Since the KL divergence is always greater than or equal to zero, the term $\mathcal{L}_\theta(q)$ is a lower bound on the log-marginal likelihood.

E-Step

In the E-step we update the variational distribution $q(Z)$ by minimizing the KL divergence $\mathrm{KL}(q(Z)\,\|\,p_\theta(Z \mid Y))$, i.e. setting $q(Z) = p_\theta(Z \mid Y)$. Note that the log-marginal likelihood $\log p_\theta(Y)$ does not depend on the variational distribution $q(Z)$, so driving the KL term to zero pushes the lower bound $\mathcal{L}_\theta(q)$ up to the log-marginal likelihood. In summary, the E-step:

Update $q(Z)$: minimize $\mathrm{KL}(q(Z)\,\|\,p_\theta(Z \mid Y))$, i.e. set $q(Z) = p_\theta(Z \mid Y)$.

M-Step

In the M-step we optimize the lower bound w.r.t. $\theta$, i.e.

$$
\begin{aligned}
\theta_{\text{new}} &= \arg\max_\theta \mathcal{L}_\theta(q) = \arg\max_\theta \int q(Z) \log \frac{p_\theta(Y, Z)}{q(Z)}\, dZ \\
&= \arg\max_\theta \int q(Z) \log p_\theta(Y, Z)\, dZ + H(q) \\
&= \arg\max_\theta \mathbb{E}_{q(Z)}\big[\log p_\theta(Y, Z)\big] = \arg\max_\theta Q_\theta(q)
\end{aligned}
$$
where $H(q)$ denotes the entropy of $q$ (which does not depend on $\theta$) and $Q_\theta(q)$ is the expected complete-data log-likelihood. The logarithm now acts directly on the joint distribution, so the M-step can typically be solved in closed form. Moreover,
$$Q_\theta(q) = \int q(Z) \log p_\theta(Y, Z)\, dZ = \sum_{i=1}^{N} \int q_i(z) \log p_\theta(y^{[i]}, z)\, dz$$
The M-step is therefore a weighted maximum likelihood estimate of $\theta$ using the complete data points $[y^{[i]}, z]$ weighted by $q_i(z)$. In summary, the M-step:
Update $\theta$: maximize $Q_\theta(q)$.
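
As a concrete classical instance (illustration only, not taken from the survey), consider a two-component Gaussian mixture in one dimension: the E-step computes the responsibilities $q_i(z)$ as the posterior over the mixture component, and the M-step is exactly such a weighted maximum likelihood estimate with a closed-form solution:

```python
import numpy as np
from scipy.stats import norm

# Minimal EM sketch for a two-component 1D Gaussian mixture (toy illustration).
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(3.0, 1.0, 150)])

pi = np.array([0.5, 0.5])        # mixture weights
mu = np.array([-1.0, 1.0])       # component means
sigma = np.array([1.0, 1.0])     # component standard deviations

for t in range(100):
    # E-step: responsibilities q_i(z = k) = p_theta(z = k | y_i)
    dens = np.stack([pi[k] * norm.pdf(y, mu[k], sigma[k]) for k in range(2)], axis=1)
    q = dens / dens.sum(axis=1, keepdims=True)                # shape (N, 2)

    # M-step: weighted maximum likelihood estimate of theta = (pi, mu, sigma)
    Nk = q.sum(axis=0)
    pi = Nk / len(y)
    mu = (q * y[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((q * (y[:, None] - mu) ** 2).sum(axis=0) / Nk)

print(pi, mu, sigma)   # recovers roughly (0.5, 0.5), (-2, 3), (0.5, 1.0)
```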

Reformulate Policy Search as an Inference Problem

Let’s assume that:

  • Binary reward event $R$: observed variable
  • Trajectory $\tau$: unobserved (latent) variable
Maximizing the reward implies maximizing the probability of the reward event; hence, our trajectory distribution $p_\theta(\tau)$ needs to assign high probability to trajectories with a high reward probability $p(R = 1 \mid \tau)$.

We would like to find a parameter vector $\theta$ that maximizes the (log-)probability of the reward event, i.e.

$$\log p_\theta(R) = \log \int p(R \mid \tau)\, p_\theta(\tau)\, d\tau$$
As in the standard EM algorithm, a variational distribution $q(\tau)$ is used to decompose the log-marginal likelihood into two terms:
$$\log p_\theta(R) = \mathcal{L}_\theta(q) + \mathrm{KL}\big(q(\tau)\,\|\,p_\theta(\tau \mid R)\big)$$
where the reward-weighted trajectory distribution is
$$p_\theta(\tau \mid R) = \frac{p(R \mid \tau)\, p_\theta(\tau)}{p_\theta(R)} = \frac{p(R \mid \tau)\, p_\theta(\tau)}{\int p(R \mid \tau)\, p_\theta(\tau)\, d\tau} \propto p(R \mid \tau)\, p_\theta(\tau)$$
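
In practice the reward event is often defined through a transformation of the return, e.g. $p(R = 1 \mid \tau) \propto \exp(\beta R(\tau))$; this transformation and the inverse temperature $\beta$ are modelling choices, not part of the derivation above. A small sketch of how the resulting normalized weights could be computed from sampled returns:

```python
import numpy as np

# Sketch: turn sampled returns R(tau^[i]) into normalized weights q(tau^[i]).
# Assumption (not part of the derivation above): the reward event is defined via
# the common exponential transformation p(R = 1 | tau) ∝ exp(beta * R(tau)).
def reward_weights(returns, beta=5.0):
    r = np.asarray(returns, dtype=float)
    w = np.exp(beta * (r - r.max()))   # subtract the max for numerical stability
    return w / w.sum()                 # normalized weights over the sampled episodes

print(reward_weights([1.0, 0.5, 0.9, 0.2]))   # the best return gets the largest weight
```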

E-Step

Update $q(\tau)$: minimize $\mathrm{KL}(q(\tau)\,\|\,p_\theta(\tau \mid R))$, i.e. set $q(\tau) = p_\theta(\tau \mid R)$.

M-Step

$$
\begin{aligned}
\theta_{\text{new}} &= \arg\max_\theta \mathcal{L}_\theta(q) = \arg\max_\theta \int q(\tau) \log \frac{p_\theta(R, \tau)}{q(\tau)}\, d\tau \\
&= \arg\max_\theta \int q(\tau) \log p_\theta(R, \tau)\, d\tau + H(q) \\
&= \arg\max_\theta \underbrace{\int q(\tau) \log\big(p(R \mid \tau)\, p_\theta(\tau)\big)\, d\tau}_{Q_\theta(q)} \\
&= \arg\max_\theta \int q(\tau) \log p_\theta(\tau)\, d\tau + f(q) \\
&= \arg\min_\theta \int q(\tau) \big(-\log p_\theta(\tau)\big)\, d\tau \\
&= \arg\min_\theta \left[ \int q(\tau) \log \frac{q(\tau)}{p_\theta(\tau)}\, d\tau + \int q(\tau) \log \frac{1}{q(\tau)}\, d\tau \right] \\
&= \arg\min_\theta \mathrm{KL}\big(q(\tau)\,\|\,p_\theta(\tau)\big)
\end{aligned}
$$
i.e.
Update $\theta$: maximize $Q_\theta(q)$, which is equivalent to minimizing $\mathrm{KL}(q(\tau)\,\|\,p_\theta(\tau))$.

EM-based Policy Search Algorithms

The MC-EM algorithm uses a sample-based approximation of the variational distribution $q$, i.e. in the E-step, MC-EM minimizes the KL divergence $\mathrm{KL}(q(Z)\,\|\,p_\theta(Z \mid Y))$ by using samples $Z_j \sim p_\theta(Z \mid Y)$. Subsequently, these samples $Z_j$ are used to estimate the expectation of the complete-data log-likelihood:

$$Q_\theta(q) \approx \sum_{j=1}^{K} \log p_\theta(Y, Z_j)$$
In terms of policy search, MC-EM methods use samples $\tau^{[i]}$ from the old trajectory distribution $p_{\theta_{\text{old}}}(\tau)$ to represent the variational distribution $q(\tau) \propto p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)$ over trajectories. As $\tau^{[i]}$ has already been sampled from $p_{\theta_{\text{old}}}(\tau)$, each sample only needs to be weighted by $q(\tau^{[i]}) \propto p(R \mid \tau^{[i]})$. Consequently, in the M-step, we maximize w.r.t. the new parameters $\theta$:
$$Q_{\theta_{\text{old}}}(\theta) = \sum_{\tau^{[i]} \sim p_{\theta_{\text{old}}}(\tau)} p(R \mid \tau^{[i]})\, \log p_\theta(\tau^{[i]})$$

There are episode-based EM algorithms such as Reward-Weighted Regression (RWR) and Cost-regularized Kernel Regression (CrKR), and step-based EM algorithms such as episodic Reward-Weighted Regression (eRWR) and Policy Learning by Weighting Exploration with the Returns (PoWER).
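
A rough sketch in the spirit of such episode-based updates (e.g. RWR): sample policy parameters from a Gaussian search distribution (the "old" policy), weight each sample by an exponentially transformed return, and refit the Gaussian by weighted maximum likelihood, i.e. the closed-form M-step. The quadratic toy return, the temperature $\beta$ and all other constants are assumptions for illustration only:

```python
import numpy as np

# Rough sketch of an episode-based, RWR-flavored EM update (toy illustration).
rng = np.random.default_rng(2)

def toy_return(omega):                 # toy return with its maximum at omega = (1, -2)
    return -np.sum((omega - np.array([1.0, -2.0])) ** 2)

mu, cov = np.zeros(2), np.eye(2)       # old policy p_theta(omega) = N(mu, cov)
beta = 3.0                             # assumed reward transformation temperature
for it in range(50):
    omegas = rng.multivariate_normal(mu, cov, size=100)       # sampled "episodes"
    returns = np.array([toy_return(o) for o in omegas])
    w = np.exp(beta * (returns - returns.max()))              # q ∝ p(R | tau)
    w /= w.sum()
    # M-step: weighted maximum likelihood estimate of the new Gaussian policy
    mu = w @ omegas
    diff = omegas - mu
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)     # small regularizer

print(mu)                              # should end up close to (1, -2)
```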

Variational Inference-based Methods

The MC-EM approach uses a weighted maximum likelihood estimate to obtain the new parameters $\theta$ of the policy. As a consequence, it averages over several modes of the reward function. Such behavior might result in slow convergence to good policies, as the average of several modes might lie in an area of low reward.

The maximization used for the MC-EM approach is equivalent to minimizing:

$$\mathrm{KL}\big(p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)\,\big\|\,p_\theta(\tau)\big) = \int p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)\, \log \frac{p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)}{p_\theta(\tau)}\, d\tau$$
w.r.t. the parameter $\theta$. This minimization is also called the moment projection (M-projection) of the reward-weighted trajectory distribution, as it matches the moments of $p_\theta(\tau)$ with the moments of $p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)$.

Alternatively, we can use the information projection (I-projection) $\arg\min_\theta \mathrm{KL}\big(p_\theta(\tau)\,\|\,p(R \mid \tau)\, p_{\theta_{\text{old}}}(\tau)\big)$ to update the policy. This projection forces the new trajectory distribution $p_\theta(\tau)$ to be zero wherever the reward-weighted trajectory distribution is zero, so it concentrates on a single mode instead of averaging over several modes.
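
The following toy 1D computation (my own illustration, not from the survey) contrasts the two projections when fitting a single Gaussian to a bimodal reward-weighted distribution: the M-projection matches the moments and lands between the modes, while the I-projection concentrates on one mode:

```python
import numpy as np
from scipy.stats import norm

# Toy 1D illustration: fit a single Gaussian to a bimodal reward-weighted target,
# once by moment matching (M-projection) and once by minimizing KL(q || target)
# numerically on a grid (I-projection).
x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
target = 0.5 * norm.pdf(x, -3, 0.6) + 0.5 * norm.pdf(x, 3, 0.6)   # two reward modes
target /= target.sum() * dx

# M-projection: match the moments -> the mean lands between the two modes
m_mean = np.sum(x * target) * dx
m_std = np.sqrt(np.sum((x - m_mean) ** 2 * target) * dx)

# I-projection: brute-force search over (mean, std); the optimum avoids regions
# where the target is (near) zero and therefore concentrates on a single mode
best = (np.inf, None, None)
for mean in np.linspace(-6, 6, 61):
    for std in np.linspace(0.3, 4.0, 38):
        q = norm.pdf(x, mean, std)
        q /= q.sum() * dx
        kl = np.sum(q * (np.log(q + 1e-300) - np.log(target + 1e-300))) * dx
        if kl < best[0]:
            best = (kl, mean, std)

print("M-projection:", m_mean, m_std)     # mean ~ 0, std ~ 3: averages over both modes
print("I-projection:", best[1], best[2])  # mean ~ +/-3, std ~ 0.6: a single mode
```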

  • Thanks to J. Peters et al. for their great work A Survey on Policy Search for Robotics.
  • Thanks to Zhou Zhihua, Machine Learning (《機器學習》), Tsinghua University Press.