Policy gradient methods require the user to specify a learning rate, which can be problematic and often results in an unstable learning process or slow convergence. By formulating policy search as an inference problem with latent variables and using the EM algorithm to infer a new policy, this problem can be avoided, since no learning rate is required.
The standard EM algorithm, well known for determining the maximum likelihood solution of a probabilistic latent variable model, takes the parameter update as a weighted maximum likelihood estimate, which has a closed-form solution for most commonly used policies.
Let’s assume that:

- $y$: observed random variable
- $z$: unobserved (latent) random variable
- $p_\theta(y, z)$: parameterized joint distribution
Given a data set $Y = [y^{[1]}, \ldots, y^{[N]}]^T$, we want to estimate the parameter $\theta$, which means maximizing the log-likelihood:
$$\max_\theta \log p_\theta(Y, Z)$$
Since $Z$ is a latent variable, we cannot solve this maximization problem directly. But by integrating out $Z$, we can maximize the log-marginal likelihood of $Y$:
$$\log p_\theta(Y) = \sum_{i=1}^{N} \log p_\theta(y^{[i]}) = \sum_{i=1}^{N} \log \int p_\theta(y^{[i]}, z)\, dz$$
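To make the integral concrete, here is a numerical sketch for an assumed toy model ($z \sim \mathcal{N}(0,1)$, $y|z \sim \mathcal{N}(z,1)$; the model choice is an illustration, not from the text). For this model the marginal happens to be known in closed form ($y \sim \mathcal{N}(0, 2)$), so the quadrature can be checked:

```python
import numpy as np

# Toy latent-variable model (an assumption for illustration):
# z ~ N(0, 1), y | z ~ N(z, 1)  =>  marginally y ~ N(0, 2).
z = np.linspace(-10, 10, 2001)
dz = z[1] - z[0]

def log_marginal(y):
    # log p(y) = log ∫ p(y, z) dz, evaluated by simple quadrature.
    joint = (np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)) * \
            (np.exp(-0.5 * (y - z) ** 2) / np.sqrt(2 * np.pi))
    return np.log(np.sum(joint) * dz)

y = 1.3
exact = -0.5 * np.log(2 * np.pi * 2.0) - y ** 2 / 4.0  # log N(y | 0, 2)
print(log_marginal(y), exact)  # the two values agree
```

For general models no such closed-form marginal exists, which is exactly the difficulty the next paragraph raises.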
However, in general we cannot obtain a closed-form solution for the parameter $\theta$ of our probability model $p_\theta(y, z)$. What is a closed-form solution?
An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally-accepted set. For example, an infinite sum would generally not be considered closed-form. However, the choice of what to call closed-form and what not is rather arbitrary since a new “closed-form” function could simply be defined in terms of the infinite sum.
EM (Expectation-Maximization) is a powerful method for estimating the parameters of latent variable models. The basic idea is: if the parameter $\theta$ is known, we can estimate the optimal latent variable $Z$ in view of $Y$ (E-step); if the latent variable $Z$ is known, we can estimate $\theta$ by maximum likelihood estimation (M-step). The EM method can be seen as a kind of coordinate ascent that maximizes a lower bound of the log-likelihood.
The iterative procedure for maximizing the log-likelihood consists of the two steps mentioned above: the Expectation step and the Maximization step. Assume that we begin at an initial parameter $\theta_0$. Then we execute the following iterative steps:
1. Based on $\theta_t$, estimate the expectation of the latent variable $Z_t$.
2. Based on $Y$ and $Z_t$, estimate the parameter $\theta_{t+1}$ by maximum likelihood estimation.
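The two steps above can be sketched on a toy one-dimensional mixture of two Gaussians (the component count, fixed unit variances, and synthetic data are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: mixture of two 1-D Gaussians with unknown means.
Y = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

mu = np.array([-1.0, 1.0])   # initial theta_0 (component means; variances fixed at 1)
pi = np.array([0.5, 0.5])    # mixing weights

for t in range(50):
    # E-step: posterior responsibilities p(z | y, theta_t) over components.
    log_lik = -0.5 * (Y[:, None] - mu[None, :]) ** 2   # log N(y | mu_k, 1) up to a const
    resp = pi * np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted maximum likelihood estimate of theta_{t+1} (closed form).
    Nk = resp.sum(axis=0)
    mu = (resp * Y[:, None]).sum(axis=0) / Nk
    pi = Nk / len(Y)

print(np.sort(mu))  # the estimated means should approach the true values -2 and 3
```

Note that the M-step here is a weighted average, i.e. exactly the kind of weighted maximum likelihood estimate with a closed-form solution mentioned in the introduction.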
In general, we need not the expectation of $Z$ but the distribution of $Z$, i.e. $p_{\theta_t}(Z|Y)$. To be specific, let’s introduce an auxiliary (variational) distribution $q(Z)$ to decompose the log-marginal likelihood by using the identity $p_\theta(Y) = p_\theta(Y, Z)/p_\theta(Z|Y)$:

$$\log p_\theta(Y) = \mathcal{L}_\theta(q) + \mathrm{KL}(q(Z) \,\|\, p_\theta(Z|Y)), \quad \text{where} \quad \mathcal{L}_\theta(q) = \int q(Z) \log \frac{p_\theta(Y, Z)}{q(Z)}\, dZ$$
Since the KL divergence is always greater than or equal to zero, the term $\mathcal{L}_\theta(q)$ is a lower bound of the log-marginal likelihood.
E-Step
In the E-step we update the variational distribution $q(Z)$ by minimizing the KL divergence $\mathrm{KL}(q(Z) \,\|\, p_\theta(Z|Y))$, i.e. setting $q(Z) = p_\theta(Z|Y)$. Note that the value of the log-likelihood $\log p_\theta(Y)$ does not depend on the variational distribution $q(Z)$. In summary, the E-step:
$$\text{Update } q(Z) \;\Leftarrow\; \text{Minimize } \mathrm{KL}(q(Z) \,\|\, p_\theta(Z|Y)) \;\Leftarrow\; \text{Set } q(Z) = p_\theta(Z|Y)$$
M-Step
In the M-step we optimize the lower bound w.r.t. $\theta$, i.e.

$$\theta_{t+1} = \arg\max_\theta \mathcal{L}_\theta(q) = \arg\max_\theta \int q(Z) \log p_\theta(Y, Z)\, dZ + H(q) = \arg\max_\theta Q_\theta(q) + H(q)$$

where $H(q)$ denotes the entropy of $q$ and $Q_\theta(q) = \int q(Z) \log p_\theta(Y, Z)\, dZ$ is the expected complete-data log-likelihood. The log now acts on the joint distribution directly, so the M-step can often be obtained in closed form. Moreover, since the E-step makes the lower bound tight, each EM iteration is guaranteed not to decrease the log-marginal likelihood.
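As a concrete instance of a closed-form M-step, consider maximizing $\sum_i w_i \log \mathcal{N}(y_i \mid \mu, \sigma^2)$ for a single Gaussian (a hypothetical example, not tied to any specific policy class): setting the gradients to zero yields the weighted sample mean and variance directly.

```python
import numpy as np

def weighted_gaussian_mle(y, w):
    """Closed-form M-step: maximize sum_i w_i * log N(y_i | mu, sigma^2).

    Setting the gradient w.r.t. mu and sigma^2 to zero gives the
    weighted sample mean and the weighted sample variance.
    """
    w = w / w.sum()
    mu = np.sum(w * y)
    var = np.sum(w * (y - mu) ** 2)
    return mu, var

y = np.array([0.0, 1.0, 2.0, 10.0])
w = np.array([1.0, 1.0, 1.0, 0.0])   # zero weight discards the outlier
mu, var = weighted_gaussian_mle(y, w)
print(mu, var)  # 1.0 0.666...
```

Because the log acts on the joint (here: a single Gaussian term per sample), no iterative optimizer or learning rate is needed, which is the key advantage claimed at the start of this article.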
In inference-based policy search, the reward is turned into a binary "reward event" $R$ with probability $p(R = 1|\tau)$ for a trajectory $\tau$. Maximizing the reward implies maximizing the probability of the reward event and, hence, our trajectory distribution $p_\theta(\tau)$ needs to assign high probability to trajectories with high reward probability $p(R = 1|\tau)$.
We would like to find a parameter vector θ that maximizes the probability of the reward event, i.e.
$$\log p_\theta(R) = \log \int p(R|\tau)\, p_\theta(\tau)\, d\tau$$
As in the standard EM algorithm, a variational distribution $q(\tau)$ is used to decompose the log-marginal likelihood into two terms:
$$\log p_\theta(R) = \mathcal{L}_\theta(q) + \mathrm{KL}(q(\tau) \,\|\, p_\theta(\tau|R))$$
where the reward-weighted trajectory distribution is given by

$$p_\theta(\tau|R) = \frac{p(R|\tau)\, p_\theta(\tau)}{p_\theta(R)} \propto p(R|\tau)\, p_\theta(\tau)$$
The MC-EM algorithm uses a sample-based approximation for the variational distribution $q$, i.e. in the E-step, MC-EM minimizes the KL divergence $\mathrm{KL}(q(Z) \,\|\, p_\theta(Z|Y))$ by using samples $Z_j \sim p_\theta(Z|Y)$. Subsequently, these samples $Z_j$ are used to estimate the expectation of the complete-data log-likelihood:
$$Q_\theta(q) = \sum_{j=1}^{K} \log p_\theta(Y, Z_j)$$
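The sample-based E-step and the closed-form M-step can be sketched on an assumed toy model ($z \sim \mathcal{N}(\theta, 1)$, $y|z \sim \mathcal{N}(z, 1)$, chosen here because its posterior is Gaussian and easy to sample; this example is not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
y = 4.0        # single observation; toy model: z ~ N(theta, 1), y | z ~ N(z, 1)
theta = 0.0
K = 2000

for t in range(20):
    # MC E-step: sample Z_j ~ p(z | y, theta).
    # For this Gaussian model the posterior is N((theta + y) / 2, 1/2).
    z = rng.normal((theta + y) / 2.0, np.sqrt(0.5), size=K)
    # M-step: argmax_theta sum_j log p(y, z_j | theta) is the sample mean,
    # since only the log N(z_j | theta, 1) terms depend on theta.
    theta = z.mean()

print(theta)  # iterates toward the ML solution theta* = y = 4
```

Each iteration halves the distance to the optimum (up to Monte-Carlo noise), illustrating that MC-EM inherits EM's monotone-improvement behavior while only requiring posterior samples.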
In terms of policy search, MC-EM methods use samples $\tau^{[i]}$ from the old trajectory distribution $p_{\theta'}$ to represent the variational distribution $q(\tau) \propto p(R|\tau)\, p_{\theta'}(\tau)$ over trajectories. As $\tau^{[i]}$ has already been sampled from $p_{\theta'}(\tau)$, $q(\tau^{[i]}) \propto p(R|\tau^{[i]})$. Consequently, in the M-step, we maximize:
$$Q_\theta(\theta') = \sum_{\tau^{[i]} \sim p_{\theta'}(\tau)} p(R|\tau^{[i]}) \log p_\theta(\tau^{[i]})$$
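A minimal episode-based sketch of this update, in the spirit of reward-weighted regression (the one-dimensional parameter space, the Gaussian search distribution, and the reward function with its optimum at 2 are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def reward(theta):
    # Hypothetical episodic reward probability p(R | tau); optimum at theta = 2.
    return np.exp(-(theta - 2.0) ** 2)

mu, sigma = 0.0, 1.0   # Gaussian search distribution over policy parameters
for t in range(30):
    samples = rng.normal(mu, sigma, size=200)   # tau[i] ~ p_theta'(tau)
    w = reward(samples)                         # q(tau[i]) proportional to p(R | tau[i])
    w /= w.sum()
    # M-step: weighted maximum likelihood estimate of the Gaussian (closed form).
    mu = np.sum(w * samples)
    sigma = np.sqrt(np.sum(w * (samples - mu) ** 2)) + 1e-3  # small floor for stability

print(mu)  # moves toward the high-reward region around 2
```

Each iteration samples from the current distribution, reweights by reward, and refits the distribution by weighted maximum likelihood, exactly the structure of the $Q_\theta(\theta')$ maximization above.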
There are episode-based EM algorithms such as Reward-Weighted Regression (RWR) and Cost-Regularized Kernel Regression (CrKR), and step-based EM algorithms such as Episodic Reward-Weighted Regression (eRWR) and Policy Learning by Weighting Exploration with the Returns (PoWER).
Variational Inference-based Methods
The MC-EM approach uses a weighted maximum likelihood estimate to obtain the new parameters $\theta$ of the policy, and therefore averages over several modes of the reward function. Such behavior might result in slow convergence to good policies, as the average of several modes might lie in an area of low reward.
The maximization used for the MC-EM approach is equivalent to minimizing

$$\mathrm{KL}(p(R|\tau)\, p_{\theta'}(\tau) \,\|\, p_\theta(\tau))$$

w.r.t. the parameter $\theta$. This minimization is also called the Moment Projection of the reward-weighted trajectory distribution, as it matches the moments of $p_\theta(\tau)$ with the moments of $p(R|\tau)\, p_{\theta'}(\tau)$.
Alternatively, we can use the Information Projection $\arg\min_\theta \mathrm{KL}(p_\theta(\tau) \,\|\, p(R|\tau)\, p_{\theta'}(\tau))$ to update the policy. This projection forces the new trajectory distribution $p_\theta(\tau)$ to be zero everywhere the reward-weighted trajectory distribution is zero.
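The contrast between the two projections can be demonstrated numerically. The sketch below (a constructed example; the bimodal target and the Gaussian model family are assumptions) fits the mean of a unit-variance Gaussian to a bimodal distribution under each projection: the M-projection averages the modes, while the I-projection locks onto one of them.

```python
import numpy as np

x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
eps = 1e-12

def normal(x, mu, s):
    p = np.exp(-0.5 * ((x - mu) / s) ** 2)
    return p / (p.sum() * dx)   # normalized on the grid

def kl(p, q):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps))) * dx

# Bimodal "reward-weighted" target with modes at -2 and +2.
target = 0.5 * normal(x, -2.0, 0.5) + 0.5 * normal(x, 2.0, 0.5)

mus = np.linspace(-4, 4, 401)
# M-projection: argmin_mu KL(target || N(mu, 1)) -> averages the modes.
m_proj = mus[np.argmin([kl(target, normal(x, m, 1.0)) for m in mus])]
# I-projection: argmin_mu KL(N(mu, 1) || target) -> picks a single mode.
i_proj = mus[np.argmin([kl(normal(x, m, 1.0), target) for m in mus])]

print(m_proj, i_proj)
```

The M-projection mean lands near 0, i.e. in the low-reward valley between the modes, which is precisely the slow-convergence problem described above; the I-projection mean lands near one of the modes at $\pm 2$.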
Thanks to J. Peters et al. for their great work, *A Survey on Policy Search for Robotics*.