Analyzing "Gradient Accumulation and Asynchronous Updates in Distributed TensorFlow" through the A3C Code from Deep Reinforcement Learning

Introduction

Asynchronous Advantage Actor-Critic (A3C)

As we know, methods that update the policy directly iterate very slowly. To make fuller use of the available compute, the Asynchronous Advantage Actor-Critic method was introduced:

As can be seen, there is one global (master) network plus many workers, and each worker is itself an A2C net. A3C boils down to two main operations, pull and push (both sketched right after this list):
pull: copy the global network's parameters directly into the worker's network
push: use each worker's gradients to update the global network's parameters
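In TensorFlow 1.x these two operations boil down to parameter assignments and an apply_gradients call. A minimal sketch (local_params, global_params, local_grads and optimizer are placeholder names; the real version lives in the sync scope of the full code below):

# pull: copy the global parameters into the local (worker) network
pull_op = [l_p.assign(g_p) for l_p, g_p in zip(local_params, global_params)]
# push: apply the worker's gradients to the global parameters
push_op = optimizer.apply_gradients(zip(local_grads, global_params))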

The A3C implementation used here is available at: https://github.com/princewen/tensorflow_practice/tree/master/RL/Basic-A3C-Demo

A3C Algorithm Flow

Full Code

# _*_ coding:utf-8 _*_
# !/usr/bin/python


import multiprocessing
import threading
import tensorflow as tf
import numpy as np
import gym
import os
import shutil
import matplotlib.pyplot as plt

GAME = 'Pendulum-v0'
OUTPUT_GRAPH = True
LOG_DIR = './log'
N_WORKERS = multiprocessing.cpu_count()
MAX_EP_STEP = 200
MAX_GLOBAL_EP = 2000
GLOBAL_NET_SCOPE = 'Global_Net'
UPDATE_GLOBAL_ITER = 10
GAMMA = 0.9
ENTROPY_BETA = 0.01
LR_A = 0.0001    # learning rate for actor
LR_C = 0.001    # learning rate for critic
GLOBAL_RUNNING_R = []
GLOBAL_EP = 0

env = gym.make(GAME)

N_S = env.observation_space.shape[0]
N_A = env.action_space.shape[0]
A_BOUND = [env.action_space.low, env.action_space.high]


# This class can be used to build the global net,
# and also a worker's net, because they share the same structure,
# so the class can be reused for both.
class ACNet(object):
    def __init__(self, scope, globalAC=None):
        '''
        # When creating a worker net, we pass in the previously created globalAC.
        if this is the global net:   # decide whether the net being built is global or local
        with tf.variable_scope('Global_Net'):
            self._build_net()
        else:
        with tf.variable_scope('worker'):
            self._build_net()

        # Then compute the critic loss and the actor loss,
        # and use these two losses to compute the gradients to be pushed.

        with tf.name_scope('sync'):  # synchronization
            with tf.name_scope('pull'):
                # fetch the global parameters
            with tf.name_scope('push'):
                # push the updates to the global net
        '''
        if scope == GLOBAL_NET_SCOPE:   # get global network
            with tf.variable_scope(scope):
                self.s = tf.placeholder(tf.float32, [None, N_S], 'S')
                self.a_params, self.c_params = self._build_net(scope)[-2:]
        else:   # local net, calculate losses
            with tf.variable_scope(scope):
                self.s = tf.placeholder(tf.float32, [None, N_S], 'S')
                self.a_his = tf.placeholder(tf.float32, [None, N_A], 'A')
                self.v_target = tf.placeholder(tf.float32, [None, 1], 'Vtarget')

                mu, sigma, self.v, self.a_params, self.c_params = self._build_net(scope)

                td = tf.subtract(self.v_target, self.v, name='TD_error')
                with tf.name_scope('c_loss'):
                    self.c_loss = tf.reduce_mean(tf.square(td))

                with tf.name_scope('wrap_a_out'):
                    mu, sigma = mu * A_BOUND[1], sigma + 1e-4

                normal_dist = tf.distributions.Normal(mu, sigma)

                with tf.name_scope('a_loss'):
                    log_prob = normal_dist.log_prob(self.a_his)
                    exp_v = log_prob * td
                    entropy = normal_dist.entropy()  # encourage exploration
                    self.exp_v = ENTROPY_BETA * entropy + exp_v
                    self.a_loss = tf.reduce_mean(-self.exp_v)

                with tf.name_scope('choose_a'):  # use local params to choose action
                    self.A = tf.clip_by_value(tf.squeeze(normal_dist.sample(1), axis=0), A_BOUND[0], A_BOUND[1])
                with tf.name_scope('local_grad'):
                    self.a_grads = tf.gradients(self.a_loss, self.a_params)
                    self.c_grads = tf.gradients(self.c_loss, self.c_params)

            with tf.name_scope('sync'):  # A3C has two main operations, pull and push:
                with tf.name_scope('pull'):  # pull: copy the global net's parameters into the worker's net
                    self.pull_a_params_op = [l_p.assign(g_p) for l_p, g_p in zip(self.a_params, globalAC.a_params)]
                    self.pull_c_params_op = [l_p.assign(g_p) for l_p, g_p in zip(self.c_params, globalAC.c_params)]
                with tf.name_scope('push'):  # push: use the worker's gradients to update the global net's parameters
                    self.update_a_op = OPT_A.apply_gradients(zip(self.a_grads, globalAC.a_params))
                    self.update_c_op = OPT_C.apply_gradients(zip(self.c_grads, globalAC.c_params))



    def _build_net(self, scope):
        # Build the Actor and Critic networks here.
        # Returns: mean (mu), standard deviation (sigma), and the state value.
        w_init = tf.random_normal_initializer(0., .1)
        with tf.variable_scope('actor'):
            l_a = tf.layers.dense(self.s, 200, tf.nn.relu6, kernel_initializer=w_init, name='la')
            mu = tf.layers.dense(l_a, N_A, tf.nn.tanh, kernel_initializer=w_init, name='mu')
            sigma = tf.layers.dense(l_a, N_A, tf.nn.softplus, kernel_initializer=w_init, name='sigma')
        with tf.variable_scope('critic'):
            l_c = tf.layers.dense(self.s, 100, tf.nn.relu6, kernel_initializer=w_init, name='lc')
            v = tf.layers.dense(l_c, 1, kernel_initializer=w_init, name='v')  # state value
        a_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/actor')
        c_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope + '/critic')
        return mu, sigma, v, a_params, c_params

    def update_global(self, feed_dict):
        # perform the push operation
        SESS.run([self.update_a_op, self.update_c_op], feed_dict)  # apply the local grads to the global net

    def pull_global(self):
        # perform the pull operation
        SESS.run([self.pull_a_params_op, self.pull_c_params_op])

    def choose_action(self, s):
        # choose an action given state s
        s = s[np.newaxis, :]
        return SESS.run(self.A, {self.s: s})[0]


class Worker(object):
    def __init__(self, name, globalAC):
        self.env = gym.make(GAME).unwrapped  # each worker creates its own environment
        self.name = name  # this worker's name
        self.AC = ACNet(name, globalAC)  # this worker's local net, bound to globalAC

    def work(self):
        # buffers for s, a, r, used for the n-step update
        global GLOBAL_RUNNING_R, GLOBAL_EP
        total_step = 1
        buffer_s, buffer_a, buffer_r = [], [], []
        while not COORD.should_stop() and GLOBAL_EP < MAX_GLOBAL_EP:
            s = self.env.reset()
            ep_r = 0
            for ep_t in range(MAX_EP_STEP):
                if self.name == 'W_0':    # only render the first worker
                    self.env.render()

                a = self.AC.choose_action(s)
                s_, r, done, info = self.env.step(a)
                done = True if ep_t == MAX_EP_STEP - 1 else False

                ep_r += r
                buffer_s.append(s)  # append to the buffers
                buffer_a.append(a)
                buffer_r.append((r + 8) / 8)  # normalize

                # sync every UPDATE_GLOBAL_ITER steps, or when the episode ends
                if total_step % UPDATE_GLOBAL_ITER == 0 or done:
                    # get the value of the next state, used to compute the TD error
                    if done:
                        v_s_ = 0  # terminal state: the expected future return is 0
                    else:
                        v_s_ = SESS.run(self.AC.v, {self.AC.s: s_[np.newaxis, :]})[0, 0]

                    buffer_v_target = []  # buffer of discounted value targets, used to compute the TD error
                    for r in buffer_r[::-1]:  # n-step forward view: iterate the rewards backwards
                        v_s_ = r + GAMMA * v_s_
                        buffer_v_target.append(v_s_)
                    buffer_v_target.reverse()

                    buffer_s, buffer_a, buffer_v_target = np.vstack(buffer_s), np.vstack(buffer_a), np.vstack(
                        buffer_v_target)

                    feed_dict = {
                        self.AC.s: buffer_s,
                        self.AC.a_his: buffer_a,
                        self.AC.v_target: buffer_v_target,
                    }

                    self.AC.update_global(feed_dict)  # push the update to the global AC
                    buffer_s, buffer_a, buffer_r = [], [], []  # clear the buffers
                    self.AC.pull_global()  # pull the latest parameters from the global AC

                s = s_
                total_step += 1
                if done:
                    if len(GLOBAL_RUNNING_R) == 0:  # record running episode reward
                        GLOBAL_RUNNING_R.append(ep_r)
                    else:
                        GLOBAL_RUNNING_R.append(0.9 * GLOBAL_RUNNING_R[-1] + 0.1 * ep_r)
                    print(
                        self.name,
                        "Ep:", GLOBAL_EP,
                        "| Ep_r: %i" % GLOBAL_RUNNING_R[-1],
                    )

                    GLOBAL_EP += 1  # one more episode finished
                    break  # end this episode

if __name__ == "__main__":
    SESS = tf.Session()
    with tf.device("/cpu:0"):
        OPT_A = tf.train.RMSPropOptimizer(LR_A, name='RMSPropA')
        OPT_C = tf.train.RMSPropOptimizer(LR_C, name='RMSPropC')
        GLOBAL_AC = ACNet(GLOBAL_NET_SCOPE)  # we only need its params
        workers = []
        # Create worker
        for i in range(N_WORKERS):
            i_name = 'W_%i' % i   # worker name
            workers.append(Worker(i_name, GLOBAL_AC))

    COORD = tf.train.Coordinator()   # tool for coordinating the worker threads
    SESS.run(tf.global_variables_initializer())

    if OUTPUT_GRAPH:
        if os.path.exists(LOG_DIR):
            shutil.rmtree(LOG_DIR)
        tf.summary.FileWriter(LOG_DIR, SESS.graph)

    worker_threads = []
    for worker in workers:
        job = lambda: worker.work()
        t = threading.Thread(target=job)
        t.start()
        worker_threads.append(t)
    COORD.join(worker_threads)  # block the main thread until all workers finish

    plt.plot(np.arange(len(GLOBAL_RUNNING_R)), GLOBAL_RUNNING_R)
    plt.xlabel('step')
    plt.ylabel('Total moving reward')
    plt.show()

    '''
    # Workers run in parallel
    with tf.device("/cpu:0"):
        GLOBAL_AC = ACNet(GLOBAL_NET_SCOPE)  # build the global AC
        workers = []
        for i in range(N_WORKERS):  # create the workers, to be run in parallel afterwards
            workers.append(Worker(GLOBAL_AC))   # every worker shares this global AC

    COORD = tf.train.Coordinator()  # TensorFlow's tool for coordinating parallel threads

    worker_threads = []
    for worker in workers:
        job = lambda: worker.work()
        t = threading.Thread(target=job)    # add one worker thread
        t.start()
        worker_threads.append(t)
    COORD.join(worker_threads)  # TF thread coordination
    '''

Gradient Accumulation and Asynchronous Updates in Distributed TensorFlow

While recently implementing the algorithm from the A3C paper [1], I found that there is still not much material online explaining how to do gradient accumulation, nor any experimental verification of asynchronous updates in distributed TensorFlow. So I have written up the small investigation I did; corrections are welcome.

I. Problem Description

The asynchronous methods use a target network to keep the network from changing too quickly. At the start of a training epoch each learner copies the target network's weights; after training for a while it accumulates its gradients and uses them to update the target network, which ends the epoch. Below is the update rule for n-step Q-learning:
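The figure with the update rule is not reproduced here. Roughly, in the notation of [1] (paraphrased, so treat this as a sketch rather than a verbatim quote), the n-step Q-learning learner accumulates the gradient of the n-step TD error with respect to its own copy of the parameters, and only afterwards applies the accumulated dθ to the shared/target parameters:

$$R = \sum_{k=0}^{n-1} \gamma^{k} r_{t+k} + \gamma^{n} \max_{a} Q(s_{t+n}, a;\, \theta^{-})$$

$$d\theta \leftarrow d\theta + \frac{\partial \left( R - Q(s_t, a_t;\, \theta) \right)^2}{\partial \theta}$$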

 

Hence, in the implementation, each learner first needs to be able to fetch the target network's weights, then gradient accumulation has to be implemented inside the learner, and finally the accumulated gradient has to be returned to the target network.

 

II. Gradient Computation, Accumulation, and Update

Passing gradients between the target network and the learners is already provided out of the box by distributed TensorFlow. With between-graph replication, each thread builds its own copy of the graph and sends its results back to the main graph after computing. What mainly remains to be solved is gradient accumulation.
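For reference, here is a minimal between-graph skeleton (a sketch only: the flag names mirror the run commands in the Tips section, and the host addresses are placeholders):

import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_string('job_name', 'worker', 'either "ps" or "worker"')
flags.DEFINE_integer('task_index', 0, 'index of the task within its job')
FLAGS = flags.FLAGS

cluster = tf.train.ClusterSpec({'ps': ['localhost:2222'],
                                'worker': ['localhost:2223', 'localhost:2224']})
server = tf.train.Server(cluster, job_name=FLAGS.job_name, task_index=FLAGS.task_index)

if FLAGS.job_name == 'ps':
    server.join()  # the parameter server only hosts the shared variables
else:
    # each worker builds its own copy of the graph (between-graph replication);
    # variables are placed on the ps task, ops stay on this worker
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:%d' % FLAGS.task_index, cluster=cluster)):
        pass  # the model, loss and gradient ops from the next sections go here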

The basic idea is:

repeat:
    compute the gradient
    store the gradient
until a fixed number of steps
push the accumulated gradient back to the target network

Concretely, this uses two methods of the optimizer class, compute_gradients() and apply_gradients(). The steps are explained below.

1. Defining the Operations

# Define input and output
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, name="x")
with tf.name_scope('weights'):
    w = tf.Variable(2.0, name='target_w')
with tf.name_scope('output'):
    y = tf.multiply(x, w, name='y')  # tf.mul was renamed to tf.multiply in TF 1.0
with tf.name_scope('real_output'):
    y_ = tf.placeholder(tf.float32, name="y_")

# Define train op
with tf.name_scope('train'):
    optimizer = tf.train.GradientDescentOptimizer(LEARNING_RATE)
with tf.name_scope('gradient'):
    loss = tf.reduce_mean(tf.square(y_ - y))  # MSE loss
    gradient_all = optimizer.compute_gradients(loss)  # gradient of network (with NoneType)
    grads_vars = [v for (g, v) in gradient_all if g is not None]  # all variables that have gradients
    gradient = optimizer.compute_gradients(loss, grads_vars)  # gradient of network (without NoneType)
    grads_holder = [(tf.placeholder(tf.float32, shape=g.get_shape()), v)
                     for (g, v) in gradient]
    train_op = optimizer.apply_gradients(grads_holder)

Here y_ is the ground-truth value, y is the network's output, and loss is the MSE loss; optimizer is a plain gradient-descent optimizer. gradient_all is the gradient computed by the optimizer: compute_gradients returns a list whose elements are tuples of the form (gradient, variable). Note that if the network is not single-input/single-output (for example, an actor-critic network has two outputs), compute_gradients may return (None, v), i.e. some variables have no corresponding gradient, and these NoneType entries will cause an error in the next step. We therefore extract the variables that do have gradients, call them grads_vars, and compute the gradients once more with respect to grads_vars to obtain gradient. Finally, we create one placeholder per gradient (grads_holder) through which the accumulated gradients will be fed, and call apply_gradients on it to update the network.
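A small self-contained illustration of the (None, v) case (hypothetical variables w1 and w2; only w1 appears in the loss):

import tensorflow as tf

w1 = tf.Variable(1.0, name='w1')
w2 = tf.Variable(1.0, name='w2')        # trainable, but unused by this loss
loss = tf.square(3.0 - 2.0 * w1)

opt = tf.train.GradientDescentOptimizer(0.1)
gradient_all = opt.compute_gradients(loss)                  # [(grad_w1, w1), (None, w2)]
grads_vars = [v for g, v in gradient_all if g is not None]  # -> [w1]
gradient = opt.compute_gradients(loss, grads_vars)          # [(grad_w1, w1)], no NoneType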

Note that the weight w defined this way will cause a problem; a fix is presented in the experiments of Section III.

2. Applying the Operations

# compute and store a gradient at every step
grads = []
for i in range(THREAD_STEPS):
    x_i = ...
    y_real = ...
    y_i = sess.run(y, feed_dict={x: x_i})
    loss_i = sess.run(loss, feed_dict={x: x_i, y_: y_real})
    grad_i = sess.run(gradient, feed_dict={x: x_i, y_: y_real})
    grads.append(grad_i)

# calculate total gradients
grads_sum = {}
# add up dθ
for i in range(len(grads_holder)):
    k = grads_holder[i][0]
    grads_sum[k] = sum([g[i][0] for g in grads])

# Apply gradients
_ = sess.run(train_op, feed_dict=grads_sum)

The procedure has three steps:

Step 1: feed in x_i and y_, compute the gradient, and keep it in a list grads;

Step 2: accumulate the gradients in a dictionary. The dictionary has the form {gradient placeholder: accumulated gradient value}.

Since every element of grads is a gradient list with the same layout as grads_holder, the list of gradient values corresponding to grads_holder[i][0] is simply [g[i][0] for g in grads].

Step 3: feed the dictionary built in the previous step into apply_gradients (via train_op). Mission complete.
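To make the accumulation concrete, here is a toy check in plain Python (no TensorFlow; the string 'g_ph' stands in for the placeholder tensor), using the three gradients that worker:0 produces in its first epoch of the experiment below:

# one inner list of (grad, var) pairs per step, as compute_gradients would return
grads = [[(-0.0, 'w')], [(-18.0, 'w')], [(-32.0, 'w')]]
grads_holder = [('g_ph', 'w')]          # one (placeholder, variable) pair

grads_sum = {}
for i in range(len(grads_holder)):
    k = grads_holder[i][0]
    grads_sum[k] = sum(g[i][0] for g in grads)

print(grads_sum)  # {'g_ph': -50.0} -> with learning rate 1, w is updated from 2 to 52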

III. Experiment Setup and Results

A quick sanity check of the method above. Two workers run asynchronous between-graph training: the inputs are [0, 1, 2], the network output is y = w·x, the ground-truth output is set to 10, and the loss is the MSE. The initial value of w is 2. The optimizer is gradient descent with a learning rate of 1. The gradient is computed as:

$$\frac{\partial\,\mathrm{loss}}{\partial w} = \frac{\partial\,(y\_ - w x)^2}{\partial w} = -2\,x\,(y\_ - w x)$$

 

We set up two workers, each training for 2 epochs of 3 steps. The output is as follows:

task0 - epoch0: x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 2.0)]
task0 - epoch0: x_i:  1. y_i:  2.0. loss:  81.0. grad:  [(-18.0, 2.0)]
task0 - epoch0: x_i:  2. y_i:  4.0. loss:  64.0. grad:  [(-32.0, 2.0)]
task1 - epoch0: x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 2.0)]
Final States of w in task0 - epoch0:  52.0
task0 - epoch1: x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 52.0)]
task1 - epoch0: x_i:  1. y_i:  52.0. loss:  1681.0. grad:  [(82.0, 52.0)]
...

First, worker:0 feeds in x=0, the network returns y=0 and the computed gradient is 0; at the second step it feeds x=1, gets y=2 and a gradient of −18, and so on. When worker:0 reaches the third step of its first epoch, worker:1 starts up; note that w is still 2 at this point and the network has not changed yet, so worker:1 also gets y=0 back for input x=0.

Worker:0 then updates the gradient: the accumulated gradient is 0 + (−18) + (−32) = −50, and the new weight becomes 2 − 1·(−50) = 52. Gradient accumulation works as intended.

However, our understanding of how variables are passed between graphs was wrong: worker:1 was still inside its first epoch, yet its weight was also updated to 52 (it should have kept the value 2 it saw at the start of that epoch), and the gradient that should have been −18 became 82.

The fix is to separate the threads' weights from the target network's weights. Redefine the weights as:

with tf.name_scope('weights'):
    target_w = tf.Variable(2.0, name='target_w')
    w_list = [tf.Variable(2.0, name='target_w') for i in range(WORKER_NUM)]
    w = w_list[FLAGS.task_index]

This creates a list with one entry per thread to store each process's own weight: w_list[task_index] is the weight each process actually uses, and target_w is the target network's weight. Next, define the weight-update operations:

epoch_init = w.assign(target_w)
w_addup = tf.placeholder(tf.float32)
epoch_update = target_w.assign_add(w_addup)
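A minimal sketch of how these ops could be wired into one epoch (an assumed wiring for illustration, not the author's gist; sess is a tf.Session, and train_op / grads_sum come from Section II):

w_start = sess.run(epoch_init)    # pull: w <- target_w at the start of the epoch
# ... run THREAD_STEPS of gradient computation, then sess.run(train_op, feed_dict=grads_sum) ...
w_end = sess.run(w)
sess.run(epoch_update, feed_dict={w_addup: w_end - w_start})  # push the delta onto target_w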

Before each epoch starts we use tf.assign(ref, value) to assign target_w's value to w, and when the epoch ends we add the difference between the trained weight and the initial weight onto target_w. The experimental results are as follows:

task0 - epoch0:   x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 2.0)] 
task0 - epoch0:   x_i:  1. y_i:  2.0. loss:  81.0. grad:  [(-18.0, 2.0)] 
task1 - epoch0:   x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 2.0)] 
task0 - epoch0:   x_i:  2. y_i:  4.0. loss:  64.0. grad:  [(-32.0, 2.0)] 
task1 - epoch0:   x_i:  1. y_i:  2.0. loss:  81.0. grad:  [(-18.0, 2.0)] 
Final States of w in task0 - epoch0:  52.0
task0 - epoch1:   x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 52.0)] 
task1 - epoch0:   x_i:  2. y_i:  4.0. loss:  64.0. grad:  [(-32.0, 2.0)] 
...

We can see that worker:0 has completed one update with an accumulated gradient of −50; in epoch 1 its initial weight has become 52, while worker:1's weight is still 2.

...
task0 - epoch1:   x_i:  1. y_i:  52.0. loss:  1681.0. grad:  [(82.0, 52.0)]
Final States of w in task1 - epoch0:  52.0
task0 - epoch1:   x_i:  2. y_i:  104.0. loss:  8464.0. grad:  [(368.0, 52.0)]
Final States of w in task0 - epoch1:  -398.0
Final target_w:  -348.0
done
task1 - epoch1:   x_i:  0. y_i:  0.0. loss:  100.0. grad:  [(-0.0, 102.0)]
task1 - epoch1:   x_i:  1. y_i:  102.0. loss:  8281.0. grad:  [(182.0, 102.0)]
task1 - epoch1:   x_i:  2. y_i:  204.0. loss:  36864.0. grad:  [(768.0, 102.0)] 
Final States of w in task1 - epoch1:  -848.0
Final target_w:  -1298.0
done

From the initial weight in task1-epoch1 we can see that worker:1's new weight is 102. In task0-epoch1, worker:0 accumulated a gradient of 82 + 368 = 450, so the target_w it finally reports is 102 − 450 = −348. Worker:1 accumulated a gradient of 182 + 768 = 950 in its epoch 1, giving a final target_w of −348 − 950 = −1298. The calculation matches the output.
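Summarizing the runs above, target_w moves through the four epoch-end updates as:

$$2 \xrightarrow{+50} 52 \xrightarrow{+50} 102 \xrightarrow{-450} -348 \xrightarrow{-950} -1298$$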

With that, the full pipeline of gradient accumulation with asynchronous updates is complete. The full gist: allenwoods/async_grad_verify.py

 

References:

[1] V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” arXiv preprint arXiv:1602.01783, 2016.
[2] TensorFlow Python API: "Processing gradients before applying them."

 

Tips:

1. Using the GPU from multiple processes causes an OUT_OF_MEMORY_ERROR, because by default TF allocates all the GPU memory it can to any one process, leaving nothing for the processes after the first. There are two workarounds. One is to prepend CUDA_VISIBLE_DEVICES=9999 (or any number larger than the number of GPUs you have) to the run command to hide the GPUs; this is recommended for the ps process. The other is to add gpu_options=tf.GPUOptions(allow_growth=True) (or set per_process_gpu_memory_fraction) in the server config, so that TF does not grab all the memory at once but grows its usage as needed; the gist above contains a concrete example.
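A sketch of the second workaround (cluster and FLAGS as in the distributed script; shown only to indicate where the option is passed):

config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                         task_index=FLAGS.task_index, config=config)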

2. The commands to run are:

python  async_grad_test.py --ps_hosts=0.0.0.0:53198 --worker_hosts=0.0.0.0:58557,0.0.0.0:42832 --job_name=ps --task_index=0
python async_grad_test.py --ps_hosts=0.0.0.0:53198 --worker_hosts=0.0.0.0:58557,0.0.0.0:42832 --job_name=worker --task_index=0
python async_grad_test.py --ps_hosts=0.0.0.0:53198 --worker_hosts=0.0.0.0:58557,0.0.0.0:42832 --job_name=worker --task_index=1

 

3. The ps process cannot be stopped with Ctrl+C; it has to be killed. To shut down all training processes at once you can use:

ps -ef | grep /opt/anaconda3/bin/python| grep async_grad | awk {'print $2'} | xargs kill

/opt/anaconda3/bin/python is the Python interpreter and async_grad is the keyword matching the name of the running .py file; adjust both to your own setup.

4. Before running the gist, delete the previous records in the checkpoint directory; otherwise TF will think the task has already finished and will not do anything.
