Tips for Multi-GPU Parallel Training with TensorFlow

By default, TensorFlow occupies the memory of every visible GPU when training starts.

If someone else then wants to run a program on the other two GPUs, it will fail for lack of GPU memory. So you need to explicitly cap the memory usage or restrict the program to a single card.

I. Based on the Using GPUs section of the official TF tutorials, four approaches can be summarized:

  1. The first uses allow_growth to allocate memory at runtime. With allow_growth set to True, the TF process is given very little GPU memory at startup and grows the allocation as demand increases during the run.

    config = tf.ConfigProto()  
    config.gpu_options.allow_growth = True  
    sess = tf.Session(config=config, ...)  


  2. The second uses per_process_gpu_memory_fraction to set the fraction of memory to allocate on each visible GPU. When constructing the tf.Session(), pass a tf.GPUOptions config that explicitly states the fraction of memory to claim.

    # Tell TF it may use up to 40% of each GPU's memory
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    session = tf.Session(config=config, ...)

    This caps the memory the process may claim on each GPU, but the cap applies to all visible GPUs at once; you cannot set different limits for different cards.

  3. Before launching the training program, restrict which GPUs are visible, either at the top of the script itself:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = '0,1'

     or in the shell (e.g. in ~/.bashrc):

    export CUDA_VISIBLE_DEVICES=NUM

    NUM is the index of the card you want (0, 1, 2, ...); run nvidia-smi first to see which cards are currently free. The drawback is that this limits which GPUs the process can see: other programs launched from the same environment cannot choose a different GPU, and your own program cannot use multiple GPUs beyond the visible ones. Note also that the visible cards get renumbered, as the sketch after this list shows.

  4. Collect the idle GPUs and claim the requested number on demand:

import os
import GPUtil

g_c = 3  # number of GPUs we want
# Collect idle GPUs (load and memory usage both below 1%)
deviceIDs = GPUtil.getAvailable(order='first', limit=8, maxLoad=0.01,
                                maxMemory=0.01, includeNan=False,
                                excludeID=[], excludeUUID=[])
deviceIDs = [6] if not deviceIDs else deviceIDs  # fall back to card 6 if none are idle
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(gp) for gp in deviceIDs[:g_c])
g_c = len(deviceIDs) if len(deviceIDs) < g_c else g_c  # actual number of idle GPUs
print("free GPUs", deviceIDs)

 

II. Multi-GPU Training on MNIST

Multi-GPU parallelism falls into two broad categories, model parallelism and data parallelism. Data parallelism is the one we normally use, and it in turn comes in a synchronous and an asynchronous variant. Since machines are usually fitted with identical cards, we choose the synchronous variant here: the data is split across the cards, and once every GPU has finished computing its gradients, they are averaged and a single update is applied. The toy sketch below illustrates the idea.
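To make the synchronous update concrete, here are made-up numbers in plain numpy (not TF code): each "GPU" produces a gradient on its own slice of the batch, and the weights take one step with the average of those gradients:

import numpy as np

# Toy illustration of one synchronous update with 2 "GPUs"
grad_gpu0 = np.array([0.2, -0.4])  # gradient from GPU 0's sub-batch
grad_gpu1 = np.array([0.4,  0.0])  # gradient from GPU 1's sub-batch

avg_grad = (grad_gpu0 + grad_gpu1) / 2  # -> [0.3, -0.2]

weights = np.array([1.0, 1.0])
learning_rate = 0.1
weights -= learning_rate * avg_grad     # one synchronized step
print(weights)                          # [0.97, 1.02]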

The first thing to adapt is the data-reading code. Now that we have several cards, each card must receive different data, so when fetching a batch we enlarge it to batch_x, batch_y = mnist.train.next_batch(batch_size * num_gpus), pulling enough data in one call for every card to get batch_size examples. We then slice the fetched data; with i denoting the GPU index, each contiguous chunk of batch_size examples goes to the same GPU:

_x = X[i * batch_size:(i + 1) * batch_size]
_y = Y[i * batch_size:(i + 1) * batch_size]


Because the GPUs share one and the same graph, it is best to separate names with name_scope to avoid clashes, i.e. the following form:


for i in range(2):
    with tf.device("/gpu:%d" % i):
        with tf.name_scope("tower_%d" % i):
            _x = X[i * batch_size:(i + 1) * batch_size]
            _y = Y[i * batch_size:(i + 1) * batch_size]
            logits = conv_net(_x, dropout, reuse_vars, True)


We also need a list to hold the gradients from every GPU, plus a flag for variable reuse, so define these two values beforehand:

tower_grads = []
reuse_vars = False


With all the preparation done, we can compute the gradients on each GPU (still inside the per-GPU loop above):

opt = tf.train.AdamOptimizer(learning_rate)  # created once, before the tower loop

# ...inside the loop, right after computing logits:
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=_y, logits=logits))
grads = opt.compute_gradients(loss)
reuse_vars = True
tower_grads.append(grads)


tower_grads now stores the gradients of every variable on every GPU. Next comes averaging them; of all the multi-GPU examples out there, this is the one function that almost never changes:

def average_gradients(tower_grads):
    average_grads = []
    # zip(*) regroups the (gradient, variable) pairs so that each iteration
    # sees one variable's gradients from all GPUs
    for grad_and_vars in zip(*tower_grads):
        grads = []
        for g, _ in grad_and_vars:
            expend_g = tf.expand_dims(g, 0)  # add a leading "GPU" axis
            grads.append(expend_g)
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)       # average across GPUs
        v = grad_and_vars[0][1]              # the variable itself is shared
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads


tower_grads holds its contents in the form (gradients from GPU 0, gradients from GPU 1, ..., gradients from GPU N-1). The one thing to watch is zip(*): it transposes that list into the form ((grad0_gpu0, var0_gpu0), ..., (grad0_gpuN, var0_gpuN)), i.e. column-wise access, so each item you iterate over is a single variable's values across the different GPUs.
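To see what zip(*) does here, a tiny illustration with plain strings standing in for the gradient tensors (two GPUs, two variables):

tower_grads = [
    [("g0_gpu0", "var0"), ("g1_gpu0", "var1")],  # (grad, var) pairs from GPU 0
    [("g0_gpu1", "var0"), ("g1_gpu1", "var1")],  # (grad, var) pairs from GPU 1
]
for grad_and_vars in zip(*tower_grads):
    print(grad_and_vars)
# (('g0_gpu0', 'var0'), ('g0_gpu1', 'var0'))  <- var0 across both GPUs
# (('g1_gpu0', 'var1'), ('g1_gpu1', 'var1'))  <- var1 across both GPUs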

Finally, apply the averaged gradients:

grads = average_gradients(tower_grads)
train_op = opt.apply_gradients(grads)


The walkthrough above is somewhat fragmented, so here is the complete code for easy testing:

import time
import numpy as np
 
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.examples.tutorials.mnist import input_data
 
mnist = input_data.read_data_sets("mnist/", one_hot=True)
 
def get_available_gpus():
    """
    code from http://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow
    """
    from tensorflow.python.client import device_lib as _device_lib
    local_device_protos = _device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']
 
num_gpus = len(get_available_gpus())
print("Available GPU Number :"+str(num_gpus))
 
num_steps = 1000
learning_rate = 0.001
batch_size = 1000
display_step = 10
 
num_input = 784
num_classes = 10
 
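# Note: conv_net_with_layers below is an alternative tf.layers version of the
# network; train() only calls the slim-based conv_net. It is kept for comparison.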
def conv_net_with_layers(x,is_training,dropout = 0.75):
    with tf.variable_scope("ConvNet", reuse=tf.AUTO_REUSE):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = tf.layers.conv2d(x, 12, 5, activation=tf.nn.relu)
        x = tf.layers.max_pooling2d(x, 2, 2)
        x = tf.layers.conv2d(x, 24, 3, activation=tf.nn.relu)
        x = tf.layers.max_pooling2d(x, 2, 2)
        x = tf.layers.flatten(x)
        x = tf.layers.dense(x, 100)
        x = tf.layers.dropout(x, rate=dropout, training=is_training)
        out = tf.layers.dense(x, 10)
        out = tf.nn.softmax(out) if not is_training else out
    return out
 
def conv_net(x,is_training):
    # "updates_collections": None is very import ,without will only get 0.10
    batch_norm_params = {"is_training": is_training, "decay": 0.9, "updates_collections": None}
    #,'variables_collections': [ tf.GraphKeys.TRAINABLE_VARIABLES ]
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005),
                        normalizer_fn=slim.batch_norm, normalizer_params=batch_norm_params):
        with tf.variable_scope("ConvNet",reuse=tf.AUTO_REUSE):
            x = tf.reshape(x, [-1, 28, 28, 1])
            net = slim.conv2d(x, 6, [5,5], scope="conv_1")
            net = slim.max_pool2d(net, [2, 2],scope="pool_1")
            net = slim.conv2d(net, 12, [5,5], scope="conv_2")
            net = slim.max_pool2d(net, [2, 2], scope="pool_2")
            net = slim.flatten(net, scope="flatten")
            net = slim.fully_connected(net, 100, scope="fc")
            net = slim.dropout(net,is_training=is_training)
            net = slim.fully_connected(net, num_classes, scope="prob", activation_fn=None,normalizer_fn=None)
            return net
 
def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        grads = []
        for g, _ in grad_and_vars:
            expend_g = tf.expand_dims(g, 0)
            grads.append(expend_g)
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
 
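# Ops whose type is in PS_OPS (i.e. variables) act as "parameter server" state:
# assign_to_device pins them to the CPU so every GPU tower shares one copy of
# the weights, while all other ops run on the tower's GPU.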
PS_OPS = ['Variable', 'VariableV2', 'AutoReloadVariable']
def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op in PS_OPS:
            # pin variable ops to the parameter-server device
            return ps_device
        else:
            return device
 
    return _assign
 
def train():
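    # Build the graph under /cpu:0 by default; assign_to_device below moves
    # each tower's compute onto its own GPU while keeping variables on the CPU.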
    with tf.device("/cpu:0"):
        global_step=tf.train.get_or_create_global_step()
        tower_grads = []
        X = tf.placeholder(tf.float32, [None, num_input])
        Y = tf.placeholder(tf.float32, [None, num_classes])
        opt = tf.train.AdamOptimizer(learning_rate)
        with tf.variable_scope(tf.get_variable_scope()):
            for i in range(num_gpus):
                with tf.device(assign_to_device('/gpu:{}'.format(i), ps_device='/cpu:0')):
                    _x = X[i * batch_size:(i + 1) * batch_size]
                    _y = Y[i * batch_size:(i + 1) * batch_size]
                    logits = conv_net(_x, True)
                    tf.get_variable_scope().reuse_variables()
                    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=_y, logits=logits))
                    grads = opt.compute_gradients(loss)
                    tower_grads.append(grads)
                    if i == 0:
                        logits_test = conv_net(_x, False)
                        correct_prediction = tf.equal(tf.argmax(logits_test, 1), tf.argmax(_y, 1))
                        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        grads = average_gradients(tower_grads)
        train_op = opt.apply_gradients(grads)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for step in range(1, num_steps + 1):
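                # fetch batch_size * num_gpus examples at once; inside the
                # graph each tower slices out its own batch_size chunk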
                batch_x, batch_y = mnist.train.next_batch(batch_size * num_gpus)
                ts = time.time()
                sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
                te = time.time() - ts
                if step % display_step == 0 or step == 1:
                    loss_value, acc = sess.run([loss, accuracy], feed_dict={X: batch_x, Y: batch_y})
                    print("Step:" + str(step) + ":" + str(loss_value) + " " + str(acc)+", %i Examples/sec" % int(len(batch_x)/te))
            print("Done")
            print("Testing Accuracy:",
                  np.mean([sess.run(accuracy, feed_dict={X: mnist.test.images[i:i + batch_size],
                                                         Y: mnist.test.labels[i:i + batch_size]}) for i in
                           range(0, len(mnist.test.images), batch_size)]))
def train_single():
    X = tf.placeholder(tf.float32, [None, num_input])
    Y = tf.placeholder(tf.float32, [None, num_classes])
    logits=conv_net(X,True)
    loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y,logits=logits))
    opt=tf.train.AdamOptimizer(learning_rate)
    train_op=opt.minimize(loss)
    logits_test=conv_net(X,False)
    correct_prediction = tf.equal(tf.argmax(logits_test, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(1,num_steps+1):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            sess.run(train_op,feed_dict={X:batch_x,Y:batch_y})
            if step%display_step==0 or step==1:
                loss_value,acc=sess.run([loss,accuracy],feed_dict={X:batch_x,Y:batch_y})
                print("Step:" + str(step) + ":" + str(loss_value) + " " + str(acc))
        print("Done")
        print("Testing Accuracy:",np.mean([sess.run(accuracy, feed_dict={X: mnist.test.images[i:i + batch_size],
              Y: mnist.test.labels[i:i + batch_size]}) for i in
              range(0, len(mnist.test.images), batch_size)]))
 
if __name__ == "__main__":
    #train_single()
    train()


If you have several cards, you can also write a small script to control which ones get used:

export CUDA_VISIBLE_DEVICES=1,2
python train.py
 
