How to Do Fine-Tuning in TensorFlow


This post draws on the blog referenced below; many thanks to its author.

Reference blog


The code used for testing is on my GitHub: https://github.com/Alienge/learnFineTuning


Since the code mainly involves basic TensorFlow usage, the comments are sparse; I will fill them in when I have time.

1. Main Content

This post explains two ways of doing fine-tuning in TensorFlow and tests them on the handwritten-digit (MNIST) dataset that ships with TensorFlow.

The two fine-tuning approaches are distinguished by the type of file used to save the model: ckpt files and pb files. A ckpt file is written with the tf.train.Saver class and stores everything about the trained network: the complete graph with all of its nodes plus all of the weight values. A pb file is written with convert_variables_to_constants(sess, sess.graph_def, ['op_name']) and stores the trained weights together with the graph structure: it keeps only the sub-graph that the operation named 'op_name' depends on, and the weights are stored as constants, so there is no need to call tf.stop_gradient when reusing them. Compared with a ckpt file, a pb file produced by convert_variables_to_constants contains far less redundant information, is smaller, and is easier to port.
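As a minimal sketch of the two save calls (sess and epoch come from a training loop like the ones below, and 'op_name' is a placeholder for the output node you want to keep):

# ckpt: tf.train.Saver stores the whole graph plus all variable values
saver = tf.train.Saver()
saver.save(sess, './my_test_model', global_step=epoch + 1)

# pb: variables are frozen into constants; only the sub-graph feeding 'op_name' is kept
from tensorflow.python.framework.graph_util import convert_variables_to_constants
graph = convert_variables_to_constants(sess, sess.graph_def, ['op_name'])
tf.train.write_graph(graph, '.', 'graph.pb', as_text=False)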


2. Network Structure for Testing

      

A simple 3-layer NN is trained first; then the weights of the first 2 layers are frozen, one more fully connected layer plus a softmax layer are added, and the model is fine-tuned. A rough sketch of the network is shown below.

The figure was drawn with the Paint tool on Windows, so please excuse the drawing quality; Visio would be a better choice.

Rectangles of the same color in the figure mark the layers whose weights need to be frozen; the lower network in the figure shows the layers added for fine-tuning.
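In case the figure does not come through, here is the same structure in text form, taken from the code below:

Pre-training network:  input (784) -> layer1 (784x256, ReLU) -> layer2 (256x128, ReLU) -> softmax_linear (128x10)
Fine-tuning network:   frozen layer1 + layer2 (output 128) -> layer3 (128x64, ReLU) -> softmax_linear (64x10)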


3. Experiment Code

Before reading the code, here is a quick guide to the functions it contains:

def _bias_variable    creates a bias variable
def _weight_variable  creates a weight variable
def inference         builds the network
def loss              computes the network's loss function
def train             trains the network

(One thing to note before restoring a network: naming the operations in the graph carefully is important, because the nodes are restored by name.)
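As a quick illustration (a minimal sketch; the names below are the ones used in the training code further down), you can list a graph's operation names and fetch tensors by name like this:

g = tf.get_default_graph()
print([op.name for op in g.get_operations()])    # e.g. 'input/x', 'layer1/weights', 'layer2/layer2', ...
x = g.get_tensor_by_name('input/x:0')            # ':0' picks the first output tensor of that op
layer2 = g.get_tensor_by_name('layer2/layer2:0')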

3.1 Network structure saved as a ckpt file

First, the network saved with tf.train.Saver:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
from datetime import datetime
import os
mnist = input_data.read_data_sets('MNIST_data',one_hot=True)

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('batch_size',100,'''batch_size''')
tf.app.flags.DEFINE_integer('traing_epoches',15,'''epoch''')
tf.app.flags.DEFINE_string('check_point_dir','./','check_ponint_dir')
def _bias_variable(name,shape,initializer):
    var = tf.get_variable(name, shape, initializer=initializer, dtype=tf.float32)
    return var
def _weight_variable(name,shape,std):
    return _bias_variable(name, shape, initializer=tf.truncated_normal_initializer(stddev=std,dtype=tf.float32),
                          )
def inference(x):
    with tf.variable_scope('layer1') as scope:
        weights = _weight_variable('weights',[784,256],0.04)
        bias = _bias_variable('bias',[256],tf.constant_initializer(0.1))
        layer1 = tf.nn.relu(tf.matmul(x,weights)+bias,name=scope.name)
    with tf.variable_scope('layer2') as scope:
        weights = _weight_variable('weights',[256,128],std=0.02)
        bias = _bias_variable('bias',[128],tf.constant_initializer(0.2))
        layer2 = tf.nn.relu(tf.matmul(layer1,weights)+bias,name=scope.name)
    with tf.variable_scope('softmax_linear') as scope:
        weights = _weight_variable('weights',[128,10],std=1/192.0)
        bias = _bias_variable('bias',[10],tf.constant_initializer(0.0))
        softmax_linear = tf.add(tf.matmul(layer2,weights),bias,name=scope.name)
    return softmax_linear

def loss(logits,labels):
    print(labels.get_shape().as_list())
    print(logits.get_shape().as_list())
    labels = tf.cast(labels,tf.int64)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels,1),logits=logits,name = 'cross_entropy')
    cross_entropy_mean  = tf.reduce_mean(cross_entropy,name = 'cross_entropy')
    return cross_entropy_mean

def train():
    with tf.name_scope("input"):
        x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
        y = tf.placeholder(tf.float32, shape=[None, 10], name='y')
    softmax_linear = inference(x)
    cost = loss(softmax_linear,y)
    opt = tf.train.AdamOptimizer(0.001).minimize(cost)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(softmax_linear, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(FLAGS.traing_epoches):
            avg_cost = 0.0
            total_batch = int(mnist.train.num_examples/FLAGS.batch_size)
            for _ in range(total_batch):
                batch_xs,batch_ys = mnist.train.next_batch(FLAGS.batch_size)
                sess.run(opt,feed_dict={x:batch_xs,y:batch_ys})
                cost_ = sess.run(cost,feed_dict={x:batch_xs,y:batch_ys})
            print(("%s epoch: %d,cost: %.6f")%(datetime.now(),epoch+1,cost_))
            if (epoch+1) % 5 == 0:
                check_point_file = os.path.join(FLAGS.check_point_dir,'my_test_model')
                saver.save(sess,check_point_file,global_step=epoch+1)
        mean_accuary = sess.run(accuracy,{x:mnist.test.images,y:mnist.test.labels})
        print("accuracy %.3f"%mean_accuary)
    print()

def main(_):
   train()


if __name__ == '__main__':
  tf.app.run()


The code related to saving the network is listed separately below:

saver = tf.train.Saver()

if (epoch+1) % 5 == 0:
    check_point_file = os.path.join(FLAGS.check_point_dir,'my_test_model')
    saver.save(sess,check_point_file,global_step=epoch+1)

These two pieces of code save the network structure and parameters; they produce the four checkpoint files. The one to focus on is the .meta file, because it stores the structure of the graph that was built. When restoring the network, the .meta file and the checkpoint data are what matter; the code below shows how the network is restored.
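Before that, as a point of reference: with the settings above (15 epochs, saving every 5 epochs into check_point_dir), the files on disk look roughly like this (the data-shard suffix may differ, and the corresponding files for steps 5 and 10 are also kept):

checkpoint                              # text file listing the saved checkpoints
my_test_model-15.meta                   # the serialized graph structure
my_test_model-15.index                  # index into the variable data
my_test_model-15.data-00000-of-00001    # the variable values themselves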

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
from datetime import datetime
import os

mnist = input_data.read_data_sets('MNIST_data',one_hot=True)
import tensorflow as tf
def _bias_variable(name,shape,initializer):
    var = tf.get_variable(name, shape, initializer=initializer, dtype=tf.float32)
    return var
def _weight_variable(name,shape,std):
    return _bias_variable(name, shape, initializer=tf.truncated_normal_initializer(stddev=std,dtype=tf.float32),
                          )
def inference(input):
    with tf.variable_scope('layer3') as scope:
        weights = _weight_variable('weights',[128,64],std=0.001)
        bias = _bias_variable('bias',[64],tf.constant_initializer(0.0))
        layer3 = tf.nn.relu(tf.matmul(input, weights) + bias, name=scope.name)
    with tf.variable_scope('softmax_linear') as scope:
        weights = _weight_variable('weights', [64, 10], std=1 / 192.0)
        bias = _bias_variable('bias', [10], tf.constant_initializer(0.0))
        softmax_linear = tf.add(tf.matmul(layer3, weights), bias, name=scope.name)
    return softmax_linear

def loss(logits,labels):
    labels = tf.cast(labels,tf.int64)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels,1),logits=logits,name = 'cross_entropy')
    cross_entropy_mean  = tf.reduce_mean(cross_entropy,name = 'cross_entropy')
    return cross_entropy_mean


batch_size = 100
training_epoch = 20
with tf.Graph().as_default() as g:
    saver = tf.train.import_meta_graph('./my_test_model-15.meta')
    x_place = g.get_tensor_by_name('input/x:0')
    y_place = g.get_tensor_by_name('input/y:0')
    weight_test = g.get_tensor_by_name('layer1/weights:0')
    layer2 = g.get_tensor_by_name('layer2/layer2:0')
    layer2 = tf.stop_gradient(layer2,name='stop_gradient')
    soft_result = inference(layer2)
    cost = loss(soft_result,y_place)
    opt = tf.train.AdamOptimizer(0.001).minimize(cost)
    correct_prediction = tf.equal(tf.argmax(y_place, 1), tf.argmax(soft_result, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
with tf.Session(graph=g) as sess:
    value=[]
    # initialize all variables first, then restore the pre-trained weights on top of them;
    # doing it the other way round would overwrite the restored values with random ones
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    for epoch in range(training_epoch):
        avg_cost = 0.0
        total_batch = int(mnist.train.num_examples / batch_size)
        for _ in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(opt, feed_dict={x_place: batch_xs, y_place: batch_ys})
            cost_ = sess.run(cost, feed_dict={x_place: batch_xs, y_place: batch_ys})
            weight_test_value = sess.run(weight_test,feed_dict={x_place: batch_xs, y_place: batch_ys})
        print(("%s epoch: %d,cost: %.6f") % (datetime.now(), epoch + 1, cost_))
        if (epoch+1) % 5 == 0:
            value.append(weight_test_value)
    for i in range(len(value)-1):
        # element-wise comparison; .all() alone would only test whether all entries are non-zero
        if (value[i] == value[i+1]).all():
            print("weight is equal")
    mean_accuary = sess.run(accuracy, {x_place: mnist.test.images, y_place: mnist.test.labels})
    print("accuracy %.3f" % mean_accuary)



The graph structure is restored by the tf.train.import_meta_graph call, and the weight values are restored by saver.restore(sess, tf.train.latest_checkpoint('./')) inside the session.

The weight_test / value code is there to check whether the weights of the frozen layers really stay fixed (unchanged) after many iterations.


The restore-related lines, extracted:

saver = tf.train.import_meta_graph('./my_test_model-15.meta')
x_place = g.get_tensor_by_name('input/x:0')
y_place = g.get_tensor_by_name('input/y:0')
weight_test = g.get_tensor_by_name('layer1/weights:0')
layer2 = g.get_tensor_by_name('layer2/layer2:0')
layer2 = tf.stop_gradient(layer2,name='stop_gradient')

tf.train.import_meta_graph restores the network, and g.get_tensor_by_name fetches each graph node by the name it was given when the original network was built. Because a ckpt file does not store the weights as constants, once the restored layer2 output is wired into our model we also need tf.stop_gradient so that back-propagation does not push gradients back into the restored layers. The layers added after that are built in the normal way.


(One small note: when a node is restored, all of the nodes it depends on are restored along with it.)


3.2 Network structure saved as a pb file


This section shows how to save a network as a pb file with convert_variables_to_constants and how to restore it for fine-tuning.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
from datetime import datetime
from tensorflow.python.framework.graph_util import convert_variables_to_constants
import os
mnist = input_data.read_data_sets('MNIST_data',one_hot=True)

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('batch_size',100,'''batch_size''')
tf.app.flags.DEFINE_integer('traing_epoches',15,'''epoch''')
tf.app.flags.DEFINE_string('check_point_dir','./','check_ponint_dir')
def _bias_variable(name,shape,initializer):
    var = tf.get_variable(name, shape, initializer=initializer, dtype=tf.float32)
    return var
def _weight_variable(name,shape,std):
    return _bias_variable(name, shape, initializer=tf.truncated_normal_initializer(stddev=std,dtype=tf.float32),
                          )
def inference(x):
    with tf.variable_scope('layer1') as scope:
        weights = _weight_variable('weights',[784,256],0.04)
        bias = _bias_variable('bias',[256],tf.constant_initializer(0.1))
        layer1 = tf.nn.relu(tf.matmul(x,weights)+bias,name=scope.name)
    with tf.variable_scope('layer2') as scope:
        weights = _weight_variable('weights',[256,128],std=0.02)
        bias = _bias_variable('bias',[128],tf.constant_initializer(0.2))
        layer2 = tf.nn.relu(tf.matmul(layer1,weights)+bias,name=scope.name)
    with tf.variable_scope('softmax_linear') as scope:
        weights = _weight_variable('weights',[128,10],std=1/192.0)
        bias = _bias_variable('bias',[10],tf.constant_initializer(0.0))
        softmax_linear = tf.add(tf.matmul(layer2,weights),bias,name=scope.name)
    return softmax_linear

def loss(logits,labels):
    print(labels.get_shape().as_list())
    print(logits.get_shape().as_list())
    labels = tf.cast(labels,tf.int64)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels,1),logits=logits,name = 'cross_entropy')
    cross_entropy_mean  = tf.reduce_mean(cross_entropy,name = 'cross_entropy')
    return cross_entropy_mean

def train():
    with tf.name_scope("input"):
        x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
        y = tf.placeholder(tf.float32, shape=[None, 10], name='y')
    softmax_linear = inference(x)
    cost = loss(softmax_linear,y)
    opt = tf.train.AdamOptimizer(0.001).minimize(cost)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(softmax_linear, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
    #saver = tf.train.Saver()
    with tf.Session() as sess:
        print(y)
        sess.run(tf.global_variables_initializer())
        for epoch in range(FLAGS.traing_epoches):
            avg_cost = 0.0
            total_batch = int(mnist.train.num_examples/FLAGS.batch_size)
            for _ in range(total_batch):
                batch_xs,batch_ys = mnist.train.next_batch(FLAGS.batch_size)
                sess.run(opt,feed_dict={x:batch_xs,y:batch_ys})
                cost_ = sess.run(cost,feed_dict={x:batch_xs,y:batch_ys})
            print(("%s epoch: %d,cost: %.6f")%(datetime.now(),epoch+1,cost_))
            '''
            if (epoch+1) % 5 == 0:
                check_point_file = os.path.join(FLAGS.check_point_dir,'my_test_model')
                saver.save(sess,check_point_file,global_step=epoch+1)
            '''
        graph = convert_variables_to_constants(sess,sess.graph_def,['layer2/layer2'])
        tf.train.write_graph(graph,'.','graph.pb',as_text=False)
        mean_accuary = sess.run(accuracy,{x:mnist.test.images,y:mnist.test.labels})
        print("accuracy %.3f"%mean_accuary)
    print()

def main(_):
   train()


if __name__ == '__main__':
  tf.app.run()

Saving the network and its weights is simple:

graph = convert_variables_to_constants(sess,sess.graph_def,['layer2/layer2'])
tf.train.write_graph(graph,'.','graph.pb',as_text=False)

That is all it takes. Note the difference from the ckpt file: because the output node is name = 'layer2/layer2', the pb file stores that node together with every node it depends on, and the weights are stored as constants.
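A quick way to confirm what ended up inside graph.pb is to parse it and print the node names (a small sketch; run it in the directory that contains graph.pb):

graph_def = tf.GraphDef()
with open('./graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([node.name for node in graph_def.node])
# expected: only 'input/x' and the 'layer1/...' / 'layer2/...' nodes;
# 'input/y' and the 'softmax_linear/...' nodes are not saved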


When restoring the network and fine-tuning:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
from datetime import datetime
import os

mnist = input_data.read_data_sets('MNIST_data',one_hot=True)
import tensorflow as tf
def _bias_variable(name,shape,initializer):
    var = tf.get_variable(name, shape, initializer=initializer, dtype=tf.float32)
    return var
def _weight_variable(name,shape,std):
    return _bias_variable(name, shape, initializer=tf.truncated_normal_initializer(stddev=std,dtype=tf.float32),
                          )
def inference(input):
    with tf.variable_scope('layer3') as scope:
        weights = _weight_variable('weights',[128,64],std=0.001)
        bias = _bias_variable('bias',[64],tf.constant_initializer(0.0))
        layer3 = tf.nn.relu(tf.matmul(input, weights) + bias, name=scope.name)
    with tf.variable_scope('softmax_linear') as scope:
        weights = _weight_variable('weights', [64, 10], std=1 / 192.0)
        bias = _bias_variable('bias', [10], tf.constant_initializer(0.0))
        softmax_linear = tf.add(tf.matmul(layer3, weights), bias, name=scope.name)
    return softmax_linear

def loss(logits,labels):
    labels = tf.cast(labels,tf.int64)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels,1),logits=logits,name = 'cross_entropy')
    cross_entropy_mean  = tf.reduce_mean(cross_entropy,name = 'cross_entropy')
    return cross_entropy_mean


batch_size = 100
training_epoch = 20
with tf.Graph().as_default() as g:
    x_place = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    y_place = tf.placeholder(tf.float32, shape=[None, 10], name='y')
    with open('./graph.pb','rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        # import the frozen graph, mapping the saved 'input/x:0' placeholder to the new x_place,
        # and fetch the tensors we need by name (a second, unmapped import is not needed)
        graph_op = tf.import_graph_def(graph_def,name='',input_map={'input/x:0':x_place},
                                       return_elements=['layer2/layer2:0','layer1/weights:0'])

   # x_place = g.get_tensor_by_name('input/x:0')
    #y_place = g.get_tensor_by_name('input/y:0')
    #layer2 = g.get_tensor_by_name('layer2/layer2:0')
    #weight_test = g.get_tensor_by_name('layer1/weights:0')
    #layer2 = g.get_tensor_by_name('layer2/layer2:0')
    #layer2 = tf.stop_gradient(layer2,name='stop_gradient')
    layer2 = graph_op[0]
    weight_test = graph_op[1]
    soft_result = inference(layer2)
    cost = loss(soft_result,y_place)
    opt = tf.train.AdamOptimizer(0.001).minimize(cost)
    correct_prediction = tf.equal(tf.argmax(y_place, 1), tf.argmax(soft_result, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
with tf.Session(graph=g) as sess:
    value=[]
    #saver.restore(sess, tf.train.latest_checkpoint('./'))
    sess.run(tf.global_variables_initializer())
    #weight_test = sess.g.get_tensor_by_name('layer1/weights:0')
    for epoch in range(training_epoch):
        avg_cost = 0.0
        total_batch = int(mnist.train.num_examples / batch_size)
        for _ in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(opt, feed_dict={x_place: batch_xs, y_place: batch_ys})
            cost_ = sess.run(cost, feed_dict={x_place: batch_xs, y_place: batch_ys})
            weight_test_value = sess.run(weight_test,feed_dict={x_place: batch_xs, y_place: batch_ys})
        print(("%s epoch: %d,cost: %.6f") % (datetime.now(), epoch + 1, cost_))

        if (epoch+1) % 5 == 0:
            value.append(weight_test_value)
    for i in range(len(value)-1):
        # element-wise comparison; .all() alone would only test whether all entries are non-zero
        if (value[i] == value[i+1]).all():
            print("weight is equal")
    mean_accuary = sess.run(accuracy, {x_place: mnist.test.images, y_place: mnist.test.labels})
    print("accuracy %.3f" % mean_accuary)

The following lines are used when restoring the network (the weight_test / value code, as before, checks whether the weights of the frozen layers stay fixed across iterations):

with open('./graph.pb','rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    graph_op = tf.import_graph_def(graph_def,name='',input_map={'input/x:0':x_place},
                                   return_elements=['layer2/layer2:0','layer1/weights:0'])

Note that because everything in the pb file is stored as constants, tf.stop_gradient is not needed. Also, the only saved placeholder is the input, i.e. 'input/x:0'; we simply map our newly defined placeholder onto it via input_map and get back the nodes we need. 'input/y:0' was not saved, and the layers that follow are rebuilt from scratch:

graph_op = tf.import_graph_def(graph_def,name='',input_map={'input/x:0':x_place},
                               return_elements=['layer2/layer2:0','layer1/weights:0'])

4. Summary

tf.train.Saver.save() saves all the information needed to run the TensorFlow program (the graph structure, the variable values, and the checkpoint list). However, all of this is not always needed. For testing or offline prediction, all we really need is how to get from the network's input layer to the output layer via forward propagation; auxiliary nodes such as variable initialization and model saving are unnecessary. Moreover, keeping the variable values and the graph structure in separate files is sometimes inconvenient, especially when a trained model has to be deployed from one platform to another, e.g. from PC to Android. There are two ways to handle this:

① If the model has already been saved as separate files, you can use the freeze_graph.py script under ./tensorflow/tensorflow/python/tools in the TensorFlow source tree; it merges the previously saved ckpt file and the .pb (.pbtxt) or .meta file into a single file.
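A rough sketch of such a call (the flag values are placeholders matching the files produced earlier in this post, and it assumes the unfrozen GraphDef was first written out as text with tf.train.write_graph(sess.graph_def, '.', 'graph.pbtxt'); check python freeze_graph.py --help for the exact options in your TensorFlow version):

python freeze_graph.py \
    --input_graph=./graph.pbtxt \
    --input_checkpoint=./my_test_model-15 \
    --output_node_names=softmax_linear/softmax_linear \
    --output_graph=./frozen_model.pb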

② If you want the graph structure and variable values to be saved as a single .pb file in the first place, use tf.graph_util.convert_variables_to_constants(), which replaces every variable in the graph with a constant holding the same value. Given a training graph that contains Variable ops, they can all be converted into Const ops holding the same values, so the whole network is described by a single GraphDef file, and the many ops related to loading and saving variables can be dropped.
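For offline prediction from such a single .pb file, a minimal sketch (assuming a frozen file that contains the softmax output node, e.g. the hypothetical frozen_model.pb with output node 'softmax_linear/softmax_linear' from the command above, and mnist as loaded at the top of the scripts in this post):

with tf.Graph().as_default() as g:
    graph_def = tf.GraphDef()
    with open('./frozen_model.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    # with name='' the imported nodes keep their original names
    logits, = tf.import_graph_def(graph_def, name='',
                                  return_elements=['softmax_linear/softmax_linear:0'])
    x = g.get_tensor_by_name('input/x:0')
    with tf.Session(graph=g) as sess:
        pred = sess.run(logits, feed_dict={x: mnist.test.images[:1]})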


