A Beginner's Tutorial on TensorBoard (with a TFlearn Example)

Contents

1. Introduction

2. Starting TensorBoard

3. Code Walkthrough

4. A Supplementary Example


1. Introduction

There are plenty of introductions to TensorBoard online, but as a beginner I found it hard to get anything working and hit obstacle after obstacle. This article explains how to use TensorBoard through a concrete example; for the written background (what TensorBoard is, what it is for), see the many existing blog posts. The code below is adapted from http://www.jianshu.com/p/61081bba175f and has been verified to run under Python 3.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/MNIST_data", one_hot=True)

# Input placeholder, a 2-D tensor of floating-point numbers.
# Here None means that the dimension can be of any length.
X = tf.placeholder(tf.float32, [None, 784], name = 'X-input')

# New placeholder to input the correct answers.
Y = tf.placeholder(tf.float32, [None, 10], name = 'Y-input')

# Initialize both W and b as tensors full of zeros.
# Since we are going to learn W and b, their initial values don't matter very much.
W = tf.Variable(tf.zeros([784, 10]), name = 'Weight')
B = tf.Variable(tf.zeros([10]), name = 'Bias')

# Tensorboard histogram summary.
tf.summary.histogram('WeightSM', W)
tf.summary.histogram('BiasSM', B)

with tf.name_scope('Layer'):
    y = tf.nn.softmax(tf.matmul(X, W) + B)

with tf.name_scope('Cost'):
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y), reduction_indices=[1]))
    # Tensorboard scalar summary.
    tf.summary.scalar('Cost', cross_entropy)

with tf.name_scope('Train'):
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.name_scope('Accuracy'):
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1)), tf.float32))
    # Tensorboard scalar summary.
    tf.summary.scalar('Accuracy', accuracy)

with tf.Session() as sess:
    # Create a writer for the log directory, then merge all summaries.
    writer = tf.summary.FileWriter('./logs', sess.graph)
    merged = tf.summary.merge_all()
    tf.global_variables_initializer().run()  # initialize_all_variables() is deprecated
    # Train for 1000 steps, with a batch of 100 examples per step.

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, summary = sess.run([train_step, merged], feed_dict={X: batch_xs, Y: batch_ys})
        # Write summary into files.
        writer.add_summary(summary, i)

    # Close summary writer.
    writer.close()

    print('Accuracy', accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))

 


2. Starting TensorBoard

TensorBoard requires the Python program to first produce run data and save it to disk; only then can TensorBoard read and analyze it. This line of the code above specifies where that data is written:

writer = tf.summary.FileWriter('./logs', sess.graph)

After executing the code above (under Python 3), an events file is generated in the ./logs directory.
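To confirm that the file was written, you can list it from Python (a minimal check, assuming the './logs' path used above):

import glob

# TensorFlow names event files 'events.out.tfevents.<timestamp>.<hostname>'.
print(glob.glob('./logs/events.out.tfevents.*'))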

 

Start TensorBoard with the following steps:

(1) In a terminal, change into the directory of the Python project.

(2) Run the command tensorboard --logdir=logs, where logs is the './logs' path set above. TensorBoard then prints an address; copy it into a browser to open TensorBoard. The address is simply http://127.0.0.1:6006/, which can be bookmarked for next time.
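TensorBoard can also be launched from inside Python. A sketch, assuming a reasonably recent tensorboard package (which exposes tensorboard.program):

from tensorboard import program

# Configure and launch TensorBoard programmatically; argv mirrors the CLI flags.
tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', 'logs'])
url = tb.launch()  # returns an address such as 'http://127.0.0.1:6006/'
print(url)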

 


3. Code Walkthrough

The GRAPHS tab of the TensorBoard page shows the structure of the computation graph.

 

The declarations near the top of the script create X, Y, Weight, and Bias; the name argument is exactly the label shown in the graph:

 

# Input placeholder, a 2-D tensor of floating-point numbers.
# Here None means that the dimension can be of any length.
X = tf.placeholder(tf.float32, [None, 784], name = 'X-input')

# New placeholder to input the correct answers.
Y = tf.placeholder(tf.float32, [None, 10], name = 'Y-input')

# Initialize both W and b as tensors full of zeros.
# Since we are going to learn W and b, their initial values don't matter very much.
W = tf.Variable(tf.zeros([784, 10]), name = 'Weight')
B = tf.Variable(tf.zeros([10]), name = 'Bias')

 

The two histogram calls specify the data to be tracked:

 

# Tensorboard histogram summary.
tf.summary.histogram('WeightSM', W)
tf.summary.histogram('BiasSM', B)

These are histogram summaries; open the HISTOGRAMS tab in TensorBoard to see the two variables.

 

The next block wraps the matrix multiplication matmul, the addition +, and the softmax function into a single unit named Layer:

with tf.name_scope('Layer'):
    y = tf.nn.softmax(tf.matmul(X, W) + B)

Comparing the graph with and without with tf.name_scope('Layer') shows its effect: the wrapping makes the graph noticeably cleaner and simpler. Double-clicking the collapsed Layer node expands it to reveal the same internal structure as before.
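The scope is also reflected in the node names themselves. A minimal standalone sketch (the constants are illustrative, not part of the tutorial's model):

import tensorflow as tf

a = tf.constant(1.0, name='a')        # node name: 'a'
with tf.name_scope('Layer'):
    b = tf.constant(2.0, name='b')    # node name: 'Layer/b'

print(a.name)  # prints 'a:0'
print(b.name)  # prints 'Layer/b:0'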

 

The Cost, Train, and Accuracy blocks use the same name-scope wrapping, so the details are not repeated here:

with tf.name_scope('Cost'):
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y), reduction_indices=[1]))
    # Tensorboard scalar summary.
    tf.summary.scalar('Cost', cross_entropy)

with tf.name_scope('Train'):
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.name_scope('Accuracy'):
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1)), tf.float32))
    # Tensorboard scalar summary.
    tf.summary.scalar('Accuracy', accuracy)
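As an aside, the hand-written cross-entropy above can produce NaN when y contains exact zeros (log(0) is undefined). A more robust variant computes the loss from the raw logits with TensorFlow's built-in op; a sketch, where the Logits scope is a hypothetical addition:

with tf.name_scope('Logits'):
    logits = tf.matmul(X, W) + B  # pre-softmax scores

with tf.name_scope('Cost'):
    # Applies softmax internally in a numerically stable way.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=logits))
    tf.summary.scalar('Cost', cross_entropy)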

Now look at the core of the program:

with tf.Session() as sess:
    # Create a writer for the log directory, then merge all summaries.
    writer = tf.summary.FileWriter('./logs', sess.graph)
    merged = tf.summary.merge_all()
    tf.global_variables_initializer().run()  # initialize_all_variables() is deprecated
    # Train for 1000 steps, with a batch of 100 examples per step.

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, summary = sess.run([train_step, merged], feed_dict={X: batch_xs, Y: batch_ys})
        # Write summary into files.
        writer.add_summary(summary, i)

    # Close summary writer.
    writer.close()

    print('Accuracy', accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))

The computation graph must be launched through a session, which is why the block begins by creating a new session.

writer = tf.summary.FileWriter('./logs', sess.graph) specifies the file that the summaries are saved to; writer can be thought of as a file handle.

merged = tf.summary.merge_all() merges all the summaries into a single op, so running just the merged node yields every summary created earlier.

tf.global_variables_initializer().run() initializes all variables, such as the Weight and Bias defined earlier.

Then comes the iterative training loop: on each iteration, running the merged op produces that step's summary, and the file handle writer writes it out to disk.
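If writing a summary on every step slows training or bloats the log, a common variant (not part of the original post) records one only every few steps:

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    if i % 10 == 0:
        # Record summaries only on every 10th step.
        _, summary = sess.run([train_step, merged],
                              feed_dict={X: batch_xs, Y: batch_ys})
        writer.add_summary(summary, i)
    else:
        sess.run(train_step, feed_dict={X: batch_xs, Y: batch_ys})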

 


4. A Supplementary Example

Here is a supplementary TFlearn example. In the TFlearn library, events files are generated by adding tensorboard_dir='<directory>' to the arguments of tflearn.DNN(), for example:

 

model = tflearn.DNN(net, checkpoint_path='model_resnet_mnist', max_checkpoints=10, tensorboard_verbose=0, tensorboard_dir='logs')
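Per the TFlearn documentation, tensorboard_verbose controls how much gets logged: 0 records only loss and accuracy (best speed), while higher levels add gradients, weights, and activations.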

A complete TFlearn example that applies a ResNet to the MNIST dataset follows:

 

import tflearn
import tflearn.data_utils as du

# Load the data and preprocess it
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)

# Build the residual network model
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)

# Build residual blocks with the bottleneck structure
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 64, 256)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')

# Specify the optimizer, loss function, learning rate, etc.
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy', learning_rate=0.1)

# Train; tensorboard_dir='logs' directs the events files there
model = tflearn.DNN(net, checkpoint_path='model_resnet_mnist', max_checkpoints=10, tensorboard_verbose=0, tensorboard_dir='logs')
model.fit(X, Y, n_epoch=1, validation_set=(testX, testY), show_metric=True, batch_size=256, run_id='resnet_mnist')
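After model.fit completes, start TensorBoard exactly as in section 2 (tensorboard --logdir=logs); the curves for this run should appear under the run_id set above.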

 

 

 

 

 
