TensorFlow 2.0 TensorBoard: Visualizing the Training Process

Workflow for Using TensorBoard

  • 1. Create a folder to store the TensorBoard log files;
  • 2. Instantiate a summary writer;
  • 3. Write the values you want to track (usually scalars) to the writer;
  • 4. Open the TensorBoard web interface.

Detailed Steps

Step 1

Create a folder (e.g. ./tensorboard) under the code directory.

Step 2

Instantiate the summary writer:

summary_writer = tf.summary.create_file_writer('./tensorboard')     # the argument is the folder where the log files are stored

Step 3

Write the values you want to track (usually scalars) to the writer:

summary_writer = tf.summary.create_file_writer('./tensorboard')
# start training the model
for batch_index in range(num_batches):
    # ... (training code; the loss of the current batch is stored in the variable loss)
    with summary_writer.as_default():                               # the writer to use
        tf.summary.scalar("loss", loss, step=batch_index)  # other custom values can be added as well

Each call to tf.summary.scalar() appends one record to the log file.
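
Besides scalars, the tf.summary module also provides writers for other data types. Here is a minimal sketch (the tensors weights and images and the counter batch_index are assumed to come from your own training code):

with summary_writer.as_default():
    tf.summary.histogram("layer1/weights", weights, step=batch_index)           # distribution of a weight tensor
    tf.summary.image("input_samples", images, step=batch_index, max_outputs=3)  # a batch of images, shape [k, h, w, c]
    tf.summary.text("note", "finished one batch", step=batch_index)             # free-form text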

Step 4

To visualize the training process, open a terminal in the code directory and run:

tensorboard --logdir=E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard --host=127.0.0.1

where ‘E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard’ is the path of the folder that holds the TensorBoard log files.

Then open the URL printed by the command-line program (usually http://127.0.0.1:6006/) in a browser to access the TensorBoard interface.
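
If you work in a Jupyter notebook, TensorBoard can also be embedded in the notebook itself instead of being launched from a terminal (these two magics ship with the tensorboard package):

%load_ext tensorboard
%tensorboard --logdir ./tensorboard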

Viewing Graph and Profile Information

tf.summary.trace_on(graph=True, profiler=True)  # enable tracing to record the graph structure and profiling information
# ... training code ...
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # export the trace to the log folder (log_dir)

Afterwards, select "Profile" in TensorBoard to inspect the time spent by each operation on a timeline. If the computation graph was built with tf.function, you can also click "Graphs" to view the graph structure.
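
As a self-contained sketch of this flow (the function square and the folder name here are illustrative, not from the original tutorial):

import tensorflow as tf

log_dir = './tensorboard'                                  # assumed log folder
summary_writer = tf.summary.create_file_writer(log_dir)

@tf.function
def square(x):
    return x * x                                           # any computation wrapped in tf.function

tf.summary.trace_on(graph=True, profiler=True)             # start recording
square(tf.constant(2.0))                                   # run once so the graph gets traced
with summary_writer.as_default():
    tf.summary.trace_export(name="square_trace", step=0, profiler_outdir=log_dir)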

Example

Here we use training on MNIST as an example:

1. Define the model and the training procedure

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

mnist = keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype(np.float32)
x_test = x_test[..., tf.newaxis].astype(np.float32)

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(x_test.shape[0])

class MyModel(keras.Model):
    # Set layers.
    def __init__(self):
        super(MyModel, self).__init__()
        # Convolution Layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)

        # Convolution Layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)

        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()

        # Fully connected layer.
        self.fc1 = layers.Dense(1024)
        # Apply Dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)

        # Output layer, class prediction.
        self.out = layers.Dense(10)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross entropy expect logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x

model = MyModel()

loss_object = keras.losses.SparseCategoricalCrossentropy(from_logits=True)  # the model outputs logits during training
optimizer = keras.optimizers.Adam()

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, is_training=True)   # enable dropout and keep the outputs as logits
        loss = loss_object(labels, predictions)         # already reduced to a scalar mean
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
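
For completeness, here is a minimal evaluation sketch (not part of the original tutorial; test_step is a name introduced here). Because the model applies softmax when is_training=False, the argmax of its output is the predicted class:

@tf.function
def test_step(images, labels):
    predictions = model(images)                                       # probabilities (softmax applied)
    predicted = tf.argmax(predictions, axis=1, output_type=tf.int64)  # predicted class per example
    correct = tf.cast(tf.equal(predicted, tf.cast(labels, tf.int64)), tf.float32)
    return tf.reduce_mean(correct)                                    # accuracy on this batch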

2. Create a folder to store the TensorBoard log files

log_dir = 'tensorboard'
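
tf.summary.create_file_writer creates this folder automatically if it does not exist; if you prefer to create it explicitly, a standard-library one-liner is enough:

import os
os.makedirs(log_dir, exist_ok=True)   # no error if the folder already exists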

3. Instantiate the summary writer (and enable tracing)

summary_writer = tf.summary.create_file_writer(log_dir)     # instantiate the writer
tf.summary.trace_on(graph=True, profiler=True)  # enable tracing (optional)

4. Write the values to the writer

EPOCHS = 5

global_step = 0
for epoch in range(EPOCHS):
    for images, labels in train_ds.take(10):
        loss = train_step(images, labels)
        with summary_writer.as_default():                           # the writer to use
            tf.summary.scalar("loss", loss, step=global_step)       # write the current loss; the step must increase for a proper curve
        global_step += 1

with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # export the trace to the log folder (optional)
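
After training finishes, launch TensorBoard from the code directory as in Step 4 above and open the printed URL (usually http://127.0.0.1:6006/) in a browser:

tensorboard --logdir=tensorboard --host=127.0.0.1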