TensorFlow 2.0 TensorBoard: Visualizing the Training Process

Workflow for Using TensorBoard

  • 1. Create a folder to store the TensorBoard log files;
  • 2. Instantiate a summary writer;
  • 3. Write the parameters (usually scalars) to that writer;
  • 4. Open the TensorBoard web interface.

Detailed Steps

Step 1

Create a folder (e.g. ./tensorboard) under your code directory to hold the log files.
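
If you prefer to create the folder from code, a minimal sketch (standard-library os.makedirs; the folder name ./tensorboard follows this article's example):

import os

os.makedirs('./tensorboard', exist_ok=True)   # no error if the folder already exists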

Step 2

Instantiate the summary writer:

summary_writer = tf.summary.create_file_writer('./tensorboard')     # the argument is the directory where the log files are stored
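
In practice it is often convenient to give each run its own subdirectory, so that TensorBoard can overlay the curves of different runs. A sketch of this pattern (the timestamped subdirectory name is purely illustrative, not part of this article's example):

import os
import tensorflow as tf
from datetime import datetime

# Assumption: one subdirectory per run, named after the start time.
run_dir = os.path.join('tensorboard', datetime.now().strftime('%Y%m%d-%H%M%S'))
summary_writer = tf.summary.create_file_writer(run_dir)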

Step 3

Write the parameters (usually scalars) to the writer:

summary_writer = tf.summary.create_file_writer('./tensorboard')
# Start model training
for batch_index in range(num_batches):
    # ... (training code; the loss of the current batch is stored in the variable `loss`)
    with summary_writer.as_default():                               # the writer to use
        tf.summary.scalar("loss", loss, step=batch_index)  # other custom variables can be recorded as well

Each call to tf.summary.scalar() makes the writer append one record to the log file.
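
The same writer can record any number of named scalars at each step. For example (a sketch, where accuracy is a hypothetical metric assumed to be computed elsewhere in the training loop):

with summary_writer.as_default():
    tf.summary.scalar("loss", loss, step=batch_index)
    tf.summary.scalar("accuracy", accuracy, step=batch_index)   # hypothetical metric variable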

Step 4

To visualize the training process, open a terminal in the code directory and run:

tensorboard --logdir=E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard --host=127.0.0.1

where E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard is the path of the folder holding the TensorBoard log files.

Then open the URL printed by the command-line program in a browser (usually http://127.0.0.1:6006/) to reach the TensorBoard web interface.

Viewing Graph and Profile Information

tf.summary.trace_on(graph=True, profiler=True)  # enable tracing; records the graph structure and profile information
# ... run the training ...
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # write the trace to file (log_dir is the log folder, e.g. './tensorboard')

Afterwards, select "Profile" in TensorBoard to see the time each operation takes on a timeline. If the computation graph was built with tf.function, you can also click "Graphs" to inspect the graph structure.
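
Putting these pieces together, a minimal self-contained sketch (the toy function f stands in for a real training step; the './tensorboard' path follows this article):

import tensorflow as tf

log_dir = './tensorboard'
summary_writer = tf.summary.create_file_writer(log_dir)

@tf.function
def f(x):
    return x * x + 1.0          # toy computation standing in for a training step

tf.summary.trace_on(graph=True, profiler=True)   # start recording the graph and profile
f(tf.constant(2.0))                              # run the traced function once
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)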

Example

Here we use training on MNIST as an example:

1. Define the model and the training procedure

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

mnist = keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype(np.float32)
x_test = x_test[..., tf.newaxis].astype(np.float32)

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(x_test.shape[0])

class MyModel(keras.Model):
    # Set layers.
    def __init__(self):
        super(MyModel, self).__init__()
        # Convolution Layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)

        # Convolution Layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)

        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()

        # Fully connected layer.
        self.fc1 = layers.Dense(1024)
        # Apply Dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)

        # Output layer, class prediction.
        self.out = layers.Dense(10)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross entropy expect logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x

model = MyModel()

loss_object = keras.losses.SparseCategoricalCrossentropy(from_logits=True)  # the model outputs logits during training
optimizer = keras.optimizers.Adam()

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, is_training=True)  # enable dropout and keep logits during training
        loss = loss_object(labels, predictions)
        loss = tf.reduce_mean(loss)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

2. Create a folder to store the TensorBoard log files

log_dir = 'tensorboard'

3. Instantiate the writer (and enable tracing)

summary_writer = tf.summary.create_file_writer(log_dir)     # instantiate the writer
tf.summary.trace_on(profiler=True)  # enable tracing (optional)

4. Write the parameters to the writer

EPOCHS = 5

for epoch in range(EPOCHS):
    for images, labels in train_ds.take(10):
        loss = train_step(images, labels)
        with summary_writer.as_default():                           # the writer to use
            tf.summary.scalar("loss", loss, step=epoch)       # write the current loss value (note: all batches in an epoch share the same step)

with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # write the trace to file (optional)
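
In the same spirit, test-set accuracy could be logged once per epoch with the same writer. A sketch reusing model, test_ds, and EPOCHS from above (the metric name "test_accuracy" is illustrative):

test_accuracy = keras.metrics.SparseCategoricalAccuracy()

for epoch in range(EPOCHS):
    # ... training loop as above ...
    test_accuracy.reset_states()
    for images, labels in test_ds:
        test_accuracy.update_state(labels, model(images))   # model outputs softmax probabilities at inference
    with summary_writer.as_default():
        tf.summary.scalar("test_accuracy", test_accuracy.result(), step=epoch)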