Study Notes | PyTorch Tutorial 20 (TensorBoard Usage, Part 1)

These study notes are summarized from the "Deep Eye" (深度之眼) course, as a reference for later review.
PyTorch version 1.2 is used.

  • SummaryWriter
  • add_scalar and add_histogram
  • Model metric monitoring

I. SummaryWriter

SummaryWriter
Function: provides a high-level interface for creating event files.
Main attributes:

  • log_dir: output folder for the event files
  • comment: suffix appended to the run folder name when log_dir is not specified
  • filename_suffix: suffix appended to the event file name

Test code:

import numpy as np
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter
from tools.common_tools import set_seed

set_seed(1)  # set the random seed

# ----------------------------------- 0 SummaryWriter -----------------------------------
# flag = 0
flag = 1
if flag:

    log_dir = "./train_log/test_log_dir"
    writer = SummaryWriter(log_dir=log_dir, comment='_scalars', filename_suffix="12345678")
    #writer = SummaryWriter(comment='_scalars', filename_suffix="12345678")

    for x in range(100):
        writer.add_scalar('y=pow_2_x', 2 ** x, x)

    writer.close()

A new event-file folder appears under the current directory, and the file suffix matches the filename_suffix setting.
Note that comment='_scalars' does not show up in the folder name: once log_dir is specified, comment has no effect. To use comment, create the writer without log_dir, e.g. writer = SummaryWriter(comment='_scalars', filename_suffix="12345678"); the comment then appears as a suffix of the auto-generated run folder.
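When log_dir is omitted, SummaryWriter builds a default run directory roughly of the form runs/<datetime>_<hostname><comment> (the exact format may vary across PyTorch versions). A minimal stdlib sketch of that naming convention, not the library's actual code:

```python
import os
import socket
from datetime import datetime

def default_log_dir(comment=""):
    # Mimic SummaryWriter's default naming: runs/<datetime>_<hostname><comment>.
    current_time = datetime.now().strftime("%b%d_%H-%M-%S")
    return os.path.join("runs", current_time + "_" + socket.gethostname() + comment)

print(default_log_dir(comment="_scalars"))  # e.g. runs/Jun01_12-00-00_myhost_scalars
```

Passing an explicit log_dir bypasses this scheme entirely, which is why comment is ignored in that case.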

II. add_scalar and add_histogram

1. add_scalar()
Function: records a single scalar value.

  • tag: label of the plot; its unique identifier
  • scalar_value: the scalar value to record
  • global_step: the x-axis value

2. add_scalars()
Records several related scalars in one plot.

  • main_tag: label of the plot
  • tag_scalar_dict: keys are the sub-tags of the variables, values are the variable values
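Conceptually, add_scalars logs one curve per key of tag_scalar_dict under the shared main_tag. A hypothetical pure-Python logger sketch of that grouping (illustration only, not the TensorBoard implementation):

```python
import math

class ScalarGroupLogger:
    """Toy stand-in for add_scalars: stores (step, value) pairs per sub-tag."""
    def __init__(self):
        self.curves = {}

    def add_scalars(self, main_tag, tag_scalar_dict, global_step):
        for tag, value in tag_scalar_dict.items():
            # Each dict key becomes its own curve, e.g. "data/scalar_group/xsinx".
            self.curves.setdefault(main_tag + "/" + tag, []).append((global_step, value))

logger = ScalarGroupLogger()
for x in range(100):
    logger.add_scalars('data/scalar_group',
                       {"xsinx": x * math.sin(x), "xcosx": x * math.cos(x)}, x)

print(sorted(logger.curves))  # ['data/scalar_group/xcosx', 'data/scalar_group/xsinx']
```

This is why the TensorBoard plot for 'data/scalar_group' below shows two curves, one per key.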

Test code:

# ----------------------------------- 1 scalar and scalars -----------------------------------
# flag = 0
flag = 1
if flag:

    max_epoch = 100

    writer = SummaryWriter(comment='test_comment', filename_suffix="test_suffix")

    for x in range(max_epoch):

        writer.add_scalar('y=2x', x * 2, x)
        writer.add_scalar('y=pow_2_x', 2 ** x, x)

        writer.add_scalars('data/scalar_group', {"xsinx": x * np.sin(x),
                                                 "xcosx": x * np.cos(x)}, x)

    writer.close()

Output: the event file is written under ./runs.
From the same directory, launch TensorBoard with: tensorboard --logdir=./runs
Then open http://localhost:6006/ in a browser.
The call writer.add_scalars('data/scalar_group', {"xsinx": x * np.sin(x), "xcosx": x * np.cos(x)}, x) produces a single plot with the two curves xsinx and xcosx.
writer.add_scalar('y=2x', x * 2, x) produces a straight line, and writer.add_scalar('y=pow_2_x', 2 ** x, x) an exponential curve.
3. add_histogram()
Function: plots histograms and multi-quantile line charts.

  • tag: label of the plot; its unique identifier
  • values: the values to summarize
  • global_step: the y-axis value (each step adds one slice)
  • bins: binning scheme for the histogram
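Under the hood, add_histogram reduces values to bin counts before writing them to the event file. The binning step can be previewed with np.histogram (a rough analogy; TensorBoard's default bins follow its own scheme):

```python
import numpy as np

np.random.seed(0)
data_normal = np.random.normal(size=1000)

# 10 equal-width bins over the data range, like a basic histogram summary.
counts, edges = np.histogram(data_normal, bins=10)

print(counts.sum())  # 1000: every sample falls into exactly one bin
print(len(edges))    # 11: one more edge than bins
```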

Test code:

# ----------------------------------- 2 histogram -----------------------------------
# flag = 0
flag = 1
if flag:

    writer = SummaryWriter(comment='test_comment', filename_suffix="test_suffix")

    for x in range(2):

        np.random.seed(x)

        data_union = np.arange(100)
        data_normal = np.random.normal(size=1000)

        writer.add_histogram('distribution union', data_union, x)
        writer.add_histogram('distribution normal', data_normal, x)

        plt.subplot(121).hist(data_union, label="union")
        plt.subplot(122).hist(data_normal, label="normal")
        plt.legend()
        plt.show()

    writer.close()

Output: matplotlib displays the two histograms, and the same distributions appear in TensorBoard's HISTOGRAMS and DISTRIBUTIONS views, one slice per step.

III. Model Metric Monitoring

Monitor the model during training with the methods above.
Full code:

import os
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torch.utils.tensorboard import SummaryWriter
import torch.optim as optim
from matplotlib import pyplot as plt
from model.lenet import LeNet
from tools.my_dataset import RMBDataset
from tools.common_tools import set_seed

set_seed()  # set the random seed
rmb_label = {"1": 0, "100": 1}

# hyperparameter settings
MAX_EPOCH = 10
BATCH_SIZE = 16
LR = 0.01
log_interval = 10
val_interval = 1

# ============================ step 1/5 Data ============================

split_dir = os.path.join("..", "..", "data", "rmb_split")
train_dir = os.path.join(split_dir, "train")
valid_dir = os.path.join(split_dir, "valid")

norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomGrayscale(p=0.8),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

valid_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

# build Dataset instances
train_data = RMBDataset(data_dir=train_dir, transform=train_transform)
valid_data = RMBDataset(data_dir=valid_dir, transform=valid_transform)

# build DataLoaders
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
valid_loader = DataLoader(dataset=valid_data, batch_size=BATCH_SIZE)

# ============================ step 2/5 Model ============================

net = LeNet(classes=2)
net.initialize_weights()

# ============================ step 3/5 Loss function ============================
criterion = nn.CrossEntropyLoss()                                                   # choose the loss function

# ============================ step 4/5 Optimizer ============================
optimizer = optim.SGD(net.parameters(), lr=LR, momentum=0.9)                        # choose the optimizer
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)     # set the learning-rate decay policy

# ============================ step 5/5 Training ============================
train_curve = list()
valid_curve = list()

iter_count = 0

# build the SummaryWriter
writer = SummaryWriter(comment='test_your_comment', filename_suffix="_test_your_filename_suffix")

for epoch in range(MAX_EPOCH):

    loss_mean = 0.
    correct = 0.
    total = 0.

    net.train()
    for i, data in enumerate(train_loader):

        iter_count += 1

        # forward
        inputs, labels = data
        outputs = net(inputs)

        # backward
        optimizer.zero_grad()
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

        # count correct predictions
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).squeeze().sum().numpy()

        # print training info
        loss_mean += loss.item()
        train_curve.append(loss.item())
        if (i+1) % log_interval == 0:
            loss_mean = loss_mean / log_interval
            print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
            loss_mean = 0.

        # log data to the event file
        writer.add_scalars("Loss", {"Train": loss.item()}, iter_count)
        writer.add_scalars("Accuracy", {"Train": correct / total}, iter_count)

    # each epoch: log gradients and weights
    for name, param in net.named_parameters():
        writer.add_histogram(name + '_grad', param.grad, epoch)
        writer.add_histogram(name + '_data', param, epoch)

    scheduler.step()  # update the learning rate

    # validate the model
    if (epoch+1) % val_interval == 0:

        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        net.eval()
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                outputs = net(inputs)
                loss = criterion(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).squeeze().sum().numpy()

                loss_val += loss.item()

            valid_curve.append(loss.item())
            print("Valid:\t Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, j+1, len(valid_loader), loss_val, correct_val / total_val))

            # log data to the event file
            writer.add_scalars("Loss", {"Valid": np.mean(valid_curve)}, iter_count)
            writer.add_scalars("Accuracy", {"Valid": correct_val / total_val}, iter_count)

train_x = range(len(train_curve))
train_y = train_curve

train_iters = len(train_loader)
valid_x = np.arange(1, len(valid_curve)+1) * train_iters*val_interval # valid records one loss per epoch, so convert epoch indices to iteration counts
valid_y = valid_curve

plt.plot(train_x, train_y, label='Train')
plt.plot(valid_x, valid_y, label='Valid')

plt.legend(loc='upper right')
plt.ylabel('loss value')
plt.xlabel('Iteration')
plt.show()
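The valid_x conversion above maps epoch-level records onto the iteration axis so that both curves share one x-axis. With the numbers from this run (10 iterations per epoch and val_interval=1, both assumed from the log below), the mapping works out as:

```python
import numpy as np

train_iters = 10        # iterations per epoch in this run (assumed from the log)
val_interval = 1        # validate every epoch
num_valid_records = 10  # one validation loss per epoch over 10 epochs

# Each epoch-level record lands at the iteration where that epoch ended.
valid_x = np.arange(1, num_valid_records + 1) * train_iters * val_interval
print(valid_x.tolist())  # [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```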

Output:

Training:Epoch[000/010] Iteration[010/010] Loss: 0.6846 Acc:53.75%
Valid:   Epoch[000/010] Iteration[002/002] Loss: 0.9805 Acc:53.75%
Training:Epoch[001/010] Iteration[010/010] Loss: 0.4099 Acc:85.00%
Valid:   Epoch[001/010] Iteration[002/002] Loss: 0.0829 Acc:85.00%
Training:Epoch[002/010] Iteration[010/010] Loss: 0.1470 Acc:94.38%
Valid:   Epoch[002/010] Iteration[002/002] Loss: 0.0035 Acc:94.38%
Training:Epoch[003/010] Iteration[010/010] Loss: 0.4276 Acc:88.12%
Valid:   Epoch[003/010] Iteration[002/002] Loss: 0.2250 Acc:88.12%
Training:Epoch[004/010] Iteration[010/010] Loss: 0.3169 Acc:87.50%
Valid:   Epoch[004/010] Iteration[002/002] Loss: 0.1232 Acc:87.50%
Training:Epoch[005/010] Iteration[010/010] Loss: 0.2026 Acc:91.88%
Valid:   Epoch[005/010] Iteration[002/002] Loss: 0.0132 Acc:91.88%
Training:Epoch[006/010] Iteration[010/010] Loss: 0.1064 Acc:95.62%
Valid:   Epoch[006/010] Iteration[002/002] Loss: 0.0002 Acc:95.62%
Training:Epoch[007/010] Iteration[010/010] Loss: 0.0482 Acc:99.38%
Valid:   Epoch[007/010] Iteration[002/002] Loss: 0.0006 Acc:99.38%
Training:Epoch[008/010] Iteration[010/010] Loss: 0.0069 Acc:100.00%
Valid:   Epoch[008/010] Iteration[002/002] Loss: 0.0000 Acc:100.00%
Training:Epoch[009/010] Iteration[010/010] Loss: 0.0133 Acc:99.38%
Valid:   Epoch[009/010] Iteration[002/002] Loss: 0.0000 Acc:99.38%

Visualization: the loss curve recorded in TensorBoard matches the matplotlib plot.
In the histogram views, once the loss becomes small the gradients of the weights w are correspondingly small, and the same holds for the fully connected layers.
