PyTorch documentation: learning and usage notes (continuously updated)

Reference: https://pytorch.org/docs

The modules used in my projects are summarized below:

(1) torch.nn.AvgPool1d: applies average pooling along the time axis to aggregate the information of a fixed-length sequence.

Note that the input dimensions must be ordered as (N, C, L).

import torch
import torch.nn as nn
input_tensor = torch.randn([16, 32, 4096])  # [batch_size=16, time_steps=32, features=4096]
avgpool = nn.AvgPool1d(kernel_size=32)      # kernel_size=32 so that the output length becomes 1
mid_tensor = input_tensor.transpose(2, 1)   # (16, 32, 4096) --> (16, 4096, 32)
output = avgpool(mid_tensor)                # output size: (16, 4096, 1)
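
As a quick sanity check (my own addition, not from the docs): pooling with kernel_size equal to the full sequence length should agree with a plain mean over that axis.

import torch
import torch.nn as nn

x = torch.randn([16, 4096, 32])             # already in (N, C, L) order
pooled = nn.AvgPool1d(kernel_size=32)(x)    # (16, 4096, 1)
assert torch.allclose(pooled, x.mean(dim=2, keepdim=True))  # full-length pooling == mean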

(2) torch.nn.AdaptiveAvgPool1d(output_size)

Here you specify the output size directly; however, the input still has to be in (N, C, L) order, so be careful to avoid mistakes.

import torch
import torch.nn as nn

avgpool = nn.AdaptiveAvgPool1d(output_size=1)  # set the output length to 1
input_tensor = torch.randn([16, 32, 4096])     # [batch_size=16, time_steps=32, features=4096]
mid_tensor = input_tensor.transpose(2, 1)      # [16, 32, 4096] --> [16, 4096, 32]
output = avgpool(mid_tensor)                   # [16, 4096, 32] --> output size: [16, 4096, 1]
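
One practical consequence (my own sketch, with made-up sequence lengths): since only the output size is fixed, a single AdaptiveAvgPool1d module can pool inputs of varying length down to the same shape.

import torch
import torch.nn as nn

avgpool = nn.AdaptiveAvgPool1d(output_size=1)
for length in [32, 50, 100]:               # arbitrary example lengths
    x = torch.randn([16, 4096, length])    # (N, C, L)
    print(avgpool(x).shape)                # always torch.Size([16, 4096, 1])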

(3) torch.optim.lr_scheduler

# `optimizer` below is any torch.optim optimizer that has already been constructed
# multiply lr by gamma once at each of epochs 5, 8, and 10 (i.e. *0.1, *0.1^2, *0.1^3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[5, 8, 10], gamma=0.1, last_epoch=-1)
# decay lr every step_size (5) epochs by a factor of gamma (0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1, last_epoch=-1)
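
To see a schedule in action, here is a minimal runnable sketch (the model and epoch count are placeholders of my own); scheduler.step() is called once per epoch, after optimizer.step().

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(12):
    # ... forward pass, loss.backward() would go here ...
    optimizer.step()
    scheduler.step()                       # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())  # lr drops by a factor of 10 every 5 epochs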
