Basic Networks in NLP

Text in NLP can generally be represented as a tensor of shape [batch, seq, embed_dim].
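For example, a batch of token IDs passed through an embedding layer produces exactly this shape; a minimal sketch (the vocabulary size and dimensions here are arbitrary):

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=16)  # vocab of 1000, embed_dim 16
token_ids = torch.randint(0, 1000, (20, 50))                     # [batch, seq]
text = embedding(token_ids)
print(text.shape)  # torch.Size([20, 50, 16])  [batch, seq, embed_dim]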

  1. CNN
    One-dimensional convolution is usually used. Because Conv1d convolves over the last dimension, text must be reshaped from [batch, seq, embed_dim] to [batch, embed_dim, seq]; see the transpose sketch after the code block below.
import torch
import torch.nn as nn

# One-dimensional convolution operates over the last dimension
m = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
input = torch.randn(20, 16, 50)  # [batch, embed_dim, seq]; text of shape [batch, seq, embed_dim] must be transposed first
output = m(input)  # [20, 33, 24] = [batch, out_channels, L_out]
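Since text tensors typically arrive as [batch, seq, embed_dim], a minimal sketch of the transpose step, with shapes mirroring the example above:

import torch
import torch.nn as nn

text = torch.randn(20, 50, 16)  # [batch, seq, embed_dim]
m = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
output = m(text.transpose(1, 2))  # transpose(1, 2) -> [batch, embed_dim, seq] = [20, 16, 50]
print(output.shape)  # torch.Size([20, 33, 24])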

$$L_{out} = \left\lfloor \frac{L_{in} + 2 \times padding - dilation \times (kernel\_size - 1) - 1}{stride} + 1 \right\rfloor$$
Input shape: [batch, in_channels, seq]
Output shape: [batch, out_channels, L_out]
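As a sanity check, plugging the example above into the formula ($L_{in}=50$, $padding=0$, $dilation=1$, $kernel\_size=3$, $stride=2$):

$$L_{out} = \left\lfloor \frac{50 + 0 - 1 \times (3 - 1) - 1}{2} + 1 \right\rfloor = \lfloor 24.5 \rfloor = 24$$

which matches the output shape [20, 33, 24].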
Two-dimensional convolution is analogous, except that it convolves over the last two dimensions; the same output-length formula applies to each dimension independently.

# Two-dimensional convolution
m = nn.Conv2d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
# non-square kernels and unequal stride and with padding
# m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
# non-square kernels and unequal stride and with padding and dilation
# m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
input = torch.randn(20, 16, 50, 100)
output = m(input) # [20, 33, 24, 49]
# Three-dimensional convolution
# With square kernels and equal stride
m = nn.Conv3d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
# non-square kernels and unequal stride and with padding
# m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
input = torch.randn(20, 16, 10, 50, 100)
output = m(input)  # [20, 33, 4, 24, 49]
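Applying the output-length formula per dimension reproduces these shapes; a small sketch (the helper conv_out is my own naming, not part of PyTorch):

import torch
import torch.nn as nn

def conv_out(l_in, kernel_size, stride=1, padding=0, dilation=1):
    # per-dimension output length, following the formula above
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

m = nn.Conv2d(16, 33, kernel_size=3, stride=2)
output = m(torch.randn(20, 16, 50, 100))
assert output.shape == (20, 33, conv_out(50, 3, stride=2), conv_out(100, 3, stride=2))  # [20, 33, 24, 49]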
  2. RNN
    Input shape: [seq, batch, input_size]
    h0: [num_layers*num_directions, batch, hidden], optional
    Output shapes:
    output: [seq, batch, num_directions*hidden]
    hn: [num_layers*num_directions, batch, hidden]
# rnn
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=False)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
h0 = torch.randn(2, 3, 20)     # [num_layers*num_directions, batch, hidden] = [2*1, 3, 20]
output, hn = rnn(input, h0)    # h0 is optional; it defaults to zeros
# output: [seq, batch, num_directions*hidden]  [5, 3, 20]; the hidden dimension is doubled when bidirectional
# hn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
# num_directions: 2 if bidirectional, 1 otherwise
# batch_first: if True, input is [batch, seq, input_size]; default is False, i.e. [seq, batch, input_size]
# with batch_first=False: if bidirectional=False, output[-1] equals hn[-1]; if bidirectional=True they differ
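The last two comments can be checked directly; a quick sketch, with the bidirectional variant added here purely for illustration:

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=False)
input = torch.randn(5, 3, 10)
output, hn = rnn(input)
# unidirectional: the last time step of output equals the final hidden state of the last layer
assert torch.equal(output[-1], hn[-1])

birnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
bi_output, bi_hn = birnn(input)
print(bi_output.shape)  # torch.Size([5, 3, 40])  both directions concatenated on the hidden dim
print(bi_hn.shape)      # torch.Size([4, 3, 20])  num_layers*num_directions = 4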
  3. LSTM
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
# h0 and c0 are optional; if omitted, both default to zeros
h0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
c0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
# There are 2 LSTM layers here; output is the last layer's hidden state at every time step,
# so its shape depends on the sequence length, not on the number of layers
output, (hn, cn) = lstm(input, (h0, c0))
# output: [seq, batch, hidden_size]  [5, 3, 20]
# hn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
# cn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
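A brief sketch of the two optional behaviors noted above: omitting (h0, c0) initializes both to zeros, and batch_first=True swaps the first two input dimensions while hn and cn keep their layout either way:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(3, 5, 10)   # batch_first=True: [batch, seq, input_size]
output, (hn, cn) = lstm(x)  # no (h0, c0) passed: both default to zeros
print(output.shape)  # torch.Size([3, 5, 20])  [batch, seq, hidden]
print(hn.shape)      # torch.Size([2, 3, 20])  still [num_layers*num_directions, batch, hidden]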