1. nn.MSELoss(reduce=True, size_average=True)
Mean squared error loss, used to measure the difference between predicted and actual values. The formula is

loss(y, ŷ) = (1/n) · Σᵢ (yᵢ − ŷᵢ)²

where y denotes the true values and ŷ the predicted values.

The function shown takes two parameters; when neither is set, the defaults reduce=True, size_average=True make it return a scalar mean. (In current PyTorch both arguments are deprecated in favor of reduction='mean', which is also the default behavior.)
import torch

# create two tensors
a = torch.tensor([[1, 2], [3, 4]])
b = torch.tensor([[2, 3], [4, 4]])
# define the loss function and assign it to loss_fn
# (reduce=True, size_average=True are deprecated; reduction='mean' is the modern equivalent)
loss_fn = torch.nn.MSELoss(reduction='mean')
# a is the input (prediction); Variable is no longer needed in modern PyTorch
input = a
# b is the target (ground truth)
target = b
# compute the loss (MSELoss expects floating-point tensors)
loss = loss_fn(input.float(), target.float())
print(loss)
Output: tensor(0.7500)
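The three reduction behaviors can be compared side by side on the same tensors; a minimal sketch (reusing the a and b values above):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[2., 3.], [4., 4.]])

# reduction='none' keeps the element-wise squared errors
print(torch.nn.MSELoss(reduction='none')(a, b))  # tensor([[1., 1.], [1., 0.]])
# reduction='sum' adds them up: 1 + 1 + 1 + 0 = 3
print(torch.nn.MSELoss(reduction='sum')(a, b))   # tensor(3.)
# reduction='mean' (the default) averages them: 3 / 4 = 0.75
print(torch.nn.MSELoss(reduction='mean')(a, b))  # tensor(0.7500)
```

This makes explicit where the 0.7500 above comes from: three squared differences of 1 and one of 0, averaged over four elements.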
2. nn.Linear()
x = torch.randn(128, 20)  # input: a 128 x 20 matrix
m = torch.nn.Linear(20, 30)  # weight: a 30 x 20 matrix, transposed before multiplying x
output = m(x)  # output
print('m.weight.shape:\n ', m.weight.shape)
print('m.bias.shape:\n', m.bias.shape)
print('output.shape:\n', output.shape)
m.weight.shape:
torch.Size([30, 20])
m.bias.shape:
torch.Size([30])
output.shape:
torch.Size([128, 30])
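The shapes line up because nn.Linear computes output = x @ Wᵀ + b, with W of shape (30, 20) and b of shape (30,). A quick sketch verifying this equivalence by hand:

```python
import torch

x = torch.randn(128, 20)
m = torch.nn.Linear(20, 30)

# reproduce the layer's computation manually: y = x @ W^T + b
manual = x @ m.weight.T + m.bias

# the manual result matches the layer's output (up to floating-point tolerance)
assert torch.allclose(m(x), manual, atol=1e-6)
print(manual.shape)  # torch.Size([128, 30])
```

This is why the weight is stored as (out_features, in_features): it is transposed at multiplication time, so (128, 20) @ (20, 30) yields (128, 30).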