Common Loss Functions

1. nn.L1Loss

Takes the mean of the absolute error between the predicted values and the true values.

Note: the data type must be double or float.

import torch
import torch.nn as nn
import numpy as np

x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()   # target values
pre = torch.ones(3).double()           # predictions
loss = nn.L1Loss()
print(loss(pre, label))  # tensor(1.3333, dtype=torch.float64)
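As a sanity check, the same mean absolute error can be computed by hand; this NumPy sketch is not part of the original post:

```python
import numpy as np

label = np.array([1.0, 2.0, 4.0])   # true values
pre = np.ones(3)                    # predictions, all 1.0

# mean of |pre - label| = (0 + 1 + 3) / 3 = 4/3
l1 = np.mean(np.abs(pre - label))
print(l1)  # 1.3333...
```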

2. nn.SmoothL1Loss

Computes the error piecewise: with the default beta = 1, it uses 0.5 * x^2 when the absolute error |x| is below 1, and |x| - 0.5 otherwise, which makes it less sensitive to outliers than MSE.

import torch
import torch.nn as nn
import numpy as np

x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()   # target values
pre = torch.ones(3).double()           # predictions
loss = nn.SmoothL1Loss()
print(loss(pre, label))  # tensor(1., dtype=torch.float64)
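The piecewise rule above can be verified manually; a NumPy sketch (not part of the original post):

```python
import numpy as np

label = np.array([1.0, 2.0, 4.0])
pre = np.ones(3)

diff = np.abs(pre - label)          # [0, 1, 3]
# 0.5 * x^2 where |x| < 1, |x| - 0.5 otherwise (beta = 1)
elementwise = np.where(diff < 1, 0.5 * diff**2, diff - 0.5)
smooth_l1 = elementwise.mean()      # (0 + 0.5 + 2.5) / 3
print(smooth_l1)  # 1.0
```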

3. nn.MSELoss

The mean of the squared differences between the predicted values and the true values.

import torch
import torch.nn as nn
import numpy as np

x = np.array([1, 2, 4])
label = torch.from_numpy(x).double()   # target values
pre = torch.ones(3).double()           # predictions
loss = nn.MSELoss()
print(loss(pre, label))  # tensor(3.3333, dtype=torch.float64)
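Again, a hand computation confirms the result; a NumPy sketch, not from the original post:

```python
import numpy as np

label = np.array([1.0, 2.0, 4.0])
pre = np.ones(3)

# mean of (pre - label)^2 = (0 + 1 + 9) / 3 = 10/3
mse = np.mean((pre - label) ** 2)
print(mse)  # 3.3333...
```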

4. nn.NLLLoss()

Negative log-likelihood (NLL) loss.

import torch
import torch.nn as nn
import numpy as np

x = np.ones([2, 3])           # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)     # per-class scores (expected to be log-probabilities)
label = torch.ones(2).long()  # class indices, must be long
loss = nn.NLLLoss()
print(loss(pre, label))  # tensor(-2., dtype=torch.float64)

Note: the input has shape N x C, and the label (target) must have dtype long.
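NLLLoss simply picks the entry at the target index of each row, negates it, and averages; a NumPy sketch (not part of the original post):

```python
import numpy as np

scores = np.ones((2, 3))   # same input as above
scores[0][1] = 3
target = np.array([1, 1])  # class index per sample

# pick scores[i, target[i]] for each row, then negate and average
picked = scores[np.arange(2), target]   # [3, 1]
nll = -picked.mean()
print(nll)  # -2.0
```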

5. nn.CrossEntropyLoss()

Cross-entropy loss.

import torch
import torch.nn as nn
import numpy as np

x = np.ones([2, 3])           # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)     # raw (unnormalized) per-class scores
label = torch.ones(2).long()  # class indices, must be long
loss = nn.CrossEntropyLoss()
print(loss(pre, label))  # tensor(0.6691, dtype=torch.float64)

Manual calculation:

Note: it is equivalent to applying nn.LogSoftmax() followed by nn.NLLLoss().
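That equivalence can be checked by hand in plain NumPy, assuming the same 2 x 3 input as above (this sketch is not part of the original post):

```python
import numpy as np

scores = np.ones((2, 3))
scores[0][1] = 3
target = np.array([1, 1])

# log-softmax over the class dimension: x - log(sum(exp(x)))
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
# NLL step: negate the target entries and average over the batch
ce = -log_probs[np.arange(2), target].mean()
print(ce)  # ~0.6691, matching nn.CrossEntropyLoss
```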

 

Note: a common mistake is to swap the label and pre arguments; PyTorch loss modules expect them in the order (input, target).

 

1. nn.LogSoftmax()

import torch
import torch.nn as nn
import numpy as np

x = np.ones([2, 3])   # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)
m = nn.LogSoftmax(dim=1)      # normalize over the class dimension
label = torch.ones(2).long()
loss = nn.NLLLoss()
print(loss(m(pre), label))  # tensor(0.6691, ...), same as CrossEntropyLoss

2. nn.Softmax()

import torch
import torch.nn as nn
import numpy as np

x = np.ones([2, 3])   # input is of size N x C = 2 x 3
x[0][1] = 3
pre = torch.from_numpy(x)
m = nn.Softmax(dim=1)   # probabilities along the class dimension
print(m(pre))
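The same softmax can be reproduced by hand, which makes the per-row normalization explicit; a NumPy sketch, not from the original post:

```python
import numpy as np

x = np.ones((2, 3))
x[0][1] = 3

# softmax over the class dimension: exp(x) normalized per row
e = np.exp(x)
softmax = e / e.sum(axis=1, keepdims=True)
print(softmax)              # each row is a probability distribution
print(softmax.sum(axis=1))  # [1. 1.]
```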

 
