PyTorch Learning 9 --- Loss Functions

Loss Functions (Part 1)

The concept of a loss function

A loss function measures the difference between a model's output and the ground-truth label.

When discussing loss functions, three related terms often come up: loss function, cost function, and objective function. How do they differ, and how are they related?

The loss function measures the error of a single sample: Loss = f(\hat{y}, y)

The cost function is the average error over the whole sample set: Cost = \frac{1}{N}\sum_{i=1}^{N}f(\hat{y}_i, y_i)

The objective function is a broader concept; it usually consists of the cost plus a regularization term: Obj = Cost + Regularization
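A minimal sketch of my own (not from the original notes) of combining a data cost with an L2 regularizer into one objective:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                      # a toy model
x = torch.randn(4, 10)
y = torch.randint(0, 2, (4,))

cost = nn.CrossEntropyLoss()(model(x), y)                     # Cost: average loss over the batch
l2_reg = sum((p ** 2).sum() for p in model.parameters())     # Regularization: L2 penalty on the weights
obj = cost + 1e-4 * l2_reg                                    # Objective = Cost + Regularization
obj.backward()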

The base Loss class in PyTorch:

class _Loss(Module):
    def __init__(self, size_average=None, reduce=None, reduction="mean"):
        super(_Loss, self).__init__()
        if size_average is not None or reduce is not None:
            self.reduction = _Reduction.legacy_get_string(size_average, reduce)
        else:
            self.reduction = reduction

The loss classes inherit from Module, so a loss behaves like a network layer. _Loss takes three parameters; size_average and reduce are deprecated, and their functionality is now covered by reduction.

 

Cross-Entropy Loss

Function: combines nn.LogSoftmax() and nn.NLLLoss() to compute the cross-entropy loss.

nn.CrossEntropyLoss(weight=None,size_average=None,ignore_index=-100,reduce=None,reduction="mean")

Main parameters:

  • weight: per-class weight applied to the loss
  • ignore_index: class index to ignore
  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar

Cross-entropy = information entropy + relative entropy

Entropy describes the uncertainty of an event: the more uncertain the event, the larger its entropy. For example, "it will rain tomorrow" has much higher entropy than "the sun will rise tomorrow", because whether it rains is highly uncertain, while the sun will rise regardless. Entropy is the expectation of self-information.

H(P) = E_{x \sim P}[I(x)] = -\sum_{i=1}^{N}P(x_i)\log P(x_i)

Self-information measures the uncertainty of a single outcome or event; p(x) is the probability of that event:

I(x) = -log[p(x)]

Relative entropy, also called KL divergence, measures the difference between two distributions, i.e. a "distance" between them. It is not a true distance function, however, because it is not symmetric.

D_{KL}(P||Q) = E_{x \sim P}\left[\log\frac{P(x)}{Q(x)}\right]

Here P is the true distribution and Q is the distribution output by the model; we use Q to fit and approximate P, which is why the measure is not symmetric.

Cross-entropy:

H(P,Q) = -\sum_{i=1}^{N}P(x_i)\log Q(x_i)

D_{KL}(P||Q) = E_{x \sim P}\left[\log\frac{P(x)}{Q(x)}\right]

             = E_{x \sim P}[\log P(x) - \log Q(x)]

             = \sum_{i=1}^{N} P(x_i)[\log P(x_i) - \log Q(x_i)]

             = \sum_{i=1}^{N} P(x_i)\log P(x_i) - \sum_{i=1}^{N} P(x_i)\log Q(x_i)

             = H(P,Q) - H(P)

So the cross-entropy is H(P,Q) = D_{KL}(P||Q) + H(P). Minimizing the cross-entropy is therefore equivalent to minimizing the relative entropy: P is the true distribution of the training samples and Q is the model's output distribution, and since the training set is fixed, H(P) is a constant that can be ignored during optimization.

H(P,Q) = -\sum_{i=1}^{N}P(x_i)\log Q(x_i)
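To make this concrete, here is a small numeric check of my own (two hand-picked distributions, not from the original notes):

import numpy as np

P = np.array([0.7, 0.2, 0.1])   # true distribution
Q = np.array([0.5, 0.3, 0.2])   # model output distribution

H_P   = -np.sum(P * np.log(P))           # entropy H(P)
H_PQ  = -np.sum(P * np.log(Q))           # cross-entropy H(P, Q)
KL_PQ = np.sum(P * np.log(P / Q))        # relative entropy D_KL(P||Q)
KL_QP = np.sum(Q * np.log(Q / P))        # D_KL(Q||P)

print(H_PQ, H_P + KL_PQ)   # equal: H(P,Q) = H(P) + D_KL(P||Q)
print(KL_PQ, KL_QP)        # not equal: the KL divergence is not symmetric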

In PyTorch, nn.CrossEntropyLoss applies a softmax to the raw scores x and uses the target class as the one-hot distribution P, giving the per-sample loss:

loss(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)

loss(x, class) = weight[class]\left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)

The following code illustrates this:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
flag = 1
if flag:
    # def loss function
    loss_f_none = nn.CrossEntropyLoss(weight=None, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=None, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=None, reduction='mean')

    # forward
    loss_none = loss_f_none(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("Cross Entropy Loss:\n ", loss_none, loss_sum, loss_mean)

The output shows the loss under the three reduction modes: the first is computed element by element, the second is the sum over all elements, and the third is the mean.

Next we verify the result by hand (computing only the first sample):

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    input_1 = inputs.detach().numpy()[idx]      # [1, 2]
    target_1 = target.numpy()[idx]              # [0]

    # first term
    x_class = input_1[target_1]

    # second term
    sigma_exp_x = np.sum(list(map(np.exp, input_1)))
    log_sigma_exp_x = np.log(sigma_exp_x)

    # loss of the first sample
    loss_1 = -x_class + log_sigma_exp_x

    print("loss of the first sample: ", loss_1)

Next, the weight parameter. weight is a vector with one element per class, so every class must be assigned a weight.

# ----------------------------------- weight -----------------------------------
# flag = 0
flag = 1
if flag:
    # def loss function
    weights = torch.tensor([1, 2], dtype=torch.float)
    # weights = torch.tensor([0.7, 0.3], dtype=torch.float)

    loss_f_none_w = nn.CrossEntropyLoss(weight=weights, reduction='none')
    loss_f_sum = nn.CrossEntropyLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.CrossEntropyLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("\nweights: ", weights)
    print(loss_none_w, loss_sum, loss_mean)

The earlier output was the loss without weight; the output here is the loss with weight. The first class has weight 1, so its loss is unchanged, while the second class has weight 2, so its loss is doubled. The mean is also weighted: 1.8210 = 1.3133 + 0.2539 + 0.2539, and 0.3642 = 1.8210 / 5. With weights, the mean is no longer divided by the number of samples but by the total weight.

The code below reproduces this by hand to clarify how the total weight is counted:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    weights = torch.tensor([1, 2], dtype=torch.float)
    weights_all = np.sum(list(map(lambda x: weights.numpy()[x], target.numpy())))  # [0, 1, 1]  # [1 2 2]

    mean = 0
    loss_sep = loss_none.detach().numpy()
    for i in range(target.shape[0]):

        x_class = target.numpy()[i]
        tmp = loss_sep[i] * (weights.numpy()[x_class] / weights_all)
        mean += tmp

    print(mean)

Stepping through in debug mode (or simply summing the per-sample weights 1 + 2 + 2 for targets [0, 1, 1]) shows that weights_all is 5. The mean computed by hand is also 0.3642, matching the library output.

The second loss function in PyTorch:

nn.NLLLoss(weight=None,size_average=None,ignore_index=-100,reduce=None,reduction="mean")

Function: implements the negation step of the negative log-likelihood loss.

Main parameters:

  • weight: per-class weight applied to the loss
  • ignore_index: class index to ignore
  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar

Its formula is:

l(x,y) = L=\left\{ l_1,l_2,...,l_N \right\} 

l_n = -w_{y_n}x_{n,y_n}

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

# fake data
inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
target = torch.tensor([0, 1, 1], dtype=torch.long)
# ----------------------------------- 2 NLLLoss -----------------------------------
# flag = 0
flag = 1
if flag:

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.NLLLoss(weight=weights, reduction='none')
    loss_f_sum = nn.NLLLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.NLLLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target)
    loss_sum = loss_f_sum(inputs, target)
    loss_mean = loss_f_mean(inputs, target)

    # view
    print("\nweights: ", weights)
    print("NLL Loss", loss_none_w, loss_sum, loss_mean)

As the formula shows, NLLLoss only takes the input element at the target index and negates it (optionally scaled by the class weight); it does not apply log-softmax itself.
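As a quick check of my own (not in the original notes), nn.CrossEntropyLoss on the raw scores matches nn.NLLLoss applied to log-softmax outputs:

# flag = 0
flag = 1
if flag:
    log_probs = F.log_softmax(inputs, dim=1)                     # same fake data as above
    loss_nll = nn.NLLLoss(reduction='none')(log_probs, target)
    loss_ce = nn.CrossEntropyLoss(reduction='none')(inputs, target)
    print("NLLLoss(LogSoftmax):", loss_nll)
    print("CrossEntropyLoss:   ", loss_ce)                       # identical values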

The third loss function:

nn.BCELoss(weight=None,size_average=None,reduce=None,reduction="mean")

Function: binary cross-entropy. Note: the input values must lie in [0, 1].

Main parameters:

  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar

BCELoss is the binary special case of the cross-entropy loss. Its formula is:

l_n = -w_n[y_n \cdot \log x_n + (1-y_n) \cdot \log(1-x_n)]

The loss is computed element-wise, one value per output neuron, rather than one value per sample.

# ----------------------------------- 3 BCE Loss -----------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # map the outputs to (0, 1) with a sigmoid, since BCELoss requires inputs in [0, 1]
    inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.BCELoss(weight=weights, reduction='none')
    loss_f_sum = nn.BCELoss(weight=weights, reduction='sum')
    loss_f_mean = nn.BCELoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\nweights: ", weights)
    print("BCE Loss", loss_none_w, loss_sum, loss_mean)

If the sigmoid line above is omitted, the call raises an error stating that the input values must lie in [0, 1]; because our raw inputs contain values greater than 1, they must first be passed through a sigmoid.

The output shows four samples with two neurons each, hence 8 loss values: as described above, the loss is computed neuron by neuron, and sum/mean then reduce over all of them.

Below we compute the loss of the first neuron by hand:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    x_i = inputs.detach().numpy()[idx, idx]
    y_i = target.numpy()[idx, idx]              #

    # loss
    # l_i = -[ y_i * np.log(x_i) + (1-y_i) * np.log(1-y_i) ]      # np.log(0) = nan
    l_i = -y_i * np.log(x_i) if y_i else -(1-y_i) * np.log(1-x_i)

    # print the loss
    print("BCE inputs: ", inputs)
    print("loss of the first neuron: ", l_i)

The fourth loss function:

nn.BCEWithLogitsLoss(weight=None,size_average=None,reduce=None,reduction="mean",pos_weight=None)

Function: combines a sigmoid with binary cross-entropy. Note: do not add a sigmoid layer at the end of the network.

Main parameters:

  • pos_weight: weight applied to positive samples
  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar

Its formula is:

l_n = -w_n[y_n \cdot \log\sigma(x_n) + (1-y_n) \cdot \log(1-\sigma(x_n))], where \sigma is the sigmoid function.

# ----------------------------------- 4 BCE with Logis Loss -----------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1, 1], dtype=torch.float)

    loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none')
    loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum')
    loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean')

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\nweights: ", weights)
    print(loss_none_w, loss_sum, loss_mean)

If the inputs are additionally passed through a sigmoid before calling the loss, the result becomes:

The loss values shrink and are biased, because the sigmoid is then effectively applied twice.

Next we add the pos_weight parameter:

# --------------------------------- pos weight

# flag = 0
flag = 1
if flag:
    inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
    target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)

    target_bce = target

    # no sigmoid here: BCEWithLogitsLoss applies it internally
    # inputs = torch.sigmoid(inputs)

    weights = torch.tensor([1], dtype=torch.float)
    pos_w = torch.tensor([3], dtype=torch.float)        # 3

    loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none', pos_weight=pos_w)
    loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum', pos_weight=pos_w)
    loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean', pos_weight=pos_w)

    # forward
    loss_none_w = loss_f_none_w(inputs, target_bce)
    loss_sum = loss_f_sum(inputs, target_bce)
    loss_mean = loss_f_mean(inputs, target_bce)

    # view
    print("\npos_weights: ", pos_w)
    print(loss_none_w, loss_sum, loss_mean)

pos_weight multiplies the loss term of the positive samples (elements whose target equals 1).
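As a quick hand check of my own (not in the original notes): for the first element (logit 1, label 1) the pos_weight of 3 scales the positive term, giving roughly 3 * 0.3133 = 0.9398:

# --------------------------------- pos weight, compute by hand
# flag = 0
flag = 1
if flag:
    idx = 0
    x_i = inputs[idx, idx]        # logit of the first element (1.0)
    y_i = target_bce[idx, idx]    # its label (1.0, a positive element)

    # for a positive element, the y*log(sigmoid(x)) term is multiplied by pos_weight
    loss_h = -pos_w * y_i * torch.log(torch.sigmoid(x_i))
    print("first element with pos_weight:", loss_h)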

 

5. nn.L1Loss(size_average=None,reduce=None,reduction="mean")

Function: computes the absolute difference between inputs and target:

l_n = |x_n - y_n|

6. nn.MSELoss(size_average=None,reduce=None,reduction="mean")

Function: computes the squared difference between inputs and target:

l_n = (x_n - y_n)^2

Main parameters:

  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
from tools.common_tools import set_seed

set_seed(1)  # set the random seed

# ------------------------------------------------- 5 L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.ones((2, 2))
    target = torch.ones((2, 2)) * 3

    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs, target)

    print("input:{}\ntarget:{}\nL1 loss:{}".format(inputs, target, loss))# ------------------------------------------------- 6 MSE loss ----------------------------------------------

    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs, target)

    print("MSE loss:{}".format(loss_mse))

7. SmoothL1Loss(size_average=None,reduce=None,reduction="mean")

Function: smoothed L1 loss

loss(x, y) = \frac{1}{n}\sum_i z_i

where z_i = 0.5(x_i - y_i)^2 if |x_i - y_i| < 1, and z_i = |x_i - y_i| - 0.5 otherwise.

Main parameters:

  • reduction: reduction mode, one of none/sum/mean
    • none: per-element loss, no reduction
    • sum: sum of all elements, returns a scalar
    • mean: (weighted) mean, returns a scalar
# ------------------------------------------------- 7 Smooth L1 loss ----------------------------------------------
# flag = 0
flag = 1
if flag:
    inputs = torch.linspace(-3, 3, steps=500)
    target = torch.zeros_like(inputs)

    loss_f = nn.SmoothL1Loss(reduction='none')

    loss_smooth = loss_f(inputs, target)

    loss_l1 = np.abs(inputs.numpy())

    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()

8. PoissonNLLLoss(log_input=True,full=False,size_average=None,eps=1e-8,reduce=None,reduction="mean")

Function: negative log-likelihood loss for a Poisson distribution

log_input = True:  loss(input, target) = exp(input) - target * input

log_input = False: loss(input, target) = input - target * log(input + eps)

Main parameters:

  • log_input: whether the input is already on the log scale; selects which formula above is used
  • full: whether to compute the full loss, i.e. include the Stirling approximation term (default: False)
  • eps: small constant added to avoid log(0) when log_input = False
# ------------------------------------------------- 8 Poisson NLL Loss ----------------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.randn((2, 2))
    target = torch.randn((2, 2))

    loss_f = nn.PoissonNLLLoss(log_input=True, full=False, reduction='none')
    loss = loss_f(inputs, target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))

Below we compute the loss of the first element by hand:

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    loss_1 = torch.exp(inputs[idx, idx]) - target[idx, idx]*inputs[idx, idx]

    print("第一個元素loss:", loss_1)

9. nn.KLDivLoss(size_average=None,reduce=None,reduction="mean")

Function: computes the KL divergence (relative entropy)

Note: the input must already be log-probabilities, e.g. computed with nn.LogSoftmax().

D_{KL}(P||Q) = E_{x \sim P}\left[\log\frac{P(x)}{Q(x)}\right] = E_{x \sim P}[\log P(x) - \log Q(x)] = \sum_{i=1}^{N}P(x_i)(\log P(x_i) - \log Q(x_i))

l_n = y_n \cdot (\log y_n - x_n), which is the loss for a single element (x_n is expected to be a log-probability).

Main parameters:

  • reduction: reduction mode, one of none/sum/mean/batchmean
    • batchmean: sum of all elements divided by the batch size
    • mean: (weighted) mean, returns a scalar
    • sum: sum of all elements, returns a scalar
    • none: per-element loss, no reduction
# ------------------------------ 9 KL Divergence Loss --------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    inputs_log = torch.log(inputs)
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)

    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')

    loss_none = loss_f_none(inputs, target)
    loss_mean = loss_f_mean(inputs, target)
    loss_bs_mean = loss_f_bs_mean(inputs, target)

    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))

Above, loss_mean sums all element losses and divides by the number of elements (6), while batchmean divides the same sum by the batch size (2). Note that this demo passes the raw probabilities inputs (inputs_log is computed but unused), so the values follow the formula l_n = y_n(\log y_n - x_n) literally rather than being a true KL divergence. Below we verify the first element by hand:

# --------------------------------- compute by hand------------------
# flag = 0
flag = 1
if flag:

    idx = 0

    loss_1 = target[idx, idx] * (torch.log(target[idx, idx]) - inputs[idx, idx])

    print("第一個元素loss:", loss_1)

10. nn.MarginRankingLoss(margin=0.0,size_average=None,reduce=None,reduction="mean")

Function: computes a margin-based ranking loss between two inputs; used for ranking tasks

Note: this method compares two groups of data and returns an n*n loss matrix.

 

Main parameters:

  • margin: the margin between x1 and x2
  • reduction: reduction mode, one of none/sum/mean

loss(x_1, x_2, y) = \max(0, -y \cdot (x_1 - x_2) + margin)

  • y = 1: we want x1 to be larger than x2; when x1 > x2, no loss is produced
  • y = -1: we want x2 to be larger than x1; when x2 > x1, no loss is produced
# ------------------------------ 10 Margin Ranking Loss -----------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    target = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f_none(x1, x2, target)

    print(loss)
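The result can be verified by hand (a supplementary check of my own). Because x1 and x2 have shape (3, 1) while target has shape (3,), broadcasting produces the 3x3 loss matrix whose entry (i, j) is max(0, -target[j]*(x1[i]-x2[i]) + margin):

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    margin = 0.
    d = x1 - x2                                          # per-row difference, shape (3, 1)
    loss_h = torch.clamp(-target * d + margin, min=0)    # broadcasts (3, 1) against (3,) -> (3, 3)
    print(loss_h)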

11. nn.MultiLabelMarginLoss(size_average=None,reduce=None,reduction="mean")

Function: multi-label margin loss

Example: for a four-class task where sample x belongs to classes 0 and 3, the label is [0, 3, -1, -1], not [1, 0, 0, 1].

Main parameters:

  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \sum_{ij}\frac{\max(0, 1-(x[y[j]] - x[i]))}{x.size(0)}; the denominator is the number of neurons (classes), and each numerator term is a hinge on (target-class score minus non-target-class score)

where i = 0 to x.size(0)-1, j = 0 to y.size(0)-1, y[j] >= 0, and i != y[j] for all i and j.

# ---------------------------------------------- 11 Multi Label Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    loss_f = nn.MultiLabelMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print(loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    x = x[0]
    item_1 = (1-(x[0] - x[1])) + (1 - (x[0] - x[2]))    # [0]
    item_2 = (1-(x[3] - x[1])) + (1 - (x[3] - x[2]))    # [3]

    loss_h = (item_1 + item_2) / x.shape[0]

    print(loss_h)

12. nn.SoftMarginLoss(size_average=None,reduce=None,reduction="mean")

Function: two-class logistic loss

Main parameters:

  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \sum_i \frac{\log(1 + \exp(-y[i] \cdot x[i]))}{x.nelement()}

# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)

    loss_f = nn.SoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("SoftMargin: ", loss)
# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    idx = 0

    inputs_i = inputs[idx, idx]
    target_i = target[idx, idx]

    loss_h = np.log(1 + np.exp(-target_i * inputs_i))

    print(loss_h)

13. nn.MultiLabelSoftMarginLoss(weight=None,size_average=None,reduce=None,reduction="mean")

Function: the multi-label version of SoftMarginLoss

Main parameters:

  • weight: per-class weight applied to the loss
  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = -\frac{1}{C}\sum_i y[i] \cdot \log\left(\frac{1}{1+\exp(-x[i])}\right) + (1-y[i]) \cdot \log\left(\frac{\exp(-x[i])}{1+\exp(-x[i])}\right)

C is the number of classes.

# ---------------------------------------------- 13 MultiLabel SoftMargin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7, 0.8]])
    target = torch.tensor([[0, 1, 1]], dtype=torch.float)

    loss_f = nn.MultiLabelSoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("MultiLabel SoftMargin: ", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    i_0 = torch.log(torch.exp(-inputs[0, 0]) / (1 + torch.exp(-inputs[0, 0])))

    i_1 = torch.log(1 / (1 + torch.exp(-inputs[0, 1])))
    i_2 = torch.log(1 / (1 + torch.exp(-inputs[0, 2])))

    loss_h = (i_0 + i_1 + i_2) / -3

    print(loss_h)

14. nn.MultiMarginLoss(p=1,margin=1.0,weight=None,size_average=None,reduce=None,reduction="mean")

Function: multi-class hinge (margin) loss

Main parameters:

  • p: the exponent, 1 or 2
  • weight: per-class weight applied to the loss
  • margin: the margin value
  • reduction: reduction mode, one of none/sum/mean

loss(x, y) = \frac{\sum_i \max(0, margin - x[y] + x[i])^p}{x.size(0)}

where i \in \{0, ..., x.size(0)-1\} and i \neq y.

# ---------------------------------------------- 14 Multi Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    x = x[0]
    margin = 1

    i_0 = margin - (x[1] - x[0])
    # i_1 = margin - (x[1] - x[1])
    i_2 = margin - (x[1] - x[2])

    loss_h = (i_0 + i_2) / x.shape[0]

    print(loss_h)

15. nn.TripletMarginLoss(margin=1.0,p=2.0,eps=1e-06,swap=False,size_average=None,reduce=None,reduction="mean")

Function: triplet margin loss, commonly used in face verification

Main parameters:

  • p: the norm degree, default 2
  • margin: the margin value
  • reduction: reduction mode, one of none/sum/mean

L(a, p, n) = \max\{d(a_i, p_i) - d(a_i, n_i) + margin, 0\}

d(x_i, y_i) = ||x_i - y_i||_p

# ---------------------------------------------- 15 Triplet Margin Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    loss_f = nn.TripletMarginLoss(margin=1.0, p=1)

    loss = loss_f(anchor, pos, neg)

    print("Triplet Margin Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:

    margin = 1
    a, p, n = anchor[0], pos[0], neg[0]

    d_ap = torch.abs(a-p)
    d_an = torch.abs(a-n)

    loss = d_ap - d_an + margin

    print(loss)

16. nn.HingeEmbeddingLoss(margin=1.0,size_average=None,reduce=None,reduction="mean")

Function: measures the similarity between two inputs; commonly used for nonlinear embeddings and semi-supervised learning

Note: the input x should be the absolute value of the difference between the two inputs being compared.

Main parameters:

  • margin: the margin value
  • reduction: reduction mode, one of none/sum/mean
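The per-element formula (as given in the PyTorch documentation) is:

l_n = x_n, if y_n = 1

l_n = \max\{0, margin - x_n\}, if y_n = -1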

# ---------------------------------------------- 16 Hinge Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    margin = 1.
    loss = max(0, margin - inputs.numpy()[0, 2])

    print(loss)

17. nn.CosineEmbeddingLoss(margin=0.0,size_average=None,reduce=None,reduction="mean")

Function: measures the similarity of two inputs using cosine similarity

Main parameters:

  • margin: a value in [-1, 1]; [0, 0.5] is recommended
  • reduction: reduction mode, one of none/sum/mean
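The loss formula (from the PyTorch documentation) is:

loss(x, y) = 1 - \cos(x_1, x_2), if y = 1

loss(x, y) = \max(0, \cos(x_1, x_2) - margin), if y = -1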

# ---------------------------------------------- 17 Cosine Embedding Loss -----------------------------------------
# flag = 0
flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([[1, -1]], dtype=torch.float)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

# --------------------------------- compute by hand
# flag = 0
flag = 1
if flag:
    margin = 0.

    def cosine(a, b):
        numerator = torch.dot(a, b)
        denominator = torch.norm(a, 2) * torch.norm(b, 2)
        return float(numerator/denominator)

    l_1 = 1 - (cosine(x1[0], x2[0]))

    l_2 = max(0, cosine(x1[0], x2[0]))

    print(l_1, l_2)

18. nn.CTCLoss(blank=0,reduction="mean",zero_infinity=False)

Function: Connectionist Temporal Classification (CTC) loss, for classifying sequential (time-series) data

Main parameters:

  • blank: the index of the blank label
  • zero_infinity: zero out infinite losses and the associated gradients
  • reduction: reduction mode, one of none/sum/mean

# ---------------------------------------------- 18 CTC Loss -----------------------------------------
# flag = 0
flag = 1
if flag:
    T = 50      # Input sequence length
    C = 20      # Number of classes (including blank)
    N = 16      # Batch size
    S = 30      # Target sequence length of longest target in batch
    S_min = 10  # Minimum target length, for demonstration purposes

    # Initialize random batch of input vectors, for *size = (T,N,C)
    inputs = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

    # Initialize random batch of targets (0 = blank, 1:C = classes)
    target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)

    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss()
    loss = ctc_loss(inputs, target, input_lengths, target_lengths)

    print("CTC loss: ", loss)

 

These are the 18 loss functions.

 

 

 

 

 

 
