PyTorch: computing KL divergence with F.kl_div()

First, the official documentation: https://pytorch.org/docs/stable/nn.functional.html

torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean')

Parameters

  • input – Tensor of arbitrary shape

  • target – Tensor of the same shape as input

  • size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

  • reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by the batch size. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
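
To make these reduction options concrete, here is a minimal sketch (the tensors and variable names log_q and p are mine, not from the docs; the input/target convention is explained below):

import torch
import torch.nn.functional as F

log_q = F.log_softmax(torch.randn(4, 5), dim=-1)  # input: log-probabilities
p = F.softmax(torch.randn(4, 5), dim=-1)          # target: probabilities

per_elem = F.kl_div(log_q, p, reduction='none')         # shape (4, 5): one loss term per element
total = F.kl_div(log_q, p, reduction='sum')             # scalar: per_elem.sum()
elem_mean = F.kl_div(log_q, p, reduction='mean')        # scalar: per_elem.mean(), i.e. total / 20
batch_mean = F.kl_div(log_q, p, reduction='batchmean')  # scalar: total / 4, dividing by batch size

print(torch.allclose(total, per_elem.sum()))   # True
print(torch.allclose(batch_mean, total / 4))   # True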

Now, how to use it: the first argument must be a matrix of log-probabilities, and the second a matrix of probabilities. This is important; get it wrong, and the computed KL divergence may come out negative.
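
To see why this matters, here is a small sketch of the failure mode (variable names are mine): if the first argument is a plain probability matrix instead of log-probabilities, the result is not a valid KL divergence and can be negative.

import torch
import torch.nn.functional as F

p_x = F.softmax(torch.randn(4, 5), dim=-1)
p_y = F.softmax(torch.randn(4, 5), dim=-1)

# Wrong: first argument passed as probabilities
wrong = F.kl_div(p_x, p_y, reduction='sum')

# Right: first argument passed as log-probabilities
right = F.kl_div(p_x.log(), p_y, reduction='sum')

print(wrong)  # typically negative -- not a valid KL divergence
print(right)  # always >= 0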

For example, suppose we have two matrices X and Y. Because KL divergence is asymmetric, there is a guiding distribution and a guided one, so the order in which the two matrices are passed must be fixed.

Concretely: if we want Y to guide X, pass X as the first argument and Y as the second. The guided one goes first; then just compute the corresponding probabilities and log-probabilities.

import torch
import torch.nn.functional as F

# Define two matrices
x = torch.randn((4, 5))
y = torch.randn((4, 5))

# Since y guides x, take the log-probabilities of x and the probabilities of y
logp_x = F.log_softmax(x, dim=-1)
p_y = F.softmax(y, dim=-1)

kl_sum = F.kl_div(logp_x, p_y, reduction='sum')
kl_mean = F.kl_div(logp_x, p_y, reduction='mean')

print(kl_sum, kl_mean)


>>> tensor(3.4165) tensor(0.1708)
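
(The exact numbers vary from run to run, since x and y are random.) Note that kl_mean here is kl_sum divided by all 20 elements, not by the batch size 4; per the parameter list above, reduction='batchmean' is the option that divides by the batch size. As a sanity check, the sum reduction matches writing the KL formula out by hand, since F.kl_div computes target * (log(target) - input) element-wise (reusing logp_x and p_y from the snippet above):

manual = (p_y * (p_y.log() - logp_x)).sum()
print(torch.allclose(manual, kl_sum))  # True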

