First, the official documentation: https://pytorch.org/docs/stable/nn.functional.html
torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean')

Parameters:
- input – Tensor of arbitrary shape
- target – Tensor of the same shape as input
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by the batch size. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
Now let's look at how to use it: the first argument is a matrix of log-probabilities, and the second is a matrix of probabilities. This matters; otherwise the computed KL divergence may come out negative.
For example, suppose I have two matrices X and Y. Because KL divergence is asymmetric, there is a "guiding" distribution and a "guided" one, so the order in which the two matrices are passed needs to be pinned down.
Concretely: if you want Y to guide X, pass X as the first argument and Y as the second. In other words, the guided one goes first; then just compute the corresponding probabilities and log-probabilities.
import torch
import torch.nn.functional as F
# Define two matrices
x = torch.randn((4, 5))
y = torch.randn((4, 5))
# Since y guides x, take the log-probabilities of x and the probabilities of y
logp_x = F.log_softmax(x, dim=-1)
p_y = F.softmax(y, dim=-1)
kl_sum = F.kl_div(logp_x, p_y, reduction='sum')
kl_mean = F.kl_div(logp_x, p_y, reduction='mean')
print(kl_sum, kl_mean)
>>> tensor(3.4165) tensor(0.1708)
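As a quick sanity check on the argument order, the 'sum' result above can be reproduced by hand from the definition KL(Y || X) = sum(p_y * (log p_y - log p_x)), reusing the logp_x and p_y computed above (manual_kl is just a name chosen here for illustration):

# Write out KL(Y || X) directly from its definition and compare with F.kl_div
manual_kl = (p_y * (p_y.log() - logp_x)).sum()
print(torch.allclose(manual_kl, kl_sum))   # True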