A Look at Keras's Built-in Loss Functions

This article examines several of the loss functions that ship with Keras and are in common use.

1. categorical_crossentropy vs. sparse_categorical_crossentropy

Let's look first at categorical_crossentropy and sparse_categorical_crossentropy. Comparing their entries in the Keras documentation, the main difference between the two is whether the target is an integer tensor. The documentation also notes how to choose between them: with categorical (one-hot) targets use categorical_crossentropy, and with integer targets use sparse_categorical_crossentropy.

To explain what "integer target" and "categorical target" mean here: an integer target becomes a categorical target after one-hot encoding. For example:

(5 classes)
Integer target: [1, 2, 4]
Categorical target: [[0. 1. 0. 0. 0.]
                     [0. 0. 1. 0. 0.]
                     [0. 0. 0. 0. 1.]]

Keras provides the to_categorical utility to convert the former into the latter:

from keras.utils import to_categorical

# Converts integer labels to one-hot (categorical) labels; when
# num_classes is None it is inferred as max(int_labels) + 1.
categorical_labels = to_categorical(int_labels, num_classes=None)
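
For instance (a minimal sketch, assuming NumPy and Keras are available), converting the integer target from the example above reproduces the categorical target:

import numpy as np
from keras.utils import to_categorical

int_labels = np.array([1, 2, 4])
print(to_categorical(int_labels, num_classes=5))
# [[0. 1. 0. 0. 0.]
#  [0. 0. 1. 0. 0.]
#  [0. 0. 0. 0. 1.]]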

Note that the output argument of both categorical_crossentropy and sparse_categorical_crossentropy is a tensor produced by a softmax. Since a softmax output is a normalized probability distribution over mutually exclusive classes, both losses are meant for multi-class classification problems.
Let's check the source code of the two to verify this:

https://github.com/tensorflow/tensorflow/blob/r1.13/tensorflow/python/keras/backend.py
--------------------------------------------------------------------------------------------------------------------
def categorical_crossentropy(target, output, from_logits=False, axis=-1):
  """Categorical crossentropy between an output tensor and a target tensor.
  Arguments:
      target: A tensor of the same shape as `output`.
      output: A tensor resulting from a softmax
          (unless `from_logits` is True, in which
          case `output` is expected to be the logits).
      from_logits: Boolean, whether `output` is the
          result of a softmax, or is a tensor of logits.
      axis: Int specifying the channels axis. `axis=-1` corresponds to data
          format `channels_last`, and `axis=1` corresponds to data format
          `channels_first`.
  Returns:
      Output tensor.
  Raises:
      ValueError: if `axis` is neither -1 nor one of the axes of `output`.
  """
  rank = len(output.shape)
  axis = axis % rank
  # Note: nn.softmax_cross_entropy_with_logits_v2
  # expects logits, Keras expects probabilities.
  if not from_logits:
    # scale preds so that the class probas of each sample sum to 1
    output = output / math_ops.reduce_sum(output, axis, True)
    # manual computation of crossentropy
    epsilon_ = _to_tensor(epsilon(), output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon_, 1. - epsilon_)
    return -math_ops.reduce_sum(target * math_ops.log(output), axis)
  else:
    return nn.softmax_cross_entropy_with_logits_v2(labels=target, logits=output)

--------------------------------------------------------------------------------------------------------------------
def sparse_categorical_crossentropy(target, output, from_logits=False, axis=-1):
  """Categorical crossentropy with integer targets.
  Arguments:
      target: An integer tensor.
      output: A tensor resulting from a softmax
          (unless `from_logits` is True, in which
          case `output` is expected to be the logits).
      from_logits: Boolean, whether `output` is the
          result of a softmax, or is a tensor of logits.
      axis: Int specifying the channels axis. `axis=-1` corresponds to data
          format `channels_last`, and `axis=1` corresponds to data format
          `channels_first`.
  Returns:
      Output tensor.
  Raises:
      ValueError: if `axis` is neither -1 nor one of the axes of `output`.
  """
  rank = len(output.shape)
  axis = axis % rank
  if axis != rank - 1:
    permutation = list(range(axis)) + list(range(axis + 1, rank)) + [axis]
    output = array_ops.transpose(output, perm=permutation)

  # Note: nn.sparse_softmax_cross_entropy_with_logits
  # expects logits, Keras expects probabilities.
  if not from_logits:
    epsilon_ = _to_tensor(epsilon(), output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon_, 1 - epsilon_)
    output = math_ops.log(output)

  output_shape = output.shape
  targets = cast(flatten(target), 'int64')
  logits = array_ops.reshape(output, [-1, int(output_shape[-1])])
  res = nn.sparse_softmax_cross_entropy_with_logits(
      labels=targets, logits=logits)
  if len(output_shape) >= 3:
    # If our output includes timesteps or spatial dimensions we need to reshape
    return array_ops.reshape(res, array_ops.shape(output)[:-1])
  else:
    return res

To compute the cross entropy, categorical_crossentropy calls nn.softmax_cross_entropy_with_logits_v2(labels=target, logits=output), whereas sparse_categorical_crossentropy calls nn.sparse_softmax_cross_entropy_with_logits(labels=target, logits=output). The two ops compute essentially the same loss and differ only in the label format they accept: the v2 op expects labels with the same shape as the logits (i.e., one-hot rows), while the sparse op expects each label to be a single integer class index, exactly the integer-versus-categorical contrast described above.
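
To make the equivalence concrete, here is a small NumPy sketch (an illustration of the math, not code from the Keras source) computing the same cross entropy from a one-hot label and from the corresponding integer index:

import numpy as np

logits = np.array([1.0, 2.0, 0.5])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax

one_hot = np.array([0.0, 1.0, 0.0])            # categorical target for class 1
loss_categorical = -np.sum(one_hot * np.log(probs))

index = 1                                      # integer target for the same class
loss_sparse = -np.log(probs[index])

print(loss_categorical, loss_sparse)           # both print the same value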

In summary, categorical_crossentropy and sparse_categorical_crossentropy differ only in the form of the target argument; the loss they compute is essentially the same cross entropy, and both are intended for multi-class classification tasks.
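
In practice, this means the loss passed to model.compile simply has to match the label format. A minimal sketch (the model architecture and shapes here are placeholders):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(5, activation='softmax', input_shape=(10,))])

# With one-hot labels of shape (n, 5):
model.compile(optimizer='adam', loss='categorical_crossentropy')

# With integer labels of shape (n,), use instead:
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')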

2. binary_crossentropy

Binary cross entropy: as the name suggests, this loss function is likely suited to binary classification. Its documentation entry says little, so let's go straight to the source:

https://github.com/tensorflow/tensorflow/blob/r1.13/tensorflow/python/keras/backend.py
--------------------------------------------------------------------------------------------------------------------
def binary_crossentropy(target, output, from_logits=False):
  """Binary crossentropy between an output tensor and a target tensor.
  Arguments:
      target: A tensor with the same shape as `output`.
      output: A tensor.
      from_logits: Whether `output` is expected to be a logits tensor.
          By default, we consider that `output`
          encodes a probability distribution.
  Returns:
      A tensor.
  """
  # Note: nn.sigmoid_cross_entropy_with_logits
  # expects logits, Keras expects probabilities.
  if not from_logits:
    # transform back to logits
    epsilon_ = _to_tensor(epsilon(), output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon_, 1 - epsilon_)
    output = math_ops.log(output / (1 - output))
  return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)

The source shows that the computation relies on nn.sigmoid_cross_entropy_with_logits, which should be familiar to TensorFlow users. Because the sigmoid treats each output unit as an independent probability, this loss works for simple binary classification as well as for multi-label tasks. It is widely applicable, and when the data is reasonably behaved (e.g., no severe class imbalance) it can usually be used directly.
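
For example, a multi-label model would pair a sigmoid output layer with this loss, giving one independent probability per label. A minimal sketch (layer sizes are placeholders):

from keras.models import Sequential
from keras.layers import Dense

# Three independent labels, each predicted by its own sigmoid unit:
model = Sequential([Dense(3, activation='sigmoid', input_shape=(10,))])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Targets are multi-hot vectors, e.g. [1, 0, 1] means labels 0 and 2 apply.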
