Caffe Loss Layers

1. Multinomial Logistic Loss - computes the multinomial logistic loss for a one-of-many classification task, taking predicted probabilities directly as input.
2. Infogain Loss - a generalization of MultinomialLogisticLossLayer.
3. Softmax with Loss - computes the multinomial logistic loss of the softmax of its inputs. It’s conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.


4. Sum-of-Squares / Euclidean - computes the sum of squares of differences of its two inputs, $\frac{1}{2N}\sum_{i=1}^{N}\|x_i^1 - x_i^2\|_2^2$.
5. Hinge / Margin - The hinge loss layer computes a one-vs-all hinge loss (L1) or squared hinge loss (L2).
6. Sigmoid Cross-Entropy Loss - computes the cross-entropy (logistic) loss, often used for predicting targets interpreted as probabilities.


7. Accuracy / Top-k layer - scores the output as an accuracy with respect to the target; it is not actually a loss and has no backward step.
8. Contrastive Loss - computes the contrastive loss over pairs of vectors, commonly used for training siamese networks.
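Items 1 and 2 can be illustrated together: the infogain loss weights each class's log-probability by a row of an infogain matrix H, and with H set to the identity it reduces to the multinomial logistic loss. A minimal NumPy sketch (the function name and array shapes are my own choices, not Caffe's API):

```python
import numpy as np

def infogain_loss(probs, labels, H):
    """Infogain loss: -1/N * sum_n sum_k H[l_n, k] * log(p_{n,k}).

    `probs` is an (N, K) array of predicted probabilities, `labels` a
    length-N vector of true class indices, and H a (K, K) infogain
    matrix. With H = identity this reduces to the multinomial logistic
    loss -1/N * sum_n log(p_{n, l_n}).
    """
    n = probs.shape[0]
    rows = H[labels]                      # (N, K): per-sample class weights
    return -np.sum(rows * np.log(probs)) / n
```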
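The Euclidean loss formula in item 4 maps directly to a few lines of NumPy (the function name and (N, D) layout are my assumptions):

```python
import numpy as np

def euclidean_loss(x1, x2):
    """Sum-of-squares loss: 1/(2N) * sum_i ||x1_i - x2_i||^2.

    `x1` and `x2` are (N, D) arrays holding N predicted and N target
    vectors respectively.
    """
    n = x1.shape[0]
    return np.sum((x1 - x2) ** 2) / (2.0 * n)
```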
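For item 5, a sketch of the one-vs-all convention, assuming the form max(0, 1 - sign * score) where the sign is +1 for the true class and -1 otherwise, with the L2 variant squaring each margin (function name mine):

```python
import numpy as np

def hinge_loss(scores, label, norm="L1"):
    """One-vs-all hinge loss over raw class scores for one sample.

    Each class contributes max(0, 1 - sign_k * score_k), where sign_k
    is +1 for the true class and -1 for every other class. The L2
    variant squares each margin before summing.
    """
    signs = -np.ones_like(scores)
    signs[label] = 1.0
    margins = np.maximum(0.0, 1.0 - signs * scores)
    if norm == "L2":
        margins = margins ** 2
    return np.sum(margins)
```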
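Item 6 also benefits from a stability rearrangement: instead of computing sigmoid(x) and then log, the cross-entropy can be written so that no large positive number is ever exponentiated. A sketch (function name mine):

```python
import numpy as np

def sigmoid_cross_entropy(logits, targets):
    """Cross-entropy between sigmoid(logits) and targets in [0, 1].

    Uses the rearranged form max(x, 0) - x*t + log(1 + exp(-|x|)),
    which is algebraically equal to -t*log(s(x)) - (1-t)*log(1-s(x))
    but never overflows, since exp only sees non-positive arguments.
    """
    x, t = logits, targets
    return np.sum(np.maximum(x, 0) - x * t + np.log1p(np.exp(-np.abs(x))))
```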
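Item 7's point is that accuracy is a metric, not a loss: it counts hits and has no gradient. A minimal sketch of top-k scoring (names and shapes are my assumptions):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k best scores.

    Purely an evaluation metric: it is not differentiable, which is why
    an accuracy layer has no backward step.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]   # indices of k highest scores
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))
```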
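Finally, for item 8, a sketch of one common contrastive-loss formulation: similar pairs (y = 1) are penalized by their squared distance, dissimilar pairs (y = 0) only when closer than a margin. The function name and exact form are my assumptions:

```python
import numpy as np

def contrastive_loss(x1, x2, y, margin=1.0):
    """Contrastive loss over N pairs of D-dimensional vectors.

    1/(2N) * sum_n [ y_n * d_n^2 + (1 - y_n) * max(margin - d_n, 0)^2 ],
    where d_n is the Euclidean distance between the n-th pair and
    y_n is 1 for similar pairs, 0 for dissimilar ones.
    """
    d = np.linalg.norm(x1 - x2, axis=1)          # per-pair distances
    n = x1.shape[0]
    loss = y * d ** 2 + (1 - y) * np.maximum(margin - d, 0.0) ** 2
    return np.sum(loss) / (2.0 * n)
```

Minimizing this pulls similar pairs together and pushes dissimilar pairs at least `margin` apart, which is the training signal a siamese network needs.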
