PyTorch: Bug Log and Fixes

BUG 1

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp line=844 error=11 : invalid argument

BUG 2

ValueError: Expected more than 1 value per channel when training, got input size [1, 512, 1, 1]

This error means the batch size cannot be set to 1 when using BatchNorm. With a single sample, in the computation y = (x - mean(x)) / (std(x) + eps) we get x = mean(x), so the output becomes 0. Note that this situation arises when the feature map is 1×1; only then can x = mean(x) occur.
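A minimal reproduction of the error above (the layer and shapes are illustrative, matching the size in the message): a BatchNorm layer applied to a 1×1 feature map with batch size 1 sees exactly one value per channel, so it cannot compute batch statistics in training mode.

```python
import torch
import torch.nn as nn

# Batch size 1 and a 1x1 spatial map: only one value per channel.
bn = nn.BatchNorm2d(512)
x = torch.randn(1, 512, 1, 1)

try:
    bn(x)  # module is in training mode by default -> raises ValueError
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."
```

With a larger batch (or a larger spatial map) the same layer runs fine, because each channel then has more than one value to average over.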

Most likely you have a nn.BatchNorm layer somewhere in your model, which expects more than 1 value to calculate the running mean and std of the current batch.
In case you want to validate your data, call model.eval() before feeding the data, as this will change the behavior of the BatchNorm layer to use the running estimates instead of calculating them for the current batch.
If you want to train your model and can’t use a bigger batch size, you could switch e.g. to InstanceNorm.
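The two workarounds above can be sketched as follows (a hypothetical snippet, not from the original post). Note that on a 1×1 feature map InstanceNorm can hit the same single-value problem, so GroupNorm is used here as the batch-independent alternative.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(512)
x = torch.randn(1, 512, 1, 1)

# Workaround 1: for validation/inference, switch to eval mode so
# BatchNorm uses its running estimates instead of batch statistics.
bn.eval()
out = bn(x)  # now works with batch size 1

# Workaround 2: for training with batch size 1, use a normalization
# layer that does not depend on the batch dimension. GroupNorm
# normalizes over channel groups, so it works even on a 1x1 map.
gn = nn.GroupNorm(num_groups=32, num_channels=512)
out = gn(x)  # works in training mode as well
```

If you do control the data loader, simply raising the batch size above 1 (or dropping the last incomplete batch with `drop_last=True`) also avoids the error.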


Reference: https://blog.csdn.net/u011276025/article/details/73826562
