TensorFlow error: Incompatible shapes between op input and calculated input gradient

Incompatible shapes between op input and calculated input gradient. Forward operation: softmax_cross_entropy_with_logits_sg_12. Input index: 0. Original input shape: (16, 1). Calculated input gradient shape: (16, 16)

This means the shapes of the op's input and its calculated gradient do not match. The following strategy can fix it; work through these steps:

Step 1: Change the shapes of the input and output as shown below:

X = tf.placeholder(tf.int32, shape=[None, 400])
Y = tf.placeholder(tf.float32, shape=[None, 1])

Why None? Because it gives you the freedom to feed a batch of any size. This is preferred because during training you want to use mini-batches, but at prediction or inference time you will generally feed a single example. Marking the dimension as None takes care of both cases.
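As a minimal sketch of why this works (assuming the TensorFlow 1.x graph API used above; the numpy inputs are only illustrative), the same placeholder accepts both a mini-batch and a single example:

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.int32, shape=[None, 400])  # None = any batch size
Y = tf.placeholder(tf.float32, shape=[None, 1])

with tf.Session() as sess:
    # Training time: feed a mini-batch of 16 examples.
    batch = np.zeros((16, 400), dtype=np.int32)
    print(sess.run(tf.shape(X), feed_dict={X: batch}))   # -> [16 400]
    # Inference time: feed a single example through the same graph.
    single = np.zeros((1, 400), dtype=np.int32)
    print(sess.run(tf.shape(X), feed_dict={X: single}))  # -> [ 1 400]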

Step 2: Change the weight initialization values

Correct your weight initialization: you are feeding in random values, and some of them may be negative. It is generally recommended to initialize with slightly positive values. (You are using ReLU as the activation, whose gradient is zero for negative inputs, so units that start out producing negative pre-activations are never updated by gradient descent; in other words, those weights become useless "dead" weights.)
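A minimal sketch of such an initialization, assuming TensorFlow 1.x (the layer width of 128 and the float input are hypothetical examples, not from the original model):

import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 400])  # float input for matmul

# Truncated-normal weights with a small stddev plus a slightly positive
# bias: a common pattern for ReLU layers, so that units start in the
# active (nonzero-gradient) regime rather than "dead".
W1 = tf.Variable(tf.truncated_normal([400, 128], stddev=0.1))
b1 = tf.Variable(tf.constant(0.1, shape=[128]))
h1 = tf.nn.relu(tf.matmul(X, W1) + b1)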

Step 3: The activation of the last layer

Logits are the raw result you obtain from W2*x + b2, and tf.nn.softmax_cross_entropy_with_logits(...) already applies the softmax activation internally. So there is no need for a SeLU (or any other activation) on the last layer.
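A sketch of the last layer under these rules (the 128 hidden units and 10 classes are assumed for illustration). Note that tf.nn.softmax_cross_entropy_with_logits expects labels with the same shape as the logits (one-hot); a labels/logits shape mismatch like the (16, 1) vs. (16, 16) in the error above is a common cause of it, and tf.nn.sparse_softmax_cross_entropy_with_logits is the variant that takes integer class indices instead:

import tensorflow as tf

h1 = tf.placeholder(tf.float32, shape=[None, 128])      # previous hidden layer
labels = tf.placeholder(tf.float32, shape=[None, 10])   # one-hot labels

W2 = tf.Variable(tf.truncated_normal([128, 10], stddev=0.1))
b2 = tf.Variable(tf.constant(0.1, shape=[10]))

logits = tf.matmul(h1, W2) + b2  # no activation here: these are the logits
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))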

Alternatively, change the first dimension of the shape to -1, as in the following example:

You must change shape=(100, 16, 16, 1) to shape=(-1, 16, 16, 1). The -1 for the batch size specifies that this dimension should be computed dynamically based on the number of values in inputs.
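A minimal sketch, assuming the shape in question is passed to tf.reshape (the tensor name inputs and the flattened size of 256 = 16 * 16 * 1 are illustrative):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, 256])  # 256 = 16 * 16 * 1

# -1 tells reshape to infer the batch dimension from however many
# values are actually fed in, instead of hard-coding it to 100.
images = tf.reshape(inputs, shape=(-1, 16, 16, 1))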
