Comparing TensorFlow and PyTorch

PyTorch pitfalls

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2, 3, 13, 13, 85]], which is output 0 of AsStridedBackward, is at version 6; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
It actually tells me "Good luck!"????
I tried every fix I could find, removing in-place ops, rewriting every +=, and none of it helped. I read the source: the failing op is just a simple element-wise multiply. I was speechless. Even stranger, the same code runs on machine A but crashes on machine B, one without a GPU and one with (my guess is the two machines were on different PyTorch versions, since autograd's in-place checks have gotten stricter over releases).
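For what it's worth, the error is easy to reproduce in a few lines (a minimal sketch, not the author's actual model). Any op that saves its output for the backward pass, such as sigmoid, triggers it when that saved tensor is later modified in place, because the tensor's version counter no longer matches what autograd recorded:

```python
import torch

# Reproduce the "modified by an inplace operation" error.
# sigmoid() saves its output for backward; `y += 1` then bumps that
# tensor's version counter, so backward() detects the mismatch.
x = torch.ones(3, requires_grad=True)
y = torch.sigmoid(x)
y += 1                       # in-place edit of a tensor saved for backward
try:
    y.sum().backward()
except RuntimeError as e:
    print("caught:", type(e).__name__)   # prints: caught: RuntimeError

# The fix: make the op out-of-place so the saved output is untouched.
x2 = torch.ones(3, requires_grad=True)
y2 = torch.sigmoid(x2)
y2 = y2 + 1                  # allocates a new tensor instead
y2.sum().backward()          # now succeeds
```

The out-of-place version allocates a new tensor, so the output that sigmoid stashed for its gradient formula is never touched.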

TensorFlow pitfalls

TensorFlow's low-level API is unlikely to be silently wrong, but it is painful to write: even batch normalization has to be implemented by hand, and readability is terrible. With Keras, anything involving multiple GPUs feels messy to me. For example, data-parallel multi-GPU:
with tf.device('/cpu:0'):
    model = build_model()  # build_model() is a placeholder for your model-construction code
parallel_model = multi_gpu_model(model, gpus=2)  # keras.utils.multi_gpu_model (TF1-era Keras)
On some machines this throws an error, but if you change it to:
with tf.device('/cpu:0'):
    model.predict()
the error goes away. I was completely baffled.
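On current TensorFlow the whole multi_gpu_model dance is gone. A sketch of the TF2 replacement (model shape is an arbitrary example): tf.distribute.MirroredStrategy handles replica placement itself, so there is no manual with tf.device(...) to get wrong, and it falls back to CPU when no GPU is present, which sidesteps the "works on one machine, fails on another" class of surprise:

```python
import tensorflow as tf

# MirroredStrategy mirrors the model's variables across all visible
# GPUs (or just CPU:0 if there are none) and averages gradients.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Any model built inside scope() gets replicated automatically.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

After this, model.fit() trains data-parallel across whatever devices the strategy found, with no separate "parallel model" object to keep in sync.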
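To make the earlier "you even have to write batch norm yourself" complaint concrete, here is roughly what a hand-rolled version looks like in low-level TF (a sketch using tf.nn.moments and tf.nn.batch_normalization; in practice tf.keras.layers.BatchNormalization does all of this for you):

```python
import tensorflow as tf

def batch_norm(x, gamma, beta, eps=1e-5):
    # Compute per-feature mean/variance over the batch axis,
    # then normalize, scale by gamma, and shift by beta.
    mean, var = tf.nn.moments(x, axes=[0])
    return tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)

x = tf.random.normal([8, 4])
y = batch_norm(x, gamma=tf.ones([4]), beta=tf.zeros([4]))
```

And this still omits the running mean/variance bookkeeping needed at inference time, which is exactly why writing it by hand is error-prone.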
